Sep 12 17:45:55.849961 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 15:34:39 -00 2025
Sep 12 17:45:55.849984 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=271a44cc8ea1639cfb6fdf777202a5f025fda0b3ce9b293cc4e0e7047aecb858
Sep 12 17:45:55.849995 kernel: BIOS-provided physical RAM map:
Sep 12 17:45:55.850001 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Sep 12 17:45:55.850008 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Sep 12 17:45:55.850014 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Sep 12 17:45:55.850022 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Sep 12 17:45:55.850029 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Sep 12 17:45:55.850037 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Sep 12 17:45:55.850044 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Sep 12 17:45:55.850051 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Sep 12 17:45:55.850061 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Sep 12 17:45:55.850070 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Sep 12 17:45:55.850078 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Sep 12 17:45:55.850088 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Sep 12 17:45:55.850098 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Sep 12 17:45:55.850113 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 12 17:45:55.850122 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 17:45:55.850131 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 12 17:45:55.850140 kernel: NX (Execute Disable) protection: active
Sep 12 17:45:55.850149 kernel: APIC: Static calls initialized
Sep 12 17:45:55.850158 kernel: e820: update [mem 0x9a13f018-0x9a148c57] usable ==> usable
Sep 12 17:45:55.850167 kernel: e820: update [mem 0x9a102018-0x9a13ee57] usable ==> usable
Sep 12 17:45:55.850176 kernel: extended physical RAM map:
Sep 12 17:45:55.850186 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Sep 12 17:45:55.850193 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Sep 12 17:45:55.850200 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Sep 12 17:45:55.850211 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Sep 12 17:45:55.850218 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a102017] usable
Sep 12 17:45:55.850225 kernel: reserve setup_data: [mem 0x000000009a102018-0x000000009a13ee57] usable
Sep 12 17:45:55.850232 kernel: reserve setup_data: [mem 0x000000009a13ee58-0x000000009a13f017] usable
Sep 12 17:45:55.850242 kernel: reserve setup_data: [mem 0x000000009a13f018-0x000000009a148c57] usable
Sep 12 17:45:55.850251 kernel: reserve setup_data: [mem 0x000000009a148c58-0x000000009b8ecfff] usable
Sep 12 17:45:55.850260 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Sep 12 17:45:55.850269 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Sep 12 17:45:55.850279 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Sep 12 17:45:55.850287 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Sep 12 17:45:55.850294 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Sep 12 17:45:55.850304 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Sep 12 17:45:55.850311 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Sep 12 17:45:55.850322 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Sep 12 17:45:55.850329 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 12 17:45:55.850336 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 17:45:55.850343 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 12 17:45:55.850353 kernel: efi: EFI v2.7 by EDK II
Sep 12 17:45:55.850360 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
Sep 12 17:45:55.850367 kernel: random: crng init done
Sep 12 17:45:55.850375 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Sep 12 17:45:55.850382 kernel: secureboot: Secure boot enabled
Sep 12 17:45:55.850389 kernel: SMBIOS 2.8 present.
Sep 12 17:45:55.850396 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 12 17:45:55.850404 kernel: DMI: Memory slots populated: 1/1
Sep 12 17:45:55.850411 kernel: Hypervisor detected: KVM
Sep 12 17:45:55.850418 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 17:45:55.850425 kernel: kvm-clock: using sched offset of 6493310863 cycles
Sep 12 17:45:55.850435 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 17:45:55.850443 kernel: tsc: Detected 2794.750 MHz processor
Sep 12 17:45:55.850451 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 17:45:55.850458 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 17:45:55.850465 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Sep 12 17:45:55.850473 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 12 17:45:55.850484 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 17:45:55.850494 kernel: Using GB pages for direct mapping
Sep 12 17:45:55.850503 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:45:55.850513 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Sep 12 17:45:55.850520 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 12 17:45:55.850528 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:45:55.850536 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:45:55.850543 kernel: ACPI: FACS 0x000000009BBDD000 000040
Sep 12 17:45:55.850551 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:45:55.850558 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:45:55.850567 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:45:55.850577 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:45:55.850589 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 12 17:45:55.850599 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Sep 12 17:45:55.850608 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Sep 12 17:45:55.850617 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Sep 12 17:45:55.850627 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Sep 12 17:45:55.850636 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Sep 12 17:45:55.850643 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Sep 12 17:45:55.850650 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Sep 12 17:45:55.850658 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Sep 12 17:45:55.850668 kernel: No NUMA configuration found
Sep 12 17:45:55.850675 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Sep 12 17:45:55.850683 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Sep 12 17:45:55.850690 kernel: Zone ranges:
Sep 12 17:45:55.850698 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 17:45:55.850706 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Sep 12 17:45:55.850715 kernel: Normal empty
Sep 12 17:45:55.850723 kernel: Device empty
Sep 12 17:45:55.850732 kernel: Movable zone start for each node
Sep 12 17:45:55.850741 kernel: Early memory node ranges
Sep 12 17:45:55.850749 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Sep 12 17:45:55.850756 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Sep 12 17:45:55.850763 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Sep 12 17:45:55.850771 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Sep 12 17:45:55.850778 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Sep 12 17:45:55.850785 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Sep 12 17:45:55.850793 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:45:55.850800 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Sep 12 17:45:55.850808 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 12 17:45:55.850902 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 12 17:45:55.850910 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 12 17:45:55.850918 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Sep 12 17:45:55.850925 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 17:45:55.850932 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 17:45:55.850940 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 17:45:55.850947 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 17:45:55.850955 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 17:45:55.850965 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 17:45:55.850975 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 17:45:55.850983 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 17:45:55.850990 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 17:45:55.850998 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 17:45:55.851005 kernel: TSC deadline timer available
Sep 12 17:45:55.851013 kernel: CPU topo: Max. logical packages: 1
Sep 12 17:45:55.851020 kernel: CPU topo: Max. logical dies: 1
Sep 12 17:45:55.851027 kernel: CPU topo: Max. dies per package: 1
Sep 12 17:45:55.851044 kernel: CPU topo: Max. threads per core: 1
Sep 12 17:45:55.851051 kernel: CPU topo: Num. cores per package: 4
Sep 12 17:45:55.851059 kernel: CPU topo: Num. threads per package: 4
Sep 12 17:45:55.851067 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 12 17:45:55.851079 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 17:45:55.851087 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 12 17:45:55.851095 kernel: kvm-guest: setup PV sched yield
Sep 12 17:45:55.851103 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 12 17:45:55.851113 kernel: Booting paravirtualized kernel on KVM
Sep 12 17:45:55.851121 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 17:45:55.851129 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 12 17:45:55.851137 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 12 17:45:55.851144 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 12 17:45:55.851152 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 12 17:45:55.851160 kernel: kvm-guest: PV spinlocks enabled
Sep 12 17:45:55.851168 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 17:45:55.851177 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=271a44cc8ea1639cfb6fdf777202a5f025fda0b3ce9b293cc4e0e7047aecb858
Sep 12 17:45:55.851187 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:45:55.851195 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 17:45:55.851202 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:45:55.851210 kernel: Fallback order for Node 0: 0
Sep 12 17:45:55.851218 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Sep 12 17:45:55.851226 kernel: Policy zone: DMA32
Sep 12 17:45:55.851234 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:45:55.851241 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 17:45:55.851251 kernel: ftrace: allocating 40125 entries in 157 pages
Sep 12 17:45:55.851259 kernel: ftrace: allocated 157 pages with 5 groups
Sep 12 17:45:55.851267 kernel: Dynamic Preempt: voluntary
Sep 12 17:45:55.851275 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:45:55.851283 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:45:55.851291 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 17:45:55.851299 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:45:55.851307 kernel: Rude variant of Tasks RCU enabled.
Sep 12 17:45:55.851315 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:45:55.851322 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:45:55.851332 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 17:45:55.851340 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:45:55.851348 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:45:55.851358 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:45:55.851366 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 12 17:45:55.851374 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:45:55.851381 kernel: Console: colour dummy device 80x25
Sep 12 17:45:55.851389 kernel: printk: legacy console [ttyS0] enabled
Sep 12 17:45:55.851397 kernel: ACPI: Core revision 20240827
Sep 12 17:45:55.851407 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 12 17:45:55.851414 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 17:45:55.851422 kernel: x2apic enabled
Sep 12 17:45:55.851430 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 17:45:55.851438 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 12 17:45:55.851446 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 12 17:45:55.851454 kernel: kvm-guest: setup PV IPIs
Sep 12 17:45:55.851462 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 17:45:55.851470 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Sep 12 17:45:55.851480 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 12 17:45:55.851488 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 12 17:45:55.851495 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 12 17:45:55.851503 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 12 17:45:55.851513 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 17:45:55.851521 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 17:45:55.851529 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 17:45:55.851537 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 12 17:45:55.851547 kernel: active return thunk: retbleed_return_thunk
Sep 12 17:45:55.851555 kernel: RETBleed: Mitigation: untrained return thunk
Sep 12 17:45:55.851563 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 17:45:55.851571 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 17:45:55.851579 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 12 17:45:55.851587 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 12 17:45:55.851595 kernel: active return thunk: srso_return_thunk
Sep 12 17:45:55.851603 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 12 17:45:55.851611 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 17:45:55.851621 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 17:45:55.851628 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 17:45:55.851636 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 17:45:55.851644 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 12 17:45:55.851652 kernel: Freeing SMP alternatives memory: 32K
Sep 12 17:45:55.851660 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:45:55.851680 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 12 17:45:55.851688 kernel: landlock: Up and running.
Sep 12 17:45:55.851696 kernel: SELinux: Initializing.
Sep 12 17:45:55.851716 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:45:55.851733 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:45:55.851751 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 12 17:45:55.851768 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 12 17:45:55.851785 kernel: ... version: 0
Sep 12 17:45:55.851795 kernel: ... bit width: 48
Sep 12 17:45:55.851803 kernel: ... generic registers: 6
Sep 12 17:45:55.851825 kernel: ... value mask: 0000ffffffffffff
Sep 12 17:45:55.851833 kernel: ... max period: 00007fffffffffff
Sep 12 17:45:55.851874 kernel: ... fixed-purpose events: 0
Sep 12 17:45:55.851895 kernel: ... event mask: 000000000000003f
Sep 12 17:45:55.851903 kernel: signal: max sigframe size: 1776
Sep 12 17:45:55.851911 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:45:55.851919 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:45:55.851927 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 12 17:45:55.851934 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:45:55.851942 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 17:45:55.851950 kernel: .... node #0, CPUs: #1 #2 #3
Sep 12 17:45:55.851961 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 17:45:55.851969 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 12 17:45:55.851977 kernel: Memory: 2409224K/2552216K available (14336K kernel code, 2432K rwdata, 9960K rodata, 54040K init, 2924K bss, 137064K reserved, 0K cma-reserved)
Sep 12 17:45:55.851985 kernel: devtmpfs: initialized
Sep 12 17:45:55.851992 kernel: x86/mm: Memory block size: 128MB
Sep 12 17:45:55.852000 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Sep 12 17:45:55.852008 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Sep 12 17:45:55.852016 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:45:55.852024 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 17:45:55.852034 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:45:55.852042 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:45:55.852049 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:45:55.852057 kernel: audit: type=2000 audit(1757699152.891:1): state=initialized audit_enabled=0 res=1
Sep 12 17:45:55.852065 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:45:55.852073 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 17:45:55.852081 kernel: cpuidle: using governor menu
Sep 12 17:45:55.852088 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:45:55.852098 kernel: dca service started, version 1.12.1
Sep 12 17:45:55.852106 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 12 17:45:55.852114 kernel: PCI: Using configuration type 1 for base access
Sep 12 17:45:55.852122 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 17:45:55.852129 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:45:55.852137 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:45:55.852145 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:45:55.852153 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:45:55.852160 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:45:55.852170 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:45:55.852178 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:45:55.852186 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:45:55.852193 kernel: ACPI: Interpreter enabled
Sep 12 17:45:55.852201 kernel: ACPI: PM: (supports S0 S5)
Sep 12 17:45:55.852208 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 17:45:55.852216 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 17:45:55.852224 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 17:45:55.852232 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 12 17:45:55.852242 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 17:45:55.852440 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:45:55.852567 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 12 17:45:55.852688 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 12 17:45:55.852698 kernel: PCI host bridge to bus 0000:00
Sep 12 17:45:55.852896 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 17:45:55.853029 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 17:45:55.853213 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 17:45:55.853352 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 12 17:45:55.853464 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 12 17:45:55.853573 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 12 17:45:55.853683 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 17:45:55.853876 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 12 17:45:55.854016 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 12 17:45:55.854137 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 12 17:45:55.854256 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 12 17:45:55.854375 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 12 17:45:55.854562 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 17:45:55.854701 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 12 17:45:55.854858 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 12 17:45:55.854996 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 12 17:45:55.855118 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 12 17:45:55.855293 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 12 17:45:55.855454 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 12 17:45:55.855589 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 12 17:45:55.855711 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 12 17:45:55.856009 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 12 17:45:55.856151 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 12 17:45:55.856271 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 12 17:45:55.856394 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 12 17:45:55.856513 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 12 17:45:55.856654 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 12 17:45:55.856780 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 12 17:45:55.856944 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 12 17:45:55.857076 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 12 17:45:55.857195 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 12 17:45:55.857335 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 12 17:45:55.857456 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 12 17:45:55.857467 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 17:45:55.857475 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 17:45:55.857483 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 17:45:55.857495 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 17:45:55.857502 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 12 17:45:55.857510 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 12 17:45:55.857518 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 12 17:45:55.857526 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 12 17:45:55.857534 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 12 17:45:55.857541 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 12 17:45:55.857549 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 12 17:45:55.857557 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 12 17:45:55.857566 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 12 17:45:55.857574 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 12 17:45:55.857582 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 12 17:45:55.857589 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 12 17:45:55.857597 kernel: iommu: Default domain type: Translated
Sep 12 17:45:55.857605 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 17:45:55.857612 kernel: efivars: Registered efivars operations
Sep 12 17:45:55.857620 kernel: PCI: Using ACPI for IRQ routing
Sep 12 17:45:55.857628 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 17:45:55.857638 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Sep 12 17:45:55.857646 kernel: e820: reserve RAM buffer [mem 0x9a102018-0x9bffffff]
Sep 12 17:45:55.857653 kernel: e820: reserve RAM buffer [mem 0x9a13f018-0x9bffffff]
Sep 12 17:45:55.857661 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Sep 12 17:45:55.857668 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Sep 12 17:45:55.857789 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 12 17:45:55.857964 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 12 17:45:55.858100 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 17:45:55.858116 kernel: vgaarb: loaded
Sep 12 17:45:55.858124 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 12 17:45:55.858132 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 12 17:45:55.858140 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 17:45:55.858148 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:45:55.858156 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:45:55.858163 kernel: pnp: PnP ACPI init
Sep 12 17:45:55.858308 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 12 17:45:55.858319 kernel: pnp: PnP ACPI: found 6 devices
Sep 12 17:45:55.858330 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 17:45:55.858338 kernel: NET: Registered PF_INET protocol family
Sep 12 17:45:55.858346 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:45:55.858354 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 17:45:55.858362 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:45:55.858370 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:45:55.858378 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 17:45:55.858386 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 17:45:55.858395 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:45:55.858403 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:45:55.858411 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:45:55.858419 kernel: NET: Registered PF_XDP protocol family
Sep 12 17:45:55.858571 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 12 17:45:55.858703 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 12 17:45:55.858856 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 17:45:55.859021 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 17:45:55.859139 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 17:45:55.859256 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 12 17:45:55.859395 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 12 17:45:55.859506 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 12 17:45:55.859517 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:45:55.859525 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Sep 12 17:45:55.859533 kernel: Initialise system trusted keyrings
Sep 12 17:45:55.859541 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 17:45:55.859549 kernel: Key type asymmetric registered
Sep 12 17:45:55.859561 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:45:55.859581 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 17:45:55.859592 kernel: io scheduler mq-deadline registered
Sep 12 17:45:55.859600 kernel: io scheduler kyber registered
Sep 12 17:45:55.859608 kernel: io scheduler bfq registered
Sep 12 17:45:55.859616 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 17:45:55.859625 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 12 17:45:55.859633 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 12 17:45:55.859641 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 12 17:45:55.859651 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:45:55.859660 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 17:45:55.859668 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 17:45:55.859676 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 17:45:55.859684 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 17:45:55.859876 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 12 17:45:55.859896 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 17:45:55.860039 kernel: rtc_cmos 00:04: registered as rtc0
Sep 12 17:45:55.860174 kernel: rtc_cmos 00:04: setting system clock to 2025-09-12T17:45:55 UTC (1757699155)
Sep 12 17:45:55.860289 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 12 17:45:55.860299 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 12 17:45:55.860307 kernel: efifb: probing for efifb
Sep 12 17:45:55.860316 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 12 17:45:55.860324 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 12 17:45:55.860332 kernel: efifb: scrolling: redraw
Sep 12 17:45:55.860340 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 12 17:45:55.860349 kernel: Console: switching to colour frame buffer device 160x50
Sep 12 17:45:55.860361 kernel: fb0: EFI VGA frame buffer device
Sep 12 17:45:55.860372 kernel: pstore: Using crash dump compression: deflate
Sep 12 17:45:55.860380 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 12 17:45:55.860388 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:45:55.860396 kernel: Segment Routing with IPv6
Sep 12 17:45:55.860406 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:45:55.860415 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:45:55.860423 kernel: Key type dns_resolver registered
Sep 12 17:45:55.860433 kernel: IPI shorthand broadcast: enabled
Sep 12 17:45:55.860442 kernel: sched_clock: Marking stable (3499004817, 141092308)->(3655406225, -15309100)
Sep 12 17:45:55.860450 kernel: registered taskstats version 1
Sep 12 17:45:55.860458 kernel: Loading compiled-in X.509 certificates
Sep 12 17:45:55.860467 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: f1ae8d6e9bfae84d90f4136cf098b0465b2a5bd7'
Sep 12 17:45:55.860475 kernel: Demotion targets for Node 0: null
Sep 12 17:45:55.860483 kernel: Key type .fscrypt registered
Sep 12 17:45:55.860493 kernel: Key type fscrypt-provisioning registered
Sep 12 17:45:55.860502 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:45:55.860510 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:45:55.860518 kernel: ima: No architecture policies found
Sep 12 17:45:55.860527 kernel: clk: Disabling unused clocks
Sep 12 17:45:55.860535 kernel: Warning: unable to open an initial console.
Sep 12 17:45:55.860543 kernel: Freeing unused kernel image (initmem) memory: 54040K
Sep 12 17:45:55.860552 kernel: Write protecting the kernel read-only data: 24576k
Sep 12 17:45:55.860562 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Sep 12 17:45:55.860570 kernel: Run /init as init process
Sep 12 17:45:55.860578 kernel: with arguments:
Sep 12 17:45:55.860587 kernel: /init
Sep 12 17:45:55.860595 kernel: with environment:
Sep 12 17:45:55.860603 kernel: HOME=/
Sep 12 17:45:55.860611 kernel: TERM=linux
Sep 12 17:45:55.860619 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:45:55.860632 systemd[1]: Successfully made /usr/ read-only.
Sep 12 17:45:55.860646 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 17:45:55.860656 systemd[1]: Detected virtualization kvm.
Sep 12 17:45:55.860664 systemd[1]: Detected architecture x86-64.
Sep 12 17:45:55.860673 systemd[1]: Running in initrd.
Sep 12 17:45:55.860681 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:45:55.860690 systemd[1]: Hostname set to .
Sep 12 17:45:55.860699 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:45:55.860709 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:45:55.860718 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:45:55.860727 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:45:55.860736 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:45:55.860745 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:45:55.860754 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 17:45:55.860764 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 17:45:55.860777 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 17:45:55.860787 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 17:45:55.860795 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:45:55.860804 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:45:55.860828 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:45:55.860836 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:45:55.860845 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:45:55.860861 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:45:55.860873 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:45:55.860882 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:45:55.860890 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 17:45:55.860899 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 12 17:45:55.860908 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:45:55.860916 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:45:55.860925 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:45:55.860933 systemd[1]: Reached target sockets.target - Socket Units. 
Sep 12 17:45:55.860942 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 17:45:55.860953 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:45:55.860961 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 17:45:55.860971 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 12 17:45:55.860979 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 17:45:55.860990 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:45:55.860998 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:45:55.861007 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:45:55.861016 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 17:45:55.861027 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:45:55.861036 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 17:45:55.861069 systemd-journald[218]: Collecting audit messages is disabled. Sep 12 17:45:55.861092 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:45:55.861101 systemd-journald[218]: Journal started Sep 12 17:45:55.861122 systemd-journald[218]: Runtime Journal (/run/log/journal/bfcd35b2196f4afda243306c6ed9c982) is 6M, max 48.2M, 42.2M free. Sep 12 17:45:55.853296 systemd-modules-load[221]: Inserted module 'overlay' Sep 12 17:45:55.865465 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:45:55.866938 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:45:55.867684 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Sep 12 17:45:55.875126 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:45:55.884844 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 17:45:55.886952 systemd-modules-load[221]: Inserted module 'br_netfilter' Sep 12 17:45:55.887857 kernel: Bridge firewalling registered Sep 12 17:45:55.888161 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:45:55.890483 systemd-tmpfiles[231]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 12 17:45:55.892359 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:45:55.895502 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:45:55.898010 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:45:55.904971 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:45:55.907584 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:45:55.920483 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:45:55.923594 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:45:55.943008 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:45:55.947926 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Sep 12 17:45:55.987288 dracut-cmdline[264]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=271a44cc8ea1639cfb6fdf777202a5f025fda0b3ce9b293cc4e0e7047aecb858 Sep 12 17:45:55.990030 systemd-resolved[253]: Positive Trust Anchors: Sep 12 17:45:55.990077 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:45:55.990124 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:45:55.995550 systemd-resolved[253]: Defaulting to hostname 'linux'. Sep 12 17:45:55.999034 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:45:56.003382 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:45:56.136787 kernel: SCSI subsystem initialized Sep 12 17:45:56.148854 kernel: Loading iSCSI transport class v2.0-870. Sep 12 17:45:56.160854 kernel: iscsi: registered transport (tcp) Sep 12 17:45:56.183853 kernel: iscsi: registered transport (qla4xxx) Sep 12 17:45:56.183879 kernel: QLogic iSCSI HBA Driver Sep 12 17:45:56.209745 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Sep 12 17:45:56.236097 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:45:56.240132 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:45:56.304459 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 17:45:56.307970 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 17:45:56.366896 kernel: raid6: avx2x4 gen() 30155 MB/s Sep 12 17:45:56.383861 kernel: raid6: avx2x2 gen() 30968 MB/s Sep 12 17:45:56.400932 kernel: raid6: avx2x1 gen() 25092 MB/s Sep 12 17:45:56.400984 kernel: raid6: using algorithm avx2x2 gen() 30968 MB/s Sep 12 17:45:56.418955 kernel: raid6: .... xor() 18257 MB/s, rmw enabled Sep 12 17:45:56.419064 kernel: raid6: using avx2x2 recovery algorithm Sep 12 17:45:56.446893 kernel: xor: automatically using best checksumming function avx Sep 12 17:45:56.688851 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 17:45:56.699722 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:45:56.702752 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:45:56.750364 systemd-udevd[473]: Using default interface naming scheme 'v255'. Sep 12 17:45:56.757402 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:45:56.759506 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 17:45:56.787108 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation Sep 12 17:45:56.820984 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:45:56.822943 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:45:56.909274 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:45:56.913324 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 12 17:45:56.959851 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 12 17:45:56.963531 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 12 17:45:56.966074 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 17:45:56.966110 kernel: GPT:9289727 != 19775487 Sep 12 17:45:56.968520 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 17:45:56.968588 kernel: GPT:9289727 != 19775487 Sep 12 17:45:56.968609 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 17:45:56.968619 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:45:56.973873 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 17:45:56.978846 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 12 17:45:56.987869 kernel: libata version 3.00 loaded. Sep 12 17:45:56.996274 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:45:56.998288 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:45:57.001940 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:45:57.006278 kernel: ahci 0000:00:1f.2: version 3.0 Sep 12 17:45:57.006494 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 12 17:45:57.006792 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:45:57.008833 kernel: AES CTR mode by8 optimization enabled Sep 12 17:45:57.010623 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Sep 12 17:45:57.024380 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 12 17:45:57.024589 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 12 17:45:57.024737 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 12 17:45:57.048074 kernel: scsi host0: ahci Sep 12 17:45:57.048313 kernel: scsi host1: ahci Sep 12 17:45:57.048635 kernel: scsi host2: ahci Sep 12 17:45:57.049352 kernel: scsi host3: ahci Sep 12 17:45:57.050215 kernel: scsi host4: ahci Sep 12 17:45:57.049277 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 17:45:57.055252 kernel: scsi host5: ahci Sep 12 17:45:57.055467 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Sep 12 17:45:57.055479 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Sep 12 17:45:57.055490 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Sep 12 17:45:57.055500 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Sep 12 17:45:57.057804 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Sep 12 17:45:57.057902 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Sep 12 17:45:57.079248 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 12 17:45:57.095673 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 12 17:45:57.105043 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 12 17:45:57.108403 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 12 17:45:57.112589 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Sep 12 17:45:57.114914 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:45:57.114999 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:45:57.118360 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:45:57.141644 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:45:57.144515 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 17:45:57.154253 disk-uuid[635]: Primary Header is updated. Sep 12 17:45:57.154253 disk-uuid[635]: Secondary Entries is updated. Sep 12 17:45:57.154253 disk-uuid[635]: Secondary Header is updated. Sep 12 17:45:57.158424 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:45:57.164878 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:45:57.171422 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:45:57.367071 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 12 17:45:57.367167 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 12 17:45:57.367179 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 12 17:45:57.368862 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 12 17:45:57.368963 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 12 17:45:57.370230 kernel: ata3.00: LPM support broken, forcing max_power Sep 12 17:45:57.370258 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 12 17:45:57.370849 kernel: ata3.00: applying bridge limits Sep 12 17:45:57.371952 kernel: ata3.00: LPM support broken, forcing max_power Sep 12 17:45:57.371976 kernel: ata3.00: configured for UDMA/100 Sep 12 17:45:57.372857 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 12 17:45:57.373865 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 12 17:45:57.428859 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 12 17:45:57.429274 
kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 17:45:57.449848 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 12 17:45:57.883122 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 17:45:57.884920 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:45:57.886542 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:45:57.887755 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:45:57.889630 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 17:45:57.918767 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:45:58.184862 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:45:58.185406 disk-uuid[639]: The operation has completed successfully. Sep 12 17:45:58.216662 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 17:45:58.216807 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 17:45:58.250053 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 17:45:58.273087 sh[669]: Success Sep 12 17:45:58.291859 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 17:45:58.291948 kernel: device-mapper: uevent: version 1.0.3 Sep 12 17:45:58.291961 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 12 17:45:58.301846 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 12 17:45:58.334005 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 17:45:58.338225 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 17:45:58.351006 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 12 17:45:58.358660 kernel: BTRFS: device fsid 74707491-1b86-4926-8bdb-c533ce2a0c32 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (681) Sep 12 17:45:58.358690 kernel: BTRFS info (device dm-0): first mount of filesystem 74707491-1b86-4926-8bdb-c533ce2a0c32 Sep 12 17:45:58.358701 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:45:58.364156 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 17:45:58.364189 kernel: BTRFS info (device dm-0): enabling free space tree Sep 12 17:45:58.365505 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 17:45:58.367728 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 12 17:45:58.369946 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 17:45:58.372642 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 17:45:58.374706 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 17:45:58.401839 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (714) Sep 12 17:45:58.401904 kernel: BTRFS info (device vda6): first mount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:45:58.401916 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:45:58.405881 kernel: BTRFS info (device vda6): turning on async discard Sep 12 17:45:58.405917 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 17:45:58.412887 kernel: BTRFS info (device vda6): last unmount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:45:58.413292 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 17:45:58.415212 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 12 17:45:58.726648 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:45:58.731672 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:45:58.755632 ignition[757]: Ignition 2.21.0 Sep 12 17:45:58.755651 ignition[757]: Stage: fetch-offline Sep 12 17:45:58.755693 ignition[757]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:45:58.755704 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:45:58.755883 ignition[757]: parsed url from cmdline: "" Sep 12 17:45:58.755887 ignition[757]: no config URL provided Sep 12 17:45:58.755898 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:45:58.755913 ignition[757]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:45:58.755957 ignition[757]: op(1): [started] loading QEMU firmware config module Sep 12 17:45:58.755963 ignition[757]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 12 17:45:58.768275 ignition[757]: op(1): [finished] loading QEMU firmware config module Sep 12 17:45:58.781728 systemd-networkd[856]: lo: Link UP Sep 12 17:45:58.781739 systemd-networkd[856]: lo: Gained carrier Sep 12 17:45:58.783517 systemd-networkd[856]: Enumeration completed Sep 12 17:45:58.783608 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:45:58.784004 systemd-networkd[856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:45:58.784009 systemd-networkd[856]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:45:58.785065 systemd-networkd[856]: eth0: Link UP Sep 12 17:45:58.785247 systemd-networkd[856]: eth0: Gained carrier Sep 12 17:45:58.785256 systemd-networkd[856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 12 17:45:58.786548 systemd[1]: Reached target network.target - Network. Sep 12 17:45:58.807879 systemd-networkd[856]: eth0: DHCPv4 address 10.0.0.93/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 17:45:58.820659 ignition[757]: parsing config with SHA512: 4543b0e6b360ef947b8cf93ca62aeb9c1c8dfe8160b84d8b7c502e84f1655a4b7194dd3b604d87e81a6693fb03addc312baff6d16718ff5f1829dfd5b94c5406 Sep 12 17:45:58.824469 unknown[757]: fetched base config from "system" Sep 12 17:45:58.825332 systemd-resolved[253]: Detected conflict on linux IN A 10.0.0.93 Sep 12 17:45:58.825343 systemd-resolved[253]: Hostname conflict, changing published hostname from 'linux' to 'linux4'. Sep 12 17:45:58.825909 ignition[757]: fetch-offline: fetch-offline passed Sep 12 17:45:58.825436 unknown[757]: fetched user config from "qemu" Sep 12 17:45:58.826087 ignition[757]: Ignition finished successfully Sep 12 17:45:58.830080 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:45:58.832922 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 12 17:45:58.834915 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 17:45:58.872340 ignition[865]: Ignition 2.21.0 Sep 12 17:45:58.872360 ignition[865]: Stage: kargs Sep 12 17:45:58.872523 ignition[865]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:45:58.872544 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:45:58.874173 ignition[865]: kargs: kargs passed Sep 12 17:45:58.874276 ignition[865]: Ignition finished successfully Sep 12 17:45:58.882272 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 17:45:58.886112 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 12 17:45:58.930282 ignition[873]: Ignition 2.21.0 Sep 12 17:45:58.930295 ignition[873]: Stage: disks Sep 12 17:45:58.930420 ignition[873]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:45:58.930431 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:45:58.934965 ignition[873]: disks: disks passed Sep 12 17:45:58.935023 ignition[873]: Ignition finished successfully Sep 12 17:45:58.939156 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 17:45:58.939632 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 17:45:58.941411 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:45:58.941716 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:45:58.942205 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:45:58.942517 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:45:58.944004 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 17:45:59.048877 systemd-fsck[883]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 12 17:45:59.057849 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 17:45:59.062194 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 17:45:59.180848 kernel: EXT4-fs (vda9): mounted filesystem 26739aba-b0be-4ce3-bfbd-ca4dbcbe2426 r/w with ordered data mode. Quota mode: none. Sep 12 17:45:59.181270 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:45:59.183537 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:45:59.187244 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:45:59.189897 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 17:45:59.191916 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Sep 12 17:45:59.191975 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 17:45:59.195972 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:45:59.214660 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 17:45:59.217375 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 17:45:59.222621 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (891) Sep 12 17:45:59.222644 kernel: BTRFS info (device vda6): first mount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:45:59.222655 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:45:59.224848 kernel: BTRFS info (device vda6): turning on async discard Sep 12 17:45:59.224869 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 17:45:59.227332 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:45:59.263847 initrd-setup-root[915]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:45:59.268663 initrd-setup-root[922]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:45:59.273859 initrd-setup-root[929]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:45:59.279153 initrd-setup-root[936]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:45:59.417839 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:45:59.421741 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:45:59.424528 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 17:45:59.446133 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 17:45:59.447808 kernel: BTRFS info (device vda6): last unmount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:45:59.462411 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 12 17:45:59.486454 ignition[1005]: INFO : Ignition 2.21.0 Sep 12 17:45:59.486454 ignition[1005]: INFO : Stage: mount Sep 12 17:45:59.488281 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:45:59.488281 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:45:59.490677 ignition[1005]: INFO : mount: mount passed Sep 12 17:45:59.491449 ignition[1005]: INFO : Ignition finished successfully Sep 12 17:45:59.495132 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:45:59.498205 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:45:59.601325 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:45:59.616485 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1017) Sep 12 17:45:59.616519 kernel: BTRFS info (device vda6): first mount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:45:59.616542 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:45:59.620841 kernel: BTRFS info (device vda6): turning on async discard Sep 12 17:45:59.620869 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 17:45:59.622487 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 17:45:59.728478 ignition[1034]: INFO : Ignition 2.21.0 Sep 12 17:45:59.729646 ignition[1034]: INFO : Stage: files Sep 12 17:45:59.730438 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:45:59.730438 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:45:59.735194 ignition[1034]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:45:59.736573 ignition[1034]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:45:59.736573 ignition[1034]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:45:59.739983 ignition[1034]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:45:59.741390 ignition[1034]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:45:59.741390 ignition[1034]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:45:59.740787 unknown[1034]: wrote ssh authorized keys file for user: core Sep 12 17:45:59.745206 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 12 17:45:59.745206 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 12 17:45:59.805159 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 17:46:00.078387 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 12 17:46:00.080921 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 12 17:46:00.080921 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Sep 12 17:46:00.080921 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:46:00.080921 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:46:00.080921 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:46:00.080921 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:46:00.080921 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:46:00.080921 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:46:00.097982 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:46:00.097982 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:46:00.097982 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 17:46:00.120830 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 17:46:00.123609 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 17:46:00.123609 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 12 17:46:00.480067 systemd-networkd[856]: eth0: Gained IPv6LL
Sep 12 17:46:00.617648 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 17:46:01.681487 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 12 17:46:01.681487 ignition[1034]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 17:46:01.685793 ignition[1034]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:46:01.694383 ignition[1034]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:46:01.694383 ignition[1034]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 17:46:01.694383 ignition[1034]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 12 17:46:01.698772 ignition[1034]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 17:46:01.698772 ignition[1034]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 17:46:01.698772 ignition[1034]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 12 17:46:01.698772 ignition[1034]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 12 17:46:01.720040 ignition[1034]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 17:46:01.726756 ignition[1034]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 17:46:01.728460 ignition[1034]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 12 17:46:01.728460 ignition[1034]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:46:01.731194 ignition[1034]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:46:01.731194 ignition[1034]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:46:01.731194 ignition[1034]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:46:01.731194 ignition[1034]: INFO : files: files passed
Sep 12 17:46:01.731194 ignition[1034]: INFO : Ignition finished successfully
Sep 12 17:46:01.733489 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:46:01.736609 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:46:01.741646 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:46:01.753918 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:46:01.754079 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:46:01.759254 initrd-setup-root-after-ignition[1063]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 12 17:46:01.762385 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:46:01.764235 initrd-setup-root-after-ignition[1069]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:46:01.765901 initrd-setup-root-after-ignition[1065]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:46:01.767645 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:46:01.770247 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:46:01.773217 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:46:01.821516 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:46:01.821667 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:46:01.822404 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 17:46:01.825503 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 17:46:01.826095 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 17:46:01.829171 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 17:46:01.874636 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:46:01.878646 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 17:46:01.908588 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:46:01.909877 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:46:01.912185 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 17:46:01.913630 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 17:46:01.913767 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:46:01.915755 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 17:46:01.916283 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 17:46:01.916628 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 17:46:01.917175 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:46:01.917532 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 17:46:01.918079 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 17:46:01.918439 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 17:46:01.918823 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:46:01.919354 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 17:46:01.919712 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 17:46:01.920256 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 17:46:01.920614 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 17:46:01.920732 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:46:01.921568 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:46:01.922125 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:46:01.922450 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 17:46:01.922635 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:46:01.923189 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 17:46:01.923323 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:46:01.949475 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 17:46:01.949592 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:46:01.950382 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 17:46:01.950649 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 17:46:01.958915 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:46:01.961567 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 17:46:01.962252 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 17:46:01.962602 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 17:46:01.962705 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:46:01.965605 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 17:46:01.965690 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:46:01.967455 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 17:46:01.967565 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:46:01.969402 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 17:46:01.969506 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 17:46:01.974557 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 17:46:01.975921 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 17:46:01.978050 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 17:46:01.978170 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:46:01.978464 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 17:46:01.978563 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:46:01.989274 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 17:46:01.994097 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 17:46:02.016466 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 17:46:02.046249 ignition[1089]: INFO : Ignition 2.21.0
Sep 12 17:46:02.046249 ignition[1089]: INFO : Stage: umount
Sep 12 17:46:02.048498 ignition[1089]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:46:02.048498 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:46:02.048498 ignition[1089]: INFO : umount: umount passed
Sep 12 17:46:02.048498 ignition[1089]: INFO : Ignition finished successfully
Sep 12 17:46:02.049922 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 17:46:02.050058 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 17:46:02.050678 systemd[1]: Stopped target network.target - Network.
Sep 12 17:46:02.053566 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 17:46:02.053625 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 17:46:02.054174 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 17:46:02.054220 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 17:46:02.054541 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 17:46:02.054587 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 17:46:02.055135 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 17:46:02.055178 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 17:46:02.055612 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 17:46:02.064783 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 17:46:02.078842 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 17:46:02.078987 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 17:46:02.083392 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 12 17:46:02.083723 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 17:46:02.083923 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 17:46:02.087573 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 12 17:46:02.088963 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 12 17:46:02.091015 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 17:46:02.091068 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:46:02.094051 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 17:46:02.094436 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 17:46:02.094488 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:46:02.094832 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 17:46:02.094883 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:46:02.099824 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 17:46:02.099876 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:46:02.100355 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 17:46:02.100404 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:46:02.104532 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:46:02.106079 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 12 17:46:02.106140 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 12 17:46:02.130688 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 17:46:02.131979 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:46:02.132750 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 17:46:02.132799 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:46:02.134855 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 17:46:02.134901 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:46:02.135267 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 17:46:02.135314 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:46:02.136058 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 17:46:02.136105 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:46:02.136724 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:46:02.136769 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:46:02.138310 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 17:46:02.147322 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 12 17:46:02.147376 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:46:02.152043 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 17:46:02.152097 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:46:02.155448 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 12 17:46:02.155510 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:46:02.158949 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 17:46:02.159001 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:46:02.159668 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:46:02.159753 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:46:02.166011 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 12 17:46:02.166068 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Sep 12 17:46:02.166111 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 12 17:46:02.166155 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 17:46:02.166589 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 17:46:02.171960 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 17:46:02.181152 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 17:46:02.181312 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 17:46:02.224982 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 17:46:02.225190 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 17:46:02.227700 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 17:46:02.228669 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 17:46:02.228784 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 17:46:02.232849 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 17:46:02.253423 systemd[1]: Switching root.
Sep 12 17:46:02.298294 systemd-journald[218]: Journal stopped
Sep 12 17:46:03.777013 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
Sep 12 17:46:03.777079 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 17:46:03.777093 kernel: SELinux: policy capability open_perms=1
Sep 12 17:46:03.777105 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 17:46:03.777116 kernel: SELinux: policy capability always_check_network=0
Sep 12 17:46:03.777127 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 17:46:03.777139 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 17:46:03.777156 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 17:46:03.777171 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 17:46:03.777183 kernel: SELinux: policy capability userspace_initial_context=0
Sep 12 17:46:03.777195 kernel: audit: type=1403 audit(1757699162.789:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 17:46:03.777211 systemd[1]: Successfully loaded SELinux policy in 63.745ms.
Sep 12 17:46:03.777240 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.086ms.
Sep 12 17:46:03.777254 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 17:46:03.777267 systemd[1]: Detected virtualization kvm.
Sep 12 17:46:03.777279 systemd[1]: Detected architecture x86-64.
Sep 12 17:46:03.777293 systemd[1]: Detected first boot.
Sep 12 17:46:03.777305 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:46:03.777317 zram_generator::config[1134]: No configuration found.
Sep 12 17:46:03.777331 kernel: Guest personality initialized and is inactive
Sep 12 17:46:03.777343 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 12 17:46:03.777354 kernel: Initialized host personality
Sep 12 17:46:03.777370 kernel: NET: Registered PF_VSOCK protocol family
Sep 12 17:46:03.777383 systemd[1]: Populated /etc with preset unit settings.
Sep 12 17:46:03.777402 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 12 17:46:03.777432 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 17:46:03.777444 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 17:46:03.777457 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:46:03.777469 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 17:46:03.777481 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 17:46:03.777493 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 17:46:03.777506 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 17:46:03.777518 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 17:46:03.777533 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 17:46:03.777546 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 17:46:03.777563 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 17:46:03.777576 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:46:03.777588 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:46:03.777600 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 17:46:03.777613 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 17:46:03.777625 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 17:46:03.777640 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:46:03.777659 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 17:46:03.777672 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:46:03.777685 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:46:03.777697 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 17:46:03.777709 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 17:46:03.777721 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:46:03.777734 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 17:46:03.777746 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:46:03.777761 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:46:03.777774 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:46:03.779019 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:46:03.779052 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 17:46:03.779066 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 17:46:03.779079 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 12 17:46:03.779092 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:46:03.779105 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:46:03.779117 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:46:03.779137 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 17:46:03.779150 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 17:46:03.779163 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 17:46:03.779175 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 17:46:03.779188 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:46:03.779205 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 17:46:03.779218 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 17:46:03.779230 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 17:46:03.779245 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 17:46:03.779260 systemd[1]: Reached target machines.target - Containers.
Sep 12 17:46:03.779272 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 17:46:03.779285 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:46:03.779297 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:46:03.779310 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 17:46:03.779322 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:46:03.779334 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:46:03.779346 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:46:03.779361 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 17:46:03.779374 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:46:03.779387 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 17:46:03.779400 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 17:46:03.779413 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 17:46:03.779426 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 17:46:03.779438 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 17:46:03.779451 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 17:46:03.779466 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:46:03.779478 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:46:03.779491 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:46:03.779506 kernel: loop: module loaded
Sep 12 17:46:03.779520 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 17:46:03.779534 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 12 17:46:03.779549 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:46:03.779562 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 17:46:03.779574 systemd[1]: Stopped verity-setup.service.
Sep 12 17:46:03.779587 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:46:03.779599 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 17:46:03.779614 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 17:46:03.779627 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 17:46:03.779641 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 17:46:03.779663 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 17:46:03.779676 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 17:46:03.779688 kernel: fuse: init (API version 7.41)
Sep 12 17:46:03.779701 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:46:03.779713 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 17:46:03.779726 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 17:46:03.779740 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:46:03.779752 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:46:03.779764 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:46:03.779777 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:46:03.779790 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 17:46:03.779802 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 17:46:03.779831 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:46:03.779844 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:46:03.779859 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:46:03.779871 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:46:03.779885 kernel: ACPI: bus type drm_connector registered
Sep 12 17:46:03.779943 systemd-journald[1202]: Collecting audit messages is disabled.
Sep 12 17:46:03.779969 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 17:46:03.779983 systemd-journald[1202]: Journal started
Sep 12 17:46:03.780014 systemd-journald[1202]: Runtime Journal (/run/log/journal/bfcd35b2196f4afda243306c6ed9c982) is 6M, max 48.2M, 42.2M free.
Sep 12 17:46:03.466353 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 17:46:03.488076 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 12 17:46:03.488584 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 17:46:03.782235 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:46:03.783432 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:46:03.783748 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:46:03.785441 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 12 17:46:03.788968 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 17:46:03.802300 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:46:03.805071 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 17:46:03.807280 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 17:46:03.808503 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 17:46:03.808534 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:46:03.810483 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 12 17:46:03.817993 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 17:46:03.819273 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:46:03.821249 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 17:46:03.823804 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 17:46:03.824998 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:46:03.828055 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 17:46:03.829363 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:46:03.837034 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:46:03.839458 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 17:46:03.843942 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:46:03.847118 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 17:46:03.848562 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 17:46:03.854235 systemd-journald[1202]: Time spent on flushing to /var/log/journal/bfcd35b2196f4afda243306c6ed9c982 is 36.197ms for 1052 entries. Sep 12 17:46:03.854235 systemd-journald[1202]: System Journal (/var/log/journal/bfcd35b2196f4afda243306c6ed9c982) is 8M, max 195.6M, 187.6M free. Sep 12 17:46:03.909943 systemd-journald[1202]: Received client request to flush runtime journal. 
Sep 12 17:46:03.910013 kernel: loop0: detected capacity change from 0 to 111000 Sep 12 17:46:03.910043 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:46:03.863080 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 17:46:03.867711 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 17:46:03.871534 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 12 17:46:03.886232 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:46:03.891405 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:46:03.906983 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Sep 12 17:46:03.907002 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Sep 12 17:46:03.913764 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:46:03.917823 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 17:46:03.920854 kernel: loop1: detected capacity change from 0 to 229808 Sep 12 17:46:03.921457 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 17:46:03.922874 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 12 17:46:03.948596 kernel: loop2: detected capacity change from 0 to 128016 Sep 12 17:46:03.964752 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 17:46:03.967497 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:46:03.983113 kernel: loop3: detected capacity change from 0 to 111000 Sep 12 17:46:03.997001 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. Sep 12 17:46:03.997393 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. 
Sep 12 17:46:04.001871 kernel: loop4: detected capacity change from 0 to 229808 Sep 12 17:46:04.002523 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:46:04.011869 kernel: loop5: detected capacity change from 0 to 128016 Sep 12 17:46:04.020258 (sd-merge)[1277]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 12 17:46:04.020935 (sd-merge)[1277]: Merged extensions into '/usr'. Sep 12 17:46:04.027852 systemd[1]: Reload requested from client PID 1253 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 17:46:04.027867 systemd[1]: Reloading... Sep 12 17:46:04.222842 zram_generator::config[1301]: No configuration found. Sep 12 17:46:04.380501 ldconfig[1248]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:46:04.438933 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 17:46:04.439674 systemd[1]: Reloading finished in 411 ms. Sep 12 17:46:04.462914 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 17:46:04.464548 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:46:04.490416 systemd[1]: Starting ensure-sysext.service... Sep 12 17:46:04.492491 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:46:04.519013 systemd[1]: Reload requested from client PID 1342 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:46:04.519185 systemd[1]: Reloading... Sep 12 17:46:04.524613 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 12 17:46:04.525035 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 12 17:46:04.525418 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Sep 12 17:46:04.525794 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 17:46:04.526804 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 17:46:04.527181 systemd-tmpfiles[1343]: ACLs are not supported, ignoring. Sep 12 17:46:04.527313 systemd-tmpfiles[1343]: ACLs are not supported, ignoring. Sep 12 17:46:04.531894 systemd-tmpfiles[1343]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:46:04.532009 systemd-tmpfiles[1343]: Skipping /boot Sep 12 17:46:04.544751 systemd-tmpfiles[1343]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:46:04.544994 systemd-tmpfiles[1343]: Skipping /boot Sep 12 17:46:04.609836 zram_generator::config[1373]: No configuration found. Sep 12 17:46:04.780985 systemd[1]: Reloading finished in 261 ms. Sep 12 17:46:04.794868 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 17:46:04.821141 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:46:04.831183 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:46:04.833863 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 17:46:04.836426 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:46:04.842924 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:46:04.848147 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:46:04.852517 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 17:46:04.858954 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 12 17:46:04.859181 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:46:04.863171 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:46:04.867106 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:46:04.870134 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:46:04.871339 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:46:04.871448 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:46:04.873910 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:46:04.875045 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:46:04.880249 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 17:46:04.882061 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:46:04.882288 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:46:04.884508 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:46:04.885113 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:46:04.886888 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:46:04.887176 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:46:04.898213 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 12 17:46:04.898868 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:46:04.902054 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:46:04.905347 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:46:04.907290 systemd-udevd[1413]: Using default interface naming scheme 'v255'. Sep 12 17:46:04.909028 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:46:04.910251 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:46:04.910432 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:46:04.912720 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 17:46:04.913741 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:46:05.105788 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 17:46:05.109919 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:46:05.112063 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 17:46:05.113999 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:46:05.114247 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:46:05.115895 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:46:05.116132 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Sep 12 17:46:05.118214 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:46:05.118436 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:46:05.124997 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:46:05.131098 augenrules[1456]: No rules Sep 12 17:46:05.126710 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:46:05.126997 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:46:05.149778 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:46:05.159139 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:46:05.165162 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:46:05.166513 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:46:05.170057 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:46:05.175033 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:46:05.182172 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:46:05.184885 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:46:05.186160 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:46:05.186278 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:46:05.190888 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 12 17:46:05.192234 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:46:05.192333 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:46:05.207045 systemd[1]: Finished ensure-sysext.service. Sep 12 17:46:05.213181 augenrules[1489]: /sbin/augenrules: No change Sep 12 17:46:05.213763 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:46:05.214033 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:46:05.215565 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:46:05.215799 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:46:05.225095 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:46:05.226485 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:46:05.227178 augenrules[1519]: No rules Sep 12 17:46:05.228118 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:46:05.228344 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:46:05.230790 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:46:05.231547 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:46:05.244991 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 12 17:46:05.248056 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:46:05.248936 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Sep 12 17:46:05.256099 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 17:46:05.262941 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 17:46:05.277050 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:46:05.288834 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 17:46:05.296635 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:46:05.307854 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 12 17:46:05.315838 kernel: ACPI: button: Power Button [PWRF] Sep 12 17:46:05.328721 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 12 17:46:05.329062 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 12 17:46:05.329226 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 12 17:46:05.370705 systemd-resolved[1412]: Positive Trust Anchors: Sep 12 17:46:05.370725 systemd-resolved[1412]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:46:05.370758 systemd-resolved[1412]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:46:05.374436 systemd-resolved[1412]: Defaulting to hostname 'linux'. Sep 12 17:46:05.376010 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Sep 12 17:46:05.377488 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:46:05.405635 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:46:05.432706 systemd-networkd[1497]: lo: Link UP Sep 12 17:46:05.432717 systemd-networkd[1497]: lo: Gained carrier Sep 12 17:46:05.434487 systemd-networkd[1497]: Enumeration completed Sep 12 17:46:05.434584 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:46:05.553969 systemd[1]: Reached target network.target - Network. Sep 12 17:46:05.573510 systemd-networkd[1497]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:46:05.573523 systemd-networkd[1497]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:46:05.574069 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 17:46:05.575621 systemd-networkd[1497]: eth0: Link UP Sep 12 17:46:05.575852 systemd-networkd[1497]: eth0: Gained carrier Sep 12 17:46:05.575872 systemd-networkd[1497]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:46:05.577172 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:46:05.578563 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:46:05.578848 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:46:05.585432 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:46:05.590961 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 17:46:05.592629 systemd[1]: Reached target time-set.target - System Time Set. 
Sep 12 17:46:05.596107 kernel: kvm_amd: TSC scaling supported Sep 12 17:46:05.596164 kernel: kvm_amd: Nested Virtualization enabled Sep 12 17:46:05.596181 kernel: kvm_amd: Nested Paging enabled Sep 12 17:46:05.597093 kernel: kvm_amd: LBR virtualization supported Sep 12 17:46:05.597121 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 12 17:46:05.599764 kernel: kvm_amd: Virtual GIF supported Sep 12 17:46:05.613329 systemd-networkd[1497]: eth0: DHCPv4 address 10.0.0.93/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 17:46:05.614727 systemd-timesyncd[1533]: Network configuration changed, trying to establish connection. Sep 12 17:46:06.309673 systemd-timesyncd[1533]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 12 17:46:06.309742 systemd-timesyncd[1533]: Initial clock synchronization to Fri 2025-09-12 17:46:06.309588 UTC. Sep 12 17:46:06.309816 systemd-resolved[1412]: Clock change detected. Flushing caches. Sep 12 17:46:06.324777 kernel: EDAC MC: Ver: 3.0.0 Sep 12 17:46:06.327980 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 17:46:06.360842 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:46:06.362350 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:46:06.363573 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:46:06.364887 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:46:06.366248 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 12 17:46:06.367612 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:46:06.368824 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:46:06.370113 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Sep 12 17:46:06.371431 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:46:06.371459 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:46:06.372394 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:46:06.374404 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:46:06.377127 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:46:06.380477 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 17:46:06.381910 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 17:46:06.383132 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 17:46:06.391565 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:46:06.393084 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 17:46:06.395057 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:46:06.396934 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:46:06.397919 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:46:06.398936 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:46:06.398967 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:46:06.400045 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:46:06.402149 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:46:06.404224 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 17:46:06.406304 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Sep 12 17:46:06.408907 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:46:06.410025 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:46:06.411866 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 12 17:46:06.414999 jq[1572]: false Sep 12 17:46:06.415892 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:46:06.418049 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:46:06.422169 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:46:06.424966 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:46:06.433037 oslogin_cache_refresh[1574]: Refreshing passwd entry cache Sep 12 17:46:06.434741 google_oslogin_nss_cache[1574]: oslogin_cache_refresh[1574]: Refreshing passwd entry cache Sep 12 17:46:06.431841 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:46:06.433794 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 17:46:06.434252 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:46:06.434891 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:46:06.439616 extend-filesystems[1573]: Found /dev/vda6 Sep 12 17:46:06.441298 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Sep 12 17:46:06.443752 oslogin_cache_refresh[1574]: Failure getting users, quitting Sep 12 17:46:06.444075 google_oslogin_nss_cache[1574]: oslogin_cache_refresh[1574]: Failure getting users, quitting Sep 12 17:46:06.444075 google_oslogin_nss_cache[1574]: oslogin_cache_refresh[1574]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 12 17:46:06.444075 google_oslogin_nss_cache[1574]: oslogin_cache_refresh[1574]: Refreshing group entry cache Sep 12 17:46:06.443774 oslogin_cache_refresh[1574]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 12 17:46:06.443832 oslogin_cache_refresh[1574]: Refreshing group entry cache Sep 12 17:46:06.446816 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:46:06.448468 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:46:06.450435 oslogin_cache_refresh[1574]: Failure getting groups, quitting Sep 12 17:46:06.454385 google_oslogin_nss_cache[1574]: oslogin_cache_refresh[1574]: Failure getting groups, quitting Sep 12 17:46:06.454385 google_oslogin_nss_cache[1574]: oslogin_cache_refresh[1574]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 12 17:46:06.448779 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 17:46:06.450448 oslogin_cache_refresh[1574]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 12 17:46:06.449116 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 17:46:06.449385 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:46:06.456947 extend-filesystems[1573]: Found /dev/vda9 Sep 12 17:46:06.451827 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:46:06.460879 jq[1589]: true Sep 12 17:46:06.452077 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 12 17:46:06.455276 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 12 17:46:06.455539 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 12 17:46:06.463851 extend-filesystems[1573]: Checking size of /dev/vda9 Sep 12 17:46:06.466920 update_engine[1588]: I20250912 17:46:06.466838 1588 main.cc:92] Flatcar Update Engine starting Sep 12 17:46:06.468029 (ntainerd)[1597]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:46:06.481986 extend-filesystems[1573]: Resized partition /dev/vda9 Sep 12 17:46:06.483378 jq[1602]: true Sep 12 17:46:06.493435 extend-filesystems[1612]: resize2fs 1.47.2 (1-Jan-2025) Sep 12 17:46:06.508753 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 12 17:46:06.533359 tar[1594]: linux-amd64/LICENSE Sep 12 17:46:06.533359 tar[1594]: linux-amd64/helm Sep 12 17:46:06.538435 dbus-daemon[1570]: [system] SELinux support is enabled Sep 12 17:46:06.538665 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 17:46:06.545341 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:46:06.545366 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:46:06.546783 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 17:46:06.546799 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:46:06.553443 systemd[1]: Started update-engine.service - Update Engine. 
Sep 12 17:46:06.556026 update_engine[1588]: I20250912 17:46:06.555970 1588 update_check_scheduler.cc:74] Next update check in 3m46s Sep 12 17:46:06.558474 systemd-logind[1585]: Watching system buttons on /dev/input/event2 (Power Button) Sep 12 17:46:06.558502 systemd-logind[1585]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 17:46:06.559827 systemd-logind[1585]: New seat seat0. Sep 12 17:46:06.566308 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:46:06.567856 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:46:06.573753 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 12 17:46:06.596391 extend-filesystems[1612]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 17:46:06.596391 extend-filesystems[1612]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 17:46:06.596391 extend-filesystems[1612]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 12 17:46:06.603007 extend-filesystems[1573]: Resized filesystem in /dev/vda9 Sep 12 17:46:06.602625 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:46:06.604388 sshd_keygen[1596]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:46:06.628905 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:46:06.639916 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:46:06.649452 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:46:06.659452 bash[1641]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:46:06.664045 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:46:06.668300 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Sep 12 17:46:06.673713 locksmithd[1628]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:46:06.674096 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:46:06.674794 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:46:06.679378 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:46:06.698109 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:46:06.703971 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:46:06.708557 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 17:46:06.710221 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:46:06.815272 containerd[1597]: time="2025-09-12T17:46:06Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 12 17:46:06.816438 containerd[1597]: time="2025-09-12T17:46:06.816388337Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 12 17:46:06.830053 containerd[1597]: time="2025-09-12T17:46:06.829991613Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.493µs" Sep 12 17:46:06.830053 containerd[1597]: time="2025-09-12T17:46:06.830042438Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 12 17:46:06.830115 containerd[1597]: time="2025-09-12T17:46:06.830068828Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 12 17:46:06.830360 containerd[1597]: time="2025-09-12T17:46:06.830328695Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 12 17:46:06.830360 containerd[1597]: 
time="2025-09-12T17:46:06.830351768Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 12 17:46:06.830419 containerd[1597]: time="2025-09-12T17:46:06.830382426Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 17:46:06.830500 containerd[1597]: time="2025-09-12T17:46:06.830459991Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 17:46:06.830500 containerd[1597]: time="2025-09-12T17:46:06.830487864Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 17:46:06.830881 containerd[1597]: time="2025-09-12T17:46:06.830844833Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 17:46:06.830881 containerd[1597]: time="2025-09-12T17:46:06.830864490Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 17:46:06.830881 containerd[1597]: time="2025-09-12T17:46:06.830875510Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 17:46:06.830881 containerd[1597]: time="2025-09-12T17:46:06.830883916Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 12 17:46:06.831054 containerd[1597]: time="2025-09-12T17:46:06.831031874Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 12 17:46:06.831329 containerd[1597]: time="2025-09-12T17:46:06.831297902Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 17:46:06.831354 containerd[1597]: time="2025-09-12T17:46:06.831334741Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 17:46:06.831354 containerd[1597]: time="2025-09-12T17:46:06.831345001Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 12 17:46:06.831412 containerd[1597]: time="2025-09-12T17:46:06.831388993Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 12 17:46:06.831658 containerd[1597]: time="2025-09-12T17:46:06.831628162Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 12 17:46:06.831736 containerd[1597]: time="2025-09-12T17:46:06.831708091Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:46:07.092756 containerd[1597]: time="2025-09-12T17:46:07.092648184Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 12 17:46:07.092862 containerd[1597]: time="2025-09-12T17:46:07.092833682Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 12 17:46:07.092888 containerd[1597]: time="2025-09-12T17:46:07.092878676Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 12 17:46:07.092909 containerd[1597]: time="2025-09-12T17:46:07.092896800Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 12 17:46:07.092943 containerd[1597]: time="2025-09-12T17:46:07.092918842Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 12 17:46:07.092943 containerd[1597]: 
time="2025-09-12T17:46:07.092938579Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 12 17:46:07.092981 containerd[1597]: time="2025-09-12T17:46:07.092968585Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 12 17:46:07.093001 containerd[1597]: time="2025-09-12T17:46:07.092989304Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 12 17:46:07.093021 containerd[1597]: time="2025-09-12T17:46:07.093008910Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 12 17:46:07.093041 containerd[1597]: time="2025-09-12T17:46:07.093030401Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 12 17:46:07.093061 containerd[1597]: time="2025-09-12T17:46:07.093043906Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 12 17:46:07.093087 containerd[1597]: time="2025-09-12T17:46:07.093068121Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 12 17:46:07.093419 containerd[1597]: time="2025-09-12T17:46:07.093379746Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 12 17:46:07.093450 containerd[1597]: time="2025-09-12T17:46:07.093433096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 12 17:46:07.093486 containerd[1597]: time="2025-09-12T17:46:07.093464184Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 12 17:46:07.093517 containerd[1597]: time="2025-09-12T17:46:07.093486045Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 12 17:46:07.093539 containerd[1597]: 
time="2025-09-12T17:46:07.093516542Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 12 17:46:07.093569 containerd[1597]: time="2025-09-12T17:46:07.093537171Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 12 17:46:07.093569 containerd[1597]: time="2025-09-12T17:46:07.093559413Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 12 17:46:07.093623 containerd[1597]: time="2025-09-12T17:46:07.093583778Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 12 17:46:07.093623 containerd[1597]: time="2025-09-12T17:46:07.093600930Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 12 17:46:07.093880 containerd[1597]: time="2025-09-12T17:46:07.093857401Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 12 17:46:07.093921 containerd[1597]: time="2025-09-12T17:46:07.093905431Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 12 17:46:07.094119 containerd[1597]: time="2025-09-12T17:46:07.094083666Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 12 17:46:07.094143 containerd[1597]: time="2025-09-12T17:46:07.094120054Z" level=info msg="Start snapshots syncer" Sep 12 17:46:07.094759 containerd[1597]: time="2025-09-12T17:46:07.094259455Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 12 17:46:07.094974 containerd[1597]: time="2025-09-12T17:46:07.094925895Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 12 17:46:07.095222 containerd[1597]: time="2025-09-12T17:46:07.095008319Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 12 17:46:07.097169 containerd[1597]: time="2025-09-12T17:46:07.097133795Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 12 17:46:07.097325 containerd[1597]: time="2025-09-12T17:46:07.097295408Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 12 17:46:07.097350 containerd[1597]: time="2025-09-12T17:46:07.097328349Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 12 17:46:07.097350 containerd[1597]: time="2025-09-12T17:46:07.097340412Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 12 17:46:07.097388 containerd[1597]: time="2025-09-12T17:46:07.097358135Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 12 17:46:07.097421 containerd[1597]: time="2025-09-12T17:46:07.097396016Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 12 17:46:07.097421 containerd[1597]: time="2025-09-12T17:46:07.097409401Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 12 17:46:07.097460 containerd[1597]: time="2025-09-12T17:46:07.097425041Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 12 17:46:07.097511 containerd[1597]: time="2025-09-12T17:46:07.097484252Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 12 17:46:07.097539 containerd[1597]: time="2025-09-12T17:46:07.097517554Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 12 17:46:07.097565 containerd[1597]: time="2025-09-12T17:46:07.097536700Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 12 17:46:07.097614 containerd[1597]: time="2025-09-12T17:46:07.097592735Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 17:46:07.097643 containerd[1597]: time="2025-09-12T17:46:07.097626849Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 17:46:07.097666 containerd[1597]: time="2025-09-12T17:46:07.097636858Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 17:46:07.097666 containerd[1597]: time="2025-09-12T17:46:07.097657476Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 17:46:07.097666 containerd[1597]: time="2025-09-12T17:46:07.097665842Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 12 17:46:07.097757 containerd[1597]: time="2025-09-12T17:46:07.097676462Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 12 17:46:07.097757 containerd[1597]: time="2025-09-12T17:46:07.097695408Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 12 17:46:07.097807 containerd[1597]: time="2025-09-12T17:46:07.097762984Z" level=info msg="runtime interface created" Sep 12 17:46:07.097807 containerd[1597]: time="2025-09-12T17:46:07.097770559Z" level=info msg="created NRI interface" Sep 12 17:46:07.097807 containerd[1597]: time="2025-09-12T17:46:07.097779435Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 12 17:46:07.097807 containerd[1597]: time="2025-09-12T17:46:07.097796126Z" level=info msg="Connect containerd service" Sep 12 17:46:07.097884 containerd[1597]: time="2025-09-12T17:46:07.097821634Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:46:07.100594 
containerd[1597]: time="2025-09-12T17:46:07.099990220Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:46:07.249864 tar[1594]: linux-amd64/README.md Sep 12 17:46:07.275269 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:46:07.282372 containerd[1597]: time="2025-09-12T17:46:07.282301354Z" level=info msg="Start subscribing containerd event" Sep 12 17:46:07.282464 containerd[1597]: time="2025-09-12T17:46:07.282400279Z" level=info msg="Start recovering state" Sep 12 17:46:07.282612 containerd[1597]: time="2025-09-12T17:46:07.282596838Z" level=info msg="Start event monitor" Sep 12 17:46:07.282638 containerd[1597]: time="2025-09-12T17:46:07.282627355Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:46:07.282659 containerd[1597]: time="2025-09-12T17:46:07.282649807Z" level=info msg="Start streaming server" Sep 12 17:46:07.282678 containerd[1597]: time="2025-09-12T17:46:07.282661128Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 12 17:46:07.282678 containerd[1597]: time="2025-09-12T17:46:07.282674784Z" level=info msg="runtime interface starting up..." Sep 12 17:46:07.282715 containerd[1597]: time="2025-09-12T17:46:07.282682989Z" level=info msg="starting plugins..." Sep 12 17:46:07.282715 containerd[1597]: time="2025-09-12T17:46:07.282712064Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 12 17:46:07.282804 containerd[1597]: time="2025-09-12T17:46:07.282595976Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:46:07.282959 containerd[1597]: time="2025-09-12T17:46:07.282934641Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 12 17:46:07.283124 containerd[1597]: time="2025-09-12T17:46:07.283076748Z" level=info msg="containerd successfully booted in 0.468407s" Sep 12 17:46:07.283160 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:46:08.342026 systemd-networkd[1497]: eth0: Gained IPv6LL Sep 12 17:46:08.345783 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:46:08.347823 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:46:08.350618 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 17:46:08.353544 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:46:08.380675 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:46:08.406304 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:46:08.421816 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 17:46:08.422197 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 17:46:08.423988 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:46:09.865973 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:46:09.867899 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:46:09.869845 systemd[1]: Startup finished in 3.566s (kernel) + 7.136s (initrd) + 6.448s (userspace) = 17.152s. Sep 12 17:46:09.871842 (kubelet)[1704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:46:10.332636 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:46:10.333953 systemd[1]: Started sshd@0-10.0.0.93:22-10.0.0.1:46872.service - OpenSSH per-connection server daemon (10.0.0.1:46872). 
Sep 12 17:46:10.409945 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 46872 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:46:10.411889 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:46:10.420164 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:46:10.421484 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:46:10.429157 systemd-logind[1585]: New session 1 of user core. Sep 12 17:46:10.444182 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:46:10.448172 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:46:10.466773 (systemd)[1721]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:46:10.469367 systemd-logind[1585]: New session c1 of user core. Sep 12 17:46:10.584547 kubelet[1704]: E0912 17:46:10.584390 1704 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:46:10.589067 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:46:10.589291 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:46:10.589703 systemd[1]: kubelet.service: Consumed 2.018s CPU time, 267.8M memory peak. Sep 12 17:46:10.632435 systemd[1721]: Queued start job for default target default.target. Sep 12 17:46:10.644011 systemd[1721]: Created slice app.slice - User Application Slice. Sep 12 17:46:10.644041 systemd[1721]: Reached target paths.target - Paths. Sep 12 17:46:10.644096 systemd[1721]: Reached target timers.target - Timers. 
Sep 12 17:46:10.645669 systemd[1721]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:46:10.659322 systemd[1721]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:46:10.659457 systemd[1721]: Reached target sockets.target - Sockets. Sep 12 17:46:10.659500 systemd[1721]: Reached target basic.target - Basic System. Sep 12 17:46:10.659540 systemd[1721]: Reached target default.target - Main User Target. Sep 12 17:46:10.659572 systemd[1721]: Startup finished in 181ms. Sep 12 17:46:10.659978 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:46:10.661522 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:46:10.721552 systemd[1]: Started sshd@1-10.0.0.93:22-10.0.0.1:46876.service - OpenSSH per-connection server daemon (10.0.0.1:46876). Sep 12 17:46:10.776007 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 46876 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:46:10.777595 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:46:10.782379 systemd-logind[1585]: New session 2 of user core. Sep 12 17:46:10.795876 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:46:10.848386 sshd[1737]: Connection closed by 10.0.0.1 port 46876 Sep 12 17:46:10.848611 sshd-session[1734]: pam_unix(sshd:session): session closed for user core Sep 12 17:46:10.864494 systemd[1]: sshd@1-10.0.0.93:22-10.0.0.1:46876.service: Deactivated successfully. Sep 12 17:46:10.866176 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:46:10.866857 systemd-logind[1585]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:46:10.869463 systemd[1]: Started sshd@2-10.0.0.93:22-10.0.0.1:46880.service - OpenSSH per-connection server daemon (10.0.0.1:46880). Sep 12 17:46:10.870031 systemd-logind[1585]: Removed session 2. 
Sep 12 17:46:10.930518 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 46880 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:46:10.931775 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:46:10.935761 systemd-logind[1585]: New session 3 of user core. Sep 12 17:46:10.945858 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:46:10.996205 sshd[1746]: Connection closed by 10.0.0.1 port 46880 Sep 12 17:46:10.996482 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Sep 12 17:46:11.009380 systemd[1]: sshd@2-10.0.0.93:22-10.0.0.1:46880.service: Deactivated successfully. Sep 12 17:46:11.011096 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:46:11.011808 systemd-logind[1585]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:46:11.014238 systemd[1]: Started sshd@3-10.0.0.93:22-10.0.0.1:46888.service - OpenSSH per-connection server daemon (10.0.0.1:46888). Sep 12 17:46:11.014798 systemd-logind[1585]: Removed session 3. Sep 12 17:46:11.064280 sshd[1752]: Accepted publickey for core from 10.0.0.1 port 46888 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:46:11.065553 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:46:11.069450 systemd-logind[1585]: New session 4 of user core. Sep 12 17:46:11.082848 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:46:11.135820 sshd[1755]: Connection closed by 10.0.0.1 port 46888 Sep 12 17:46:11.136208 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Sep 12 17:46:11.144190 systemd[1]: sshd@3-10.0.0.93:22-10.0.0.1:46888.service: Deactivated successfully. Sep 12 17:46:11.145938 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:46:11.146642 systemd-logind[1585]: Session 4 logged out. Waiting for processes to exit. 
Sep 12 17:46:11.149183 systemd[1]: Started sshd@4-10.0.0.93:22-10.0.0.1:46894.service - OpenSSH per-connection server daemon (10.0.0.1:46894). Sep 12 17:46:11.149706 systemd-logind[1585]: Removed session 4. Sep 12 17:46:11.215621 sshd[1761]: Accepted publickey for core from 10.0.0.1 port 46894 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:46:11.216940 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:46:11.221196 systemd-logind[1585]: New session 5 of user core. Sep 12 17:46:11.230849 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:46:11.291128 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:46:11.291508 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:46:11.306521 sudo[1765]: pam_unix(sudo:session): session closed for user root Sep 12 17:46:11.308622 sshd[1764]: Connection closed by 10.0.0.1 port 46894 Sep 12 17:46:11.309003 sshd-session[1761]: pam_unix(sshd:session): session closed for user core Sep 12 17:46:11.322721 systemd[1]: sshd@4-10.0.0.93:22-10.0.0.1:46894.service: Deactivated successfully. Sep 12 17:46:11.324843 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:46:11.325607 systemd-logind[1585]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:46:11.328663 systemd[1]: Started sshd@5-10.0.0.93:22-10.0.0.1:46908.service - OpenSSH per-connection server daemon (10.0.0.1:46908). Sep 12 17:46:11.329329 systemd-logind[1585]: Removed session 5. Sep 12 17:46:11.387265 sshd[1771]: Accepted publickey for core from 10.0.0.1 port 46908 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:46:11.389181 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:46:11.393784 systemd-logind[1585]: New session 6 of user core. 
Sep 12 17:46:11.410012 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:46:11.463836 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:46:11.464155 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:46:11.471799 sudo[1776]: pam_unix(sudo:session): session closed for user root Sep 12 17:46:11.478315 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 17:46:11.478644 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:46:11.489027 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:46:11.540055 augenrules[1798]: No rules Sep 12 17:46:11.541825 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:46:11.542112 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:46:11.543394 sudo[1775]: pam_unix(sudo:session): session closed for user root Sep 12 17:46:11.544928 sshd[1774]: Connection closed by 10.0.0.1 port 46908 Sep 12 17:46:11.545265 sshd-session[1771]: pam_unix(sshd:session): session closed for user core Sep 12 17:46:11.561592 systemd[1]: sshd@5-10.0.0.93:22-10.0.0.1:46908.service: Deactivated successfully. Sep 12 17:46:11.563405 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:46:11.564169 systemd-logind[1585]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:46:11.566814 systemd[1]: Started sshd@6-10.0.0.93:22-10.0.0.1:46914.service - OpenSSH per-connection server daemon (10.0.0.1:46914). Sep 12 17:46:11.567338 systemd-logind[1585]: Removed session 6. 
Sep 12 17:46:11.622626 sshd[1808]: Accepted publickey for core from 10.0.0.1 port 46914 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:46:11.624254 sshd-session[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:46:11.628941 systemd-logind[1585]: New session 7 of user core. Sep 12 17:46:11.638856 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:46:11.693069 sudo[1812]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:46:11.693449 sudo[1812]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:46:12.574208 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:46:12.597083 (dockerd)[1832]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:46:13.112962 dockerd[1832]: time="2025-09-12T17:46:13.112877152Z" level=info msg="Starting up" Sep 12 17:46:13.114131 dockerd[1832]: time="2025-09-12T17:46:13.114076991Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 12 17:46:13.138216 dockerd[1832]: time="2025-09-12T17:46:13.138147504Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 12 17:46:13.661661 dockerd[1832]: time="2025-09-12T17:46:13.661606088Z" level=info msg="Loading containers: start." Sep 12 17:46:13.672818 kernel: Initializing XFRM netlink socket Sep 12 17:46:13.944413 systemd-networkd[1497]: docker0: Link UP Sep 12 17:46:13.949418 dockerd[1832]: time="2025-09-12T17:46:13.949372480Z" level=info msg="Loading containers: done." 
Sep 12 17:46:13.969415 dockerd[1832]: time="2025-09-12T17:46:13.969340881Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:46:13.969611 dockerd[1832]: time="2025-09-12T17:46:13.969461257Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 12 17:46:13.969611 dockerd[1832]: time="2025-09-12T17:46:13.969592693Z" level=info msg="Initializing buildkit" Sep 12 17:46:14.000512 dockerd[1832]: time="2025-09-12T17:46:14.000442006Z" level=info msg="Completed buildkit initialization" Sep 12 17:46:14.006308 dockerd[1832]: time="2025-09-12T17:46:14.006253632Z" level=info msg="Daemon has completed initialization" Sep 12 17:46:14.006831 dockerd[1832]: time="2025-09-12T17:46:14.006363268Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:46:14.006528 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:46:15.058285 containerd[1597]: time="2025-09-12T17:46:15.058173372Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 12 17:46:15.624544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2163594487.mount: Deactivated successfully. 
Sep 12 17:46:17.134204 containerd[1597]: time="2025-09-12T17:46:17.134119350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:17.134870 containerd[1597]: time="2025-09-12T17:46:17.134786090Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Sep 12 17:46:17.135999 containerd[1597]: time="2025-09-12T17:46:17.135960903Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:17.139387 containerd[1597]: time="2025-09-12T17:46:17.139346992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:17.140363 containerd[1597]: time="2025-09-12T17:46:17.140303465Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.082004057s" Sep 12 17:46:17.140408 containerd[1597]: time="2025-09-12T17:46:17.140373516Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Sep 12 17:46:17.141293 containerd[1597]: time="2025-09-12T17:46:17.141237146Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 12 17:46:18.650458 containerd[1597]: time="2025-09-12T17:46:18.650382803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:18.651369 containerd[1597]: time="2025-09-12T17:46:18.651336121Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Sep 12 17:46:18.652312 containerd[1597]: time="2025-09-12T17:46:18.652276975Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:18.654689 containerd[1597]: time="2025-09-12T17:46:18.654624647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:18.656826 containerd[1597]: time="2025-09-12T17:46:18.656335895Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.51503992s" Sep 12 17:46:18.656826 containerd[1597]: time="2025-09-12T17:46:18.656376001Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Sep 12 17:46:18.657335 containerd[1597]: time="2025-09-12T17:46:18.657090540Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 12 17:46:20.247874 containerd[1597]: time="2025-09-12T17:46:20.247791164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:20.248630 containerd[1597]: time="2025-09-12T17:46:20.248552231Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Sep 12 17:46:20.249856 containerd[1597]: time="2025-09-12T17:46:20.249801643Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:20.252443 containerd[1597]: time="2025-09-12T17:46:20.252396268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:20.253403 containerd[1597]: time="2025-09-12T17:46:20.253368551Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.596232285s" Sep 12 17:46:20.253449 containerd[1597]: time="2025-09-12T17:46:20.253402806Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Sep 12 17:46:20.254022 containerd[1597]: time="2025-09-12T17:46:20.253981140Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 12 17:46:20.632224 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:46:20.634321 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:46:21.039101 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 17:46:21.043620 (kubelet)[2126]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:46:21.773623 kubelet[2126]: E0912 17:46:21.773558 2126 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:46:21.783703 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:46:21.784172 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:46:21.784831 systemd[1]: kubelet.service: Consumed 535ms CPU time, 110.9M memory peak. Sep 12 17:46:22.098911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1434417730.mount: Deactivated successfully. Sep 12 17:46:22.381906 containerd[1597]: time="2025-09-12T17:46:22.381840556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:22.382519 containerd[1597]: time="2025-09-12T17:46:22.382482379Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Sep 12 17:46:22.383580 containerd[1597]: time="2025-09-12T17:46:22.383548238Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:22.385426 containerd[1597]: time="2025-09-12T17:46:22.385378349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:22.385818 containerd[1597]: time="2025-09-12T17:46:22.385772358Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.131762915s" Sep 12 17:46:22.385856 containerd[1597]: time="2025-09-12T17:46:22.385815489Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Sep 12 17:46:22.386347 containerd[1597]: time="2025-09-12T17:46:22.386324193Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 12 17:46:22.945935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount354176040.mount: Deactivated successfully. Sep 12 17:46:23.989744 containerd[1597]: time="2025-09-12T17:46:23.989680726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:23.990435 containerd[1597]: time="2025-09-12T17:46:23.990394193Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 12 17:46:23.991748 containerd[1597]: time="2025-09-12T17:46:23.991695994Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:23.994204 containerd[1597]: time="2025-09-12T17:46:23.994169823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:23.995098 containerd[1597]: time="2025-09-12T17:46:23.995038201Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id 
\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.608687058s" Sep 12 17:46:23.995098 containerd[1597]: time="2025-09-12T17:46:23.995095639Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 12 17:46:23.995608 containerd[1597]: time="2025-09-12T17:46:23.995564197Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:46:24.542245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1129092169.mount: Deactivated successfully. Sep 12 17:46:24.549886 containerd[1597]: time="2025-09-12T17:46:24.549841958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:46:24.550605 containerd[1597]: time="2025-09-12T17:46:24.550547841Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 12 17:46:24.551687 containerd[1597]: time="2025-09-12T17:46:24.551650729Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:46:24.553686 containerd[1597]: time="2025-09-12T17:46:24.553659926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:46:24.554131 containerd[1597]: time="2025-09-12T17:46:24.554105502Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 558.49624ms" Sep 12 17:46:24.554175 containerd[1597]: time="2025-09-12T17:46:24.554134216Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 17:46:24.554622 containerd[1597]: time="2025-09-12T17:46:24.554599709Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 12 17:46:25.120896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1192064134.mount: Deactivated successfully. Sep 12 17:46:27.298226 containerd[1597]: time="2025-09-12T17:46:27.298146428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:27.298866 containerd[1597]: time="2025-09-12T17:46:27.298817316Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Sep 12 17:46:27.300036 containerd[1597]: time="2025-09-12T17:46:27.299993571Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:27.302558 containerd[1597]: time="2025-09-12T17:46:27.302503347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:27.303708 containerd[1597]: time="2025-09-12T17:46:27.303637754Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag 
\"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.749010684s" Sep 12 17:46:27.303708 containerd[1597]: time="2025-09-12T17:46:27.303691465Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 12 17:46:30.364887 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:46:30.365081 systemd[1]: kubelet.service: Consumed 535ms CPU time, 110.9M memory peak. Sep 12 17:46:30.368374 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:46:30.402230 systemd[1]: Reload requested from client PID 2286 ('systemctl') (unit session-7.scope)... Sep 12 17:46:30.402302 systemd[1]: Reloading... Sep 12 17:46:30.507775 zram_generator::config[2332]: No configuration found. Sep 12 17:46:30.979638 systemd[1]: Reloading finished in 576 ms. Sep 12 17:46:31.074960 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 17:46:31.075118 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 17:46:31.075555 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:46:31.075623 systemd[1]: kubelet.service: Consumed 219ms CPU time, 98.2M memory peak. Sep 12 17:46:31.077940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:46:31.278421 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:46:31.296258 (kubelet)[2377]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:46:31.342754 kubelet[2377]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:46:31.342754 kubelet[2377]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:46:31.342754 kubelet[2377]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:46:31.342754 kubelet[2377]: I0912 17:46:31.342022 2377 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:46:31.537058 kubelet[2377]: I0912 17:46:31.536901 2377 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 17:46:31.537058 kubelet[2377]: I0912 17:46:31.536945 2377 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:46:31.537260 kubelet[2377]: I0912 17:46:31.537231 2377 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 17:46:31.558088 kubelet[2377]: E0912 17:46:31.558035 2377 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.93:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 17:46:31.558547 kubelet[2377]: I0912 17:46:31.558524 2377 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:46:31.569196 kubelet[2377]: I0912 17:46:31.569150 2377 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 17:46:31.575292 kubelet[2377]: I0912 17:46:31.575255 2377 
server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 17:46:31.575580 kubelet[2377]: I0912 17:46:31.575533 2377 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:46:31.575809 kubelet[2377]: I0912 17:46:31.575565 2377 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:46:31.575987 kubelet[2377]: I0912 17:46:31.575819 2377 
topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:46:31.575987 kubelet[2377]: I0912 17:46:31.575830 2377 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 17:46:31.576827 kubelet[2377]: I0912 17:46:31.576787 2377 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:46:31.578912 kubelet[2377]: I0912 17:46:31.578870 2377 kubelet.go:480] "Attempting to sync node with API server" Sep 12 17:46:31.578912 kubelet[2377]: I0912 17:46:31.578911 2377 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:46:31.580239 kubelet[2377]: I0912 17:46:31.580213 2377 kubelet.go:386] "Adding apiserver pod source" Sep 12 17:46:31.582363 kubelet[2377]: I0912 17:46:31.582108 2377 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:46:31.585611 kubelet[2377]: I0912 17:46:31.585568 2377 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 17:46:31.586271 kubelet[2377]: I0912 17:46:31.586244 2377 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 17:46:31.586421 kubelet[2377]: E0912 17:46:31.586240 2377 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 17:46:31.586928 kubelet[2377]: E0912 17:46:31.586883 2377 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 17:46:31.587019 kubelet[2377]: W0912 17:46:31.586998 2377 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 17:46:31.590210 kubelet[2377]: I0912 17:46:31.590184 2377 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:46:31.590265 kubelet[2377]: I0912 17:46:31.590251 2377 server.go:1289] "Started kubelet" Sep 12 17:46:31.590429 kubelet[2377]: I0912 17:46:31.590368 2377 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:46:31.591970 kubelet[2377]: I0912 17:46:31.591953 2377 server.go:317] "Adding debug handlers to kubelet server" Sep 12 17:46:31.595550 kubelet[2377]: I0912 17:46:31.595470 2377 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:46:31.596401 kubelet[2377]: I0912 17:46:31.596221 2377 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:46:31.597195 kubelet[2377]: E0912 17:46:31.597163 2377 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:46:31.598223 kubelet[2377]: I0912 17:46:31.598204 2377 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:46:31.598395 kubelet[2377]: E0912 17:46:31.597198 2377 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.93:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.93:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18649a1a20b9bd32 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 17:46:31.59020677 +0000 UTC m=+0.287266255,LastTimestamp:2025-09-12 17:46:31.59020677 +0000 UTC m=+0.287266255,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 17:46:31.598618 kubelet[2377]: I0912 17:46:31.598589 2377 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:46:31.599275 kubelet[2377]: I0912 17:46:31.599258 2377 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:46:31.599471 kubelet[2377]: E0912 17:46:31.599453 2377 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:46:31.600116 kubelet[2377]: I0912 17:46:31.600100 2377 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:46:31.600228 kubelet[2377]: I0912 17:46:31.600198 2377 factory.go:223] Registration of the systemd container factory successfully Sep 12 17:46:31.600368 kubelet[2377]: I0912 17:46:31.600321 2377 factory.go:221] Registration of the 
crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:46:31.600518 kubelet[2377]: I0912 17:46:31.600400 2377 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:46:31.601206 kubelet[2377]: E0912 17:46:31.601165 2377 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 17:46:31.601370 kubelet[2377]: E0912 17:46:31.601301 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="200ms" Sep 12 17:46:31.603268 kubelet[2377]: I0912 17:46:31.603238 2377 factory.go:223] Registration of the containerd container factory successfully Sep 12 17:46:31.617239 kubelet[2377]: I0912 17:46:31.617193 2377 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:46:31.617239 kubelet[2377]: I0912 17:46:31.617223 2377 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:46:31.617239 kubelet[2377]: I0912 17:46:31.617247 2377 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:46:31.700661 kubelet[2377]: E0912 17:46:31.700585 2377 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:46:31.801763 kubelet[2377]: E0912 17:46:31.800679 2377 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:46:31.802232 kubelet[2377]: E0912 17:46:31.802195 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="400ms" Sep 12 17:46:31.901438 kubelet[2377]: E0912 17:46:31.901367 2377 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:46:32.001895 kubelet[2377]: E0912 17:46:32.001810 2377 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:46:32.067099 kubelet[2377]: I0912 17:46:32.066998 2377 policy_none.go:49] "None policy: Start" Sep 12 17:46:32.067099 kubelet[2377]: I0912 17:46:32.067050 2377 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:46:32.067099 kubelet[2377]: I0912 17:46:32.067074 2377 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:46:32.072755 kubelet[2377]: I0912 17:46:32.072681 2377 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 17:46:32.074394 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 17:46:32.075547 kubelet[2377]: I0912 17:46:32.075518 2377 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 17:46:32.075608 kubelet[2377]: I0912 17:46:32.075557 2377 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 17:46:32.075608 kubelet[2377]: I0912 17:46:32.075581 2377 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 12 17:46:32.075608 kubelet[2377]: I0912 17:46:32.075591 2377 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 17:46:32.075686 kubelet[2377]: E0912 17:46:32.075636 2377 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:46:32.076417 kubelet[2377]: E0912 17:46:32.076391 2377 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 17:46:32.083159 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 17:46:32.087147 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:46:32.102058 kubelet[2377]: E0912 17:46:32.101969 2377 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:46:32.106038 kubelet[2377]: E0912 17:46:32.105941 2377 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 17:46:32.106350 kubelet[2377]: I0912 17:46:32.106227 2377 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:46:32.106350 kubelet[2377]: I0912 17:46:32.106246 2377 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:46:32.106545 kubelet[2377]: I0912 17:46:32.106518 2377 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:46:32.107406 kubelet[2377]: E0912 17:46:32.107376 2377 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 17:46:32.107473 kubelet[2377]: E0912 17:46:32.107434 2377 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 17:46:32.189231 systemd[1]: Created slice kubepods-burstable-pod28737af9f28b9523d51c0aa905ce175f.slice - libcontainer container kubepods-burstable-pod28737af9f28b9523d51c0aa905ce175f.slice. Sep 12 17:46:32.202278 kubelet[2377]: I0912 17:46:32.202221 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:46:32.202278 kubelet[2377]: I0912 17:46:32.202266 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/28737af9f28b9523d51c0aa905ce175f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"28737af9f28b9523d51c0aa905ce175f\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:46:32.202422 kubelet[2377]: I0912 17:46:32.202288 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:46:32.202422 kubelet[2377]: I0912 17:46:32.202342 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 12 17:46:32.202422 kubelet[2377]: I0912 17:46:32.202374 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:46:32.202422 kubelet[2377]: I0912 17:46:32.202404 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:46:32.202422 kubelet[2377]: I0912 17:46:32.202422 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/28737af9f28b9523d51c0aa905ce175f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"28737af9f28b9523d51c0aa905ce175f\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:46:32.202580 kubelet[2377]: I0912 17:46:32.202441 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/28737af9f28b9523d51c0aa905ce175f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"28737af9f28b9523d51c0aa905ce175f\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:46:32.202580 kubelet[2377]: I0912 17:46:32.202459 2377 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:46:32.202887 kubelet[2377]: E0912 17:46:32.202830 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="800ms" Sep 12 17:46:32.205950 kubelet[2377]: E0912 17:46:32.205927 2377 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:46:32.208594 kubelet[2377]: I0912 17:46:32.208566 2377 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:46:32.209071 kubelet[2377]: E0912 17:46:32.209027 2377 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost" Sep 12 17:46:32.209834 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. Sep 12 17:46:32.211986 kubelet[2377]: E0912 17:46:32.211946 2377 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:46:32.214712 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. 
Sep 12 17:46:32.216558 kubelet[2377]: E0912 17:46:32.216524 2377 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:46:32.410671 kubelet[2377]: I0912 17:46:32.410624 2377 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:46:32.411172 kubelet[2377]: E0912 17:46:32.411045 2377 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost" Sep 12 17:46:32.506674 kubelet[2377]: E0912 17:46:32.506638 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:32.507323 containerd[1597]: time="2025-09-12T17:46:32.507270404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:28737af9f28b9523d51c0aa905ce175f,Namespace:kube-system,Attempt:0,}" Sep 12 17:46:32.512507 kubelet[2377]: E0912 17:46:32.512472 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:32.512886 containerd[1597]: time="2025-09-12T17:46:32.512833946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}" Sep 12 17:46:32.517066 kubelet[2377]: E0912 17:46:32.517043 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:32.517317 containerd[1597]: time="2025-09-12T17:46:32.517285492Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}" Sep 12 17:46:32.693280 kubelet[2377]: E0912 17:46:32.693184 2377 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 17:46:32.729396 containerd[1597]: time="2025-09-12T17:46:32.729330126Z" level=info msg="connecting to shim c6df4ea39709bff6da256a3f964ec4614e5db0615e0522602276177fed494db7" address="unix:///run/containerd/s/ca77025f7f86b05483736602ecdaaa0ecc2bcde2673d6c57d4eb42146e092468" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:46:32.730304 containerd[1597]: time="2025-09-12T17:46:32.730266702Z" level=info msg="connecting to shim 68015fab745269198a69c72bc6c82709c1e30c0100c61627786e8c19ca074627" address="unix:///run/containerd/s/47e84bd2d9c4333e0b7fc1296becc76861130d8c4e14c2b9478fde1d35426a44" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:46:32.735282 containerd[1597]: time="2025-09-12T17:46:32.734387368Z" level=info msg="connecting to shim c965ee3e12e4f292aa8b5506e700a6545c2716d98b212501265c2ab0227c960a" address="unix:///run/containerd/s/c915fb228d2e19eb0780068ba097211b05a5ced6553c5255afdec3b98a70738e" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:46:32.757895 systemd[1]: Started cri-containerd-c6df4ea39709bff6da256a3f964ec4614e5db0615e0522602276177fed494db7.scope - libcontainer container c6df4ea39709bff6da256a3f964ec4614e5db0615e0522602276177fed494db7. Sep 12 17:46:32.762094 systemd[1]: Started cri-containerd-68015fab745269198a69c72bc6c82709c1e30c0100c61627786e8c19ca074627.scope - libcontainer container 68015fab745269198a69c72bc6c82709c1e30c0100c61627786e8c19ca074627. 
Sep 12 17:46:32.766848 systemd[1]: Started cri-containerd-c965ee3e12e4f292aa8b5506e700a6545c2716d98b212501265c2ab0227c960a.scope - libcontainer container c965ee3e12e4f292aa8b5506e700a6545c2716d98b212501265c2ab0227c960a. Sep 12 17:46:32.813291 kubelet[2377]: I0912 17:46:32.813250 2377 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:46:32.813633 kubelet[2377]: E0912 17:46:32.813610 2377 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost" Sep 12 17:46:32.814511 containerd[1597]: time="2025-09-12T17:46:32.814466416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:28737af9f28b9523d51c0aa905ce175f,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6df4ea39709bff6da256a3f964ec4614e5db0615e0522602276177fed494db7\"" Sep 12 17:46:32.815740 kubelet[2377]: E0912 17:46:32.815695 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:32.822431 containerd[1597]: time="2025-09-12T17:46:32.821811709Z" level=info msg="CreateContainer within sandbox \"c6df4ea39709bff6da256a3f964ec4614e5db0615e0522602276177fed494db7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:46:32.822629 containerd[1597]: time="2025-09-12T17:46:32.822594416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"68015fab745269198a69c72bc6c82709c1e30c0100c61627786e8c19ca074627\"" Sep 12 17:46:32.823190 kubelet[2377]: E0912 17:46:32.823154 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Sep 12 17:46:32.827002 containerd[1597]: time="2025-09-12T17:46:32.826963348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c965ee3e12e4f292aa8b5506e700a6545c2716d98b212501265c2ab0227c960a\"" Sep 12 17:46:32.827215 containerd[1597]: time="2025-09-12T17:46:32.827195824Z" level=info msg="CreateContainer within sandbox \"68015fab745269198a69c72bc6c82709c1e30c0100c61627786e8c19ca074627\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:46:32.828607 kubelet[2377]: E0912 17:46:32.828580 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:32.836290 containerd[1597]: time="2025-09-12T17:46:32.836254980Z" level=info msg="Container 47ea07c574a3586316603321f5ab643ff75af7e74f1557ce9d83e3a3453e3db1: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:46:32.837366 containerd[1597]: time="2025-09-12T17:46:32.837334454Z" level=info msg="CreateContainer within sandbox \"c965ee3e12e4f292aa8b5506e700a6545c2716d98b212501265c2ab0227c960a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:46:32.846559 containerd[1597]: time="2025-09-12T17:46:32.846523053Z" level=info msg="Container 3f2f3567c2eb0af2b4e0120bfadfd2ce2aaf82ad666fe55883e2c17f157704f5: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:46:32.850035 containerd[1597]: time="2025-09-12T17:46:32.850006434Z" level=info msg="CreateContainer within sandbox \"68015fab745269198a69c72bc6c82709c1e30c0100c61627786e8c19ca074627\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"47ea07c574a3586316603321f5ab643ff75af7e74f1557ce9d83e3a3453e3db1\"" Sep 12 17:46:32.850740 containerd[1597]: time="2025-09-12T17:46:32.850687641Z" level=info msg="StartContainer for 
\"47ea07c574a3586316603321f5ab643ff75af7e74f1557ce9d83e3a3453e3db1\"" Sep 12 17:46:32.851894 containerd[1597]: time="2025-09-12T17:46:32.851852285Z" level=info msg="connecting to shim 47ea07c574a3586316603321f5ab643ff75af7e74f1557ce9d83e3a3453e3db1" address="unix:///run/containerd/s/47e84bd2d9c4333e0b7fc1296becc76861130d8c4e14c2b9478fde1d35426a44" protocol=ttrpc version=3 Sep 12 17:46:32.853566 containerd[1597]: time="2025-09-12T17:46:32.853540150Z" level=info msg="Container 6a9767a7fd1447f474d04ce650304dd38d16ba9996d017471e4b76cace97b76b: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:46:32.858773 containerd[1597]: time="2025-09-12T17:46:32.858722296Z" level=info msg="CreateContainer within sandbox \"c6df4ea39709bff6da256a3f964ec4614e5db0615e0522602276177fed494db7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3f2f3567c2eb0af2b4e0120bfadfd2ce2aaf82ad666fe55883e2c17f157704f5\"" Sep 12 17:46:32.859269 containerd[1597]: time="2025-09-12T17:46:32.859096247Z" level=info msg="StartContainer for \"3f2f3567c2eb0af2b4e0120bfadfd2ce2aaf82ad666fe55883e2c17f157704f5\"" Sep 12 17:46:32.860364 containerd[1597]: time="2025-09-12T17:46:32.860329820Z" level=info msg="connecting to shim 3f2f3567c2eb0af2b4e0120bfadfd2ce2aaf82ad666fe55883e2c17f157704f5" address="unix:///run/containerd/s/ca77025f7f86b05483736602ecdaaa0ecc2bcde2673d6c57d4eb42146e092468" protocol=ttrpc version=3 Sep 12 17:46:32.863749 containerd[1597]: time="2025-09-12T17:46:32.863412661Z" level=info msg="CreateContainer within sandbox \"c965ee3e12e4f292aa8b5506e700a6545c2716d98b212501265c2ab0227c960a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6a9767a7fd1447f474d04ce650304dd38d16ba9996d017471e4b76cace97b76b\"" Sep 12 17:46:32.863884 containerd[1597]: time="2025-09-12T17:46:32.863842967Z" level=info msg="StartContainer for \"6a9767a7fd1447f474d04ce650304dd38d16ba9996d017471e4b76cace97b76b\"" Sep 12 17:46:32.864823 containerd[1597]: 
time="2025-09-12T17:46:32.864782780Z" level=info msg="connecting to shim 6a9767a7fd1447f474d04ce650304dd38d16ba9996d017471e4b76cace97b76b" address="unix:///run/containerd/s/c915fb228d2e19eb0780068ba097211b05a5ced6553c5255afdec3b98a70738e" protocol=ttrpc version=3 Sep 12 17:46:32.871886 systemd[1]: Started cri-containerd-47ea07c574a3586316603321f5ab643ff75af7e74f1557ce9d83e3a3453e3db1.scope - libcontainer container 47ea07c574a3586316603321f5ab643ff75af7e74f1557ce9d83e3a3453e3db1. Sep 12 17:46:32.888862 systemd[1]: Started cri-containerd-3f2f3567c2eb0af2b4e0120bfadfd2ce2aaf82ad666fe55883e2c17f157704f5.scope - libcontainer container 3f2f3567c2eb0af2b4e0120bfadfd2ce2aaf82ad666fe55883e2c17f157704f5. Sep 12 17:46:32.890018 systemd[1]: Started cri-containerd-6a9767a7fd1447f474d04ce650304dd38d16ba9996d017471e4b76cace97b76b.scope - libcontainer container 6a9767a7fd1447f474d04ce650304dd38d16ba9996d017471e4b76cace97b76b. Sep 12 17:46:32.943057 containerd[1597]: time="2025-09-12T17:46:32.942997332Z" level=info msg="StartContainer for \"47ea07c574a3586316603321f5ab643ff75af7e74f1557ce9d83e3a3453e3db1\" returns successfully" Sep 12 17:46:32.946307 containerd[1597]: time="2025-09-12T17:46:32.945703155Z" level=info msg="StartContainer for \"6a9767a7fd1447f474d04ce650304dd38d16ba9996d017471e4b76cace97b76b\" returns successfully" Sep 12 17:46:32.957928 containerd[1597]: time="2025-09-12T17:46:32.957826967Z" level=info msg="StartContainer for \"3f2f3567c2eb0af2b4e0120bfadfd2ce2aaf82ad666fe55883e2c17f157704f5\" returns successfully" Sep 12 17:46:33.003398 kubelet[2377]: E0912 17:46:33.003344 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="1.6s" Sep 12 17:46:33.083429 kubelet[2377]: E0912 17:46:33.083388 2377 kubelet.go:3305] "No need to create a mirror pod, since failed to get node 
info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:46:33.083608 kubelet[2377]: E0912 17:46:33.083506 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:33.085830 kubelet[2377]: E0912 17:46:33.085808 2377 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:46:33.085956 kubelet[2377]: E0912 17:46:33.085936 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:33.088124 kubelet[2377]: E0912 17:46:33.088100 2377 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:46:33.088209 kubelet[2377]: E0912 17:46:33.088187 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:33.615549 kubelet[2377]: I0912 17:46:33.615372 2377 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:46:33.989300 kubelet[2377]: I0912 17:46:33.989252 2377 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 17:46:33.989300 kubelet[2377]: E0912 17:46:33.989287 2377 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 12 17:46:34.000154 kubelet[2377]: I0912 17:46:34.000123 2377 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:46:34.004627 kubelet[2377]: E0912 17:46:34.004561 2377 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 12 17:46:34.004627 kubelet[2377]: I0912 17:46:34.004622 2377 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:46:34.005765 kubelet[2377]: E0912 17:46:34.005741 2377 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 12 17:46:34.005765 kubelet[2377]: I0912 17:46:34.005759 2377 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:46:34.007237 kubelet[2377]: E0912 17:46:34.007207 2377 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:46:34.089842 kubelet[2377]: I0912 17:46:34.089699 2377 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:46:34.090611 kubelet[2377]: I0912 17:46:34.090048 2377 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:46:34.090611 kubelet[2377]: I0912 17:46:34.090205 2377 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:46:34.092024 kubelet[2377]: E0912 17:46:34.091992 2377 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 12 17:46:34.092165 kubelet[2377]: E0912 17:46:34.092141 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:34.092460 kubelet[2377]: E0912 17:46:34.092440 2377 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 12 17:46:34.092602 kubelet[2377]: E0912 17:46:34.092550 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:34.092634 kubelet[2377]: E0912 17:46:34.092610 2377 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:46:34.092707 kubelet[2377]: E0912 17:46:34.092685 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:34.586953 kubelet[2377]: I0912 17:46:34.586902 2377 apiserver.go:52] "Watching apiserver" Sep 12 17:46:34.601196 kubelet[2377]: I0912 17:46:34.601139 2377 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:46:35.092689 kubelet[2377]: I0912 17:46:35.092645 2377 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:46:35.098166 kubelet[2377]: E0912 17:46:35.098143 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:36.093232 kubelet[2377]: E0912 17:46:36.093183 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:36.223901 
systemd[1]: Reload requested from client PID 2666 ('systemctl') (unit session-7.scope)... Sep 12 17:46:36.223917 systemd[1]: Reloading... Sep 12 17:46:36.306758 zram_generator::config[2712]: No configuration found. Sep 12 17:46:36.532207 systemd[1]: Reloading finished in 307 ms. Sep 12 17:46:36.563376 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:46:36.588291 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:46:36.588620 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:46:36.588679 systemd[1]: kubelet.service: Consumed 828ms CPU time, 131.7M memory peak. Sep 12 17:46:36.590701 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:46:36.807788 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:46:36.818114 (kubelet)[2754]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:46:36.858450 kubelet[2754]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:46:36.858450 kubelet[2754]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:46:36.858450 kubelet[2754]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 17:46:36.858918 kubelet[2754]: I0912 17:46:36.858475 2754 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:46:36.864744 kubelet[2754]: I0912 17:46:36.864556 2754 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 17:46:36.864744 kubelet[2754]: I0912 17:46:36.864652 2754 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:46:36.865082 kubelet[2754]: I0912 17:46:36.865052 2754 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 17:46:36.866755 kubelet[2754]: I0912 17:46:36.866736 2754 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 12 17:46:36.868857 kubelet[2754]: I0912 17:46:36.868832 2754 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:46:36.872213 kubelet[2754]: I0912 17:46:36.872189 2754 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 17:46:36.876843 kubelet[2754]: I0912 17:46:36.876825 2754 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:46:36.877060 kubelet[2754]: I0912 17:46:36.877036 2754 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:46:36.877204 kubelet[2754]: I0912 17:46:36.877055 2754 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:46:36.877304 kubelet[2754]: I0912 17:46:36.877209 2754 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:46:36.877304 
kubelet[2754]: I0912 17:46:36.877217 2754 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 17:46:36.877304 kubelet[2754]: I0912 17:46:36.877261 2754 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:46:36.877428 kubelet[2754]: I0912 17:46:36.877415 2754 kubelet.go:480] "Attempting to sync node with API server" Sep 12 17:46:36.877474 kubelet[2754]: I0912 17:46:36.877429 2754 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:46:36.877474 kubelet[2754]: I0912 17:46:36.877449 2754 kubelet.go:386] "Adding apiserver pod source" Sep 12 17:46:36.877474 kubelet[2754]: I0912 17:46:36.877464 2754 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:46:36.879255 kubelet[2754]: I0912 17:46:36.879233 2754 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 17:46:36.879654 kubelet[2754]: I0912 17:46:36.879635 2754 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 17:46:36.882355 kubelet[2754]: I0912 17:46:36.882336 2754 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:46:36.882404 kubelet[2754]: I0912 17:46:36.882380 2754 server.go:1289] "Started kubelet" Sep 12 17:46:36.884321 kubelet[2754]: I0912 17:46:36.884301 2754 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:46:36.887056 kubelet[2754]: I0912 17:46:36.886675 2754 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:46:36.887947 kubelet[2754]: I0912 17:46:36.887900 2754 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:46:36.888915 kubelet[2754]: I0912 17:46:36.888898 2754 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:46:36.892579 
kubelet[2754]: I0912 17:46:36.892558 2754 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:46:36.893475 kubelet[2754]: I0912 17:46:36.893445 2754 server.go:317] "Adding debug handlers to kubelet server" Sep 12 17:46:36.894614 kubelet[2754]: I0912 17:46:36.894406 2754 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:46:36.894614 kubelet[2754]: E0912 17:46:36.894580 2754 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:46:36.896600 kubelet[2754]: I0912 17:46:36.896424 2754 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:46:36.896600 kubelet[2754]: I0912 17:46:36.896533 2754 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:46:36.896673 kubelet[2754]: I0912 17:46:36.896603 2754 factory.go:223] Registration of the systemd container factory successfully Sep 12 17:46:36.896838 kubelet[2754]: I0912 17:46:36.896701 2754 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:46:36.897743 kubelet[2754]: E0912 17:46:36.897346 2754 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:46:36.898138 kubelet[2754]: I0912 17:46:36.898104 2754 factory.go:223] Registration of the containerd container factory successfully Sep 12 17:46:36.904185 kubelet[2754]: I0912 17:46:36.904136 2754 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 17:46:36.905541 kubelet[2754]: I0912 17:46:36.905516 2754 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:46:36.905541 kubelet[2754]: I0912 17:46:36.905536 2754 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 17:46:36.905620 kubelet[2754]: I0912 17:46:36.905556 2754 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 17:46:36.905620 kubelet[2754]: I0912 17:46:36.905564 2754 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 17:46:36.905666 kubelet[2754]: E0912 17:46:36.905605 2754 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:46:36.934746 kubelet[2754]: I0912 17:46:36.934697 2754 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:46:36.934746 kubelet[2754]: I0912 17:46:36.934719 2754 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:46:36.934938 kubelet[2754]: I0912 17:46:36.934778 2754 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:46:36.935010 kubelet[2754]: I0912 17:46:36.934970 2754 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:46:36.935010 kubelet[2754]: I0912 17:46:36.935000 2754 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:46:36.935101 kubelet[2754]: I0912 17:46:36.935019 2754 policy_none.go:49] "None policy: Start" Sep 12 17:46:36.935101 kubelet[2754]: I0912 17:46:36.935039 2754 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:46:36.935101 kubelet[2754]: I0912 17:46:36.935052 2754 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:46:36.935167 kubelet[2754]: I0912 17:46:36.935143 2754 state_mem.go:75] "Updated machine memory state" Sep 12 17:46:36.939508 kubelet[2754]: E0912 17:46:36.939135 2754 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 17:46:36.939508 kubelet[2754]: I0912 
17:46:36.939293 2754 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:46:36.939508 kubelet[2754]: I0912 17:46:36.939303 2754 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:46:36.939508 kubelet[2754]: I0912 17:46:36.939460 2754 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:46:36.940129 kubelet[2754]: E0912 17:46:36.940101 2754 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 17:46:37.007296 kubelet[2754]: I0912 17:46:37.007245 2754 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:46:37.007592 kubelet[2754]: I0912 17:46:37.007488 2754 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:46:37.007711 kubelet[2754]: I0912 17:46:37.007695 2754 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:46:37.012775 kubelet[2754]: E0912 17:46:37.012714 2754 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 17:46:37.044267 kubelet[2754]: I0912 17:46:37.044241 2754 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:46:37.049377 kubelet[2754]: I0912 17:46:37.049355 2754 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 12 17:46:37.049443 kubelet[2754]: I0912 17:46:37.049424 2754 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 17:46:37.097953 kubelet[2754]: I0912 17:46:37.097834 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/28737af9f28b9523d51c0aa905ce175f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"28737af9f28b9523d51c0aa905ce175f\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:46:37.097953 kubelet[2754]: I0912 17:46:37.097864 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:46:37.097953 kubelet[2754]: I0912 17:46:37.097883 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:46:37.097953 kubelet[2754]: I0912 17:46:37.097897 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:46:37.097953 kubelet[2754]: I0912 17:46:37.097911 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/28737af9f28b9523d51c0aa905ce175f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"28737af9f28b9523d51c0aa905ce175f\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:46:37.098183 kubelet[2754]: I0912 17:46:37.097924 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:46:37.098183 kubelet[2754]: I0912 17:46:37.097939 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:46:37.098183 kubelet[2754]: I0912 17:46:37.097957 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:46:37.098183 kubelet[2754]: I0912 17:46:37.097971 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/28737af9f28b9523d51c0aa905ce175f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"28737af9f28b9523d51c0aa905ce175f\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:46:37.311862 kubelet[2754]: E0912 17:46:37.311819 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:37.312856 kubelet[2754]: E0912 17:46:37.312786 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:37.313211 kubelet[2754]: E0912 17:46:37.313150 2754 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:37.878362 kubelet[2754]: I0912 17:46:37.878301 2754 apiserver.go:52] "Watching apiserver" Sep 12 17:46:37.897409 kubelet[2754]: I0912 17:46:37.897385 2754 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:46:37.919189 kubelet[2754]: E0912 17:46:37.918865 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:37.919189 kubelet[2754]: I0912 17:46:37.918912 2754 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:46:37.919189 kubelet[2754]: E0912 17:46:37.918924 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:37.923645 kubelet[2754]: E0912 17:46:37.923614 2754 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 17:46:37.923957 kubelet[2754]: E0912 17:46:37.923928 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:37.939945 kubelet[2754]: I0912 17:46:37.939860 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.939833989 podStartE2EDuration="939.833989ms" podCreationTimestamp="2025-09-12 17:46:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:46:37.933883803 +0000 UTC m=+1.111506776" 
watchObservedRunningTime="2025-09-12 17:46:37.939833989 +0000 UTC m=+1.117456962" Sep 12 17:46:37.948922 kubelet[2754]: I0912 17:46:37.948861 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.948840316 podStartE2EDuration="948.840316ms" podCreationTimestamp="2025-09-12 17:46:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:46:37.940355116 +0000 UTC m=+1.117978089" watchObservedRunningTime="2025-09-12 17:46:37.948840316 +0000 UTC m=+1.126463289" Sep 12 17:46:37.955560 kubelet[2754]: I0912 17:46:37.955487 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.955470177 podStartE2EDuration="2.955470177s" podCreationTimestamp="2025-09-12 17:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:46:37.948968547 +0000 UTC m=+1.126591520" watchObservedRunningTime="2025-09-12 17:46:37.955470177 +0000 UTC m=+1.133093140" Sep 12 17:46:38.921445 kubelet[2754]: E0912 17:46:38.921376 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:38.922688 kubelet[2754]: E0912 17:46:38.921873 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:43.193851 kubelet[2754]: I0912 17:46:43.193807 2754 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:46:43.194389 kubelet[2754]: I0912 17:46:43.194336 2754 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 
12 17:46:43.194426 containerd[1597]: time="2025-09-12T17:46:43.194125201Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:46:43.248822 kubelet[2754]: E0912 17:46:43.248755 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:43.927056 kubelet[2754]: E0912 17:46:43.927019 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:44.024393 systemd[1]: Created slice kubepods-besteffort-podfff22d7c_ad96_40e9_8e8e_02942a8e1d2f.slice - libcontainer container kubepods-besteffort-podfff22d7c_ad96_40e9_8e8e_02942a8e1d2f.slice. Sep 12 17:46:44.041595 kubelet[2754]: I0912 17:46:44.041558 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fff22d7c-ad96-40e9-8e8e-02942a8e1d2f-kube-proxy\") pod \"kube-proxy-h25wj\" (UID: \"fff22d7c-ad96-40e9-8e8e-02942a8e1d2f\") " pod="kube-system/kube-proxy-h25wj" Sep 12 17:46:44.041595 kubelet[2754]: I0912 17:46:44.041594 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fff22d7c-ad96-40e9-8e8e-02942a8e1d2f-xtables-lock\") pod \"kube-proxy-h25wj\" (UID: \"fff22d7c-ad96-40e9-8e8e-02942a8e1d2f\") " pod="kube-system/kube-proxy-h25wj" Sep 12 17:46:44.041595 kubelet[2754]: I0912 17:46:44.041611 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fff22d7c-ad96-40e9-8e8e-02942a8e1d2f-lib-modules\") pod \"kube-proxy-h25wj\" (UID: \"fff22d7c-ad96-40e9-8e8e-02942a8e1d2f\") " pod="kube-system/kube-proxy-h25wj" Sep 
12 17:46:44.041847 kubelet[2754]: I0912 17:46:44.041625 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbcdj\" (UniqueName: \"kubernetes.io/projected/fff22d7c-ad96-40e9-8e8e-02942a8e1d2f-kube-api-access-dbcdj\") pod \"kube-proxy-h25wj\" (UID: \"fff22d7c-ad96-40e9-8e8e-02942a8e1d2f\") " pod="kube-system/kube-proxy-h25wj" Sep 12 17:46:44.338535 systemd[1]: Created slice kubepods-besteffort-pode1191ece_d4c8_4b7c_a184_c3d7d5145f69.slice - libcontainer container kubepods-besteffort-pode1191ece_d4c8_4b7c_a184_c3d7d5145f69.slice. Sep 12 17:46:44.342946 kubelet[2754]: I0912 17:46:44.342895 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4bkj\" (UniqueName: \"kubernetes.io/projected/e1191ece-d4c8-4b7c-a184-c3d7d5145f69-kube-api-access-m4bkj\") pod \"tigera-operator-755d956888-jlsxc\" (UID: \"e1191ece-d4c8-4b7c-a184-c3d7d5145f69\") " pod="tigera-operator/tigera-operator-755d956888-jlsxc" Sep 12 17:46:44.342946 kubelet[2754]: I0912 17:46:44.342938 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e1191ece-d4c8-4b7c-a184-c3d7d5145f69-var-lib-calico\") pod \"tigera-operator-755d956888-jlsxc\" (UID: \"e1191ece-d4c8-4b7c-a184-c3d7d5145f69\") " pod="tigera-operator/tigera-operator-755d956888-jlsxc" Sep 12 17:46:44.343376 kubelet[2754]: E0912 17:46:44.343103 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:44.343717 containerd[1597]: time="2025-09-12T17:46:44.343677698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h25wj,Uid:fff22d7c-ad96-40e9-8e8e-02942a8e1d2f,Namespace:kube-system,Attempt:0,}" Sep 12 17:46:44.365404 containerd[1597]: time="2025-09-12T17:46:44.365346154Z" 
level=info msg="connecting to shim 7899ba4110eafc1a936890a527942b88f84f418702e0a8b715f4f467bc2a23d3" address="unix:///run/containerd/s/51ce4322c16ffcc97bf33c7b5681590e7daa43c7a48a5446cf4f25c879d29f25" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:46:44.397895 systemd[1]: Started cri-containerd-7899ba4110eafc1a936890a527942b88f84f418702e0a8b715f4f467bc2a23d3.scope - libcontainer container 7899ba4110eafc1a936890a527942b88f84f418702e0a8b715f4f467bc2a23d3. Sep 12 17:46:44.426553 containerd[1597]: time="2025-09-12T17:46:44.426509138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h25wj,Uid:fff22d7c-ad96-40e9-8e8e-02942a8e1d2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"7899ba4110eafc1a936890a527942b88f84f418702e0a8b715f4f467bc2a23d3\"" Sep 12 17:46:44.427032 kubelet[2754]: E0912 17:46:44.427009 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:44.431874 containerd[1597]: time="2025-09-12T17:46:44.431837012Z" level=info msg="CreateContainer within sandbox \"7899ba4110eafc1a936890a527942b88f84f418702e0a8b715f4f467bc2a23d3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:46:44.448169 containerd[1597]: time="2025-09-12T17:46:44.447498038Z" level=info msg="Container 73301215c2f7d5cca4078ad138641299f68c79481d26e157c4fca54c4f11d249: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:46:44.457307 containerd[1597]: time="2025-09-12T17:46:44.457249282Z" level=info msg="CreateContainer within sandbox \"7899ba4110eafc1a936890a527942b88f84f418702e0a8b715f4f467bc2a23d3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"73301215c2f7d5cca4078ad138641299f68c79481d26e157c4fca54c4f11d249\"" Sep 12 17:46:44.457975 containerd[1597]: time="2025-09-12T17:46:44.457927765Z" level=info msg="StartContainer for 
\"73301215c2f7d5cca4078ad138641299f68c79481d26e157c4fca54c4f11d249\"" Sep 12 17:46:44.459708 containerd[1597]: time="2025-09-12T17:46:44.459676138Z" level=info msg="connecting to shim 73301215c2f7d5cca4078ad138641299f68c79481d26e157c4fca54c4f11d249" address="unix:///run/containerd/s/51ce4322c16ffcc97bf33c7b5681590e7daa43c7a48a5446cf4f25c879d29f25" protocol=ttrpc version=3 Sep 12 17:46:44.481867 systemd[1]: Started cri-containerd-73301215c2f7d5cca4078ad138641299f68c79481d26e157c4fca54c4f11d249.scope - libcontainer container 73301215c2f7d5cca4078ad138641299f68c79481d26e157c4fca54c4f11d249. Sep 12 17:46:44.526813 containerd[1597]: time="2025-09-12T17:46:44.526617143Z" level=info msg="StartContainer for \"73301215c2f7d5cca4078ad138641299f68c79481d26e157c4fca54c4f11d249\" returns successfully" Sep 12 17:46:44.641919 containerd[1597]: time="2025-09-12T17:46:44.641863209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-jlsxc,Uid:e1191ece-d4c8-4b7c-a184-c3d7d5145f69,Namespace:tigera-operator,Attempt:0,}" Sep 12 17:46:44.659103 containerd[1597]: time="2025-09-12T17:46:44.659052758Z" level=info msg="connecting to shim 818eff00794b581cdc347de9f7493c7167ac0a27ffaf075c96d68a3dcbd127ce" address="unix:///run/containerd/s/b32c12c67f358d79af63da2a885d5f609925f76fdf5efa13a8c6c0c42c0ef004" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:46:44.685845 systemd[1]: Started cri-containerd-818eff00794b581cdc347de9f7493c7167ac0a27ffaf075c96d68a3dcbd127ce.scope - libcontainer container 818eff00794b581cdc347de9f7493c7167ac0a27ffaf075c96d68a3dcbd127ce. 
Sep 12 17:46:44.731130 containerd[1597]: time="2025-09-12T17:46:44.731067193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-jlsxc,Uid:e1191ece-d4c8-4b7c-a184-c3d7d5145f69,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"818eff00794b581cdc347de9f7493c7167ac0a27ffaf075c96d68a3dcbd127ce\"" Sep 12 17:46:44.732581 containerd[1597]: time="2025-09-12T17:46:44.732529309Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 17:46:44.932063 kubelet[2754]: E0912 17:46:44.931862 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:44.932702 kubelet[2754]: E0912 17:46:44.932634 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:45.025351 kubelet[2754]: E0912 17:46:45.025260 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:45.036429 kubelet[2754]: I0912 17:46:45.036369 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h25wj" podStartSLOduration=1.036328843 podStartE2EDuration="1.036328843s" podCreationTimestamp="2025-09-12 17:46:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:46:44.94306125 +0000 UTC m=+8.120684223" watchObservedRunningTime="2025-09-12 17:46:45.036328843 +0000 UTC m=+8.213951816" Sep 12 17:46:45.933194 kubelet[2754]: E0912 17:46:45.933142 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 
17:46:46.219353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2013561790.mount: Deactivated successfully. Sep 12 17:46:46.550771 containerd[1597]: time="2025-09-12T17:46:46.550632788Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:46.551658 containerd[1597]: time="2025-09-12T17:46:46.551609847Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 12 17:46:46.552700 containerd[1597]: time="2025-09-12T17:46:46.552661587Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:46.554599 containerd[1597]: time="2025-09-12T17:46:46.554555910Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:46:46.555197 containerd[1597]: time="2025-09-12T17:46:46.555152876Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 1.822598248s" Sep 12 17:46:46.555230 containerd[1597]: time="2025-09-12T17:46:46.555193032Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 12 17:46:46.559651 containerd[1597]: time="2025-09-12T17:46:46.559594845Z" level=info msg="CreateContainer within sandbox \"818eff00794b581cdc347de9f7493c7167ac0a27ffaf075c96d68a3dcbd127ce\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 
17:46:46.566271 containerd[1597]: time="2025-09-12T17:46:46.566224996Z" level=info msg="Container 3db6d934321fa825761069654603063df25e9a65e680323532f2ff0a2241f87d: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:46:46.573529 containerd[1597]: time="2025-09-12T17:46:46.573489754Z" level=info msg="CreateContainer within sandbox \"818eff00794b581cdc347de9f7493c7167ac0a27ffaf075c96d68a3dcbd127ce\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3db6d934321fa825761069654603063df25e9a65e680323532f2ff0a2241f87d\"" Sep 12 17:46:46.573910 containerd[1597]: time="2025-09-12T17:46:46.573874846Z" level=info msg="StartContainer for \"3db6d934321fa825761069654603063df25e9a65e680323532f2ff0a2241f87d\"" Sep 12 17:46:46.574668 containerd[1597]: time="2025-09-12T17:46:46.574641605Z" level=info msg="connecting to shim 3db6d934321fa825761069654603063df25e9a65e680323532f2ff0a2241f87d" address="unix:///run/containerd/s/b32c12c67f358d79af63da2a885d5f609925f76fdf5efa13a8c6c0c42c0ef004" protocol=ttrpc version=3 Sep 12 17:46:46.640866 systemd[1]: Started cri-containerd-3db6d934321fa825761069654603063df25e9a65e680323532f2ff0a2241f87d.scope - libcontainer container 3db6d934321fa825761069654603063df25e9a65e680323532f2ff0a2241f87d. 
Sep 12 17:46:46.674257 containerd[1597]: time="2025-09-12T17:46:46.674212795Z" level=info msg="StartContainer for \"3db6d934321fa825761069654603063df25e9a65e680323532f2ff0a2241f87d\" returns successfully" Sep 12 17:46:46.944501 kubelet[2754]: I0912 17:46:46.944434 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-jlsxc" podStartSLOduration=1.120747756 podStartE2EDuration="2.944412824s" podCreationTimestamp="2025-09-12 17:46:44 +0000 UTC" firstStartedPulling="2025-09-12 17:46:44.732281477 +0000 UTC m=+7.909904450" lastFinishedPulling="2025-09-12 17:46:46.555946545 +0000 UTC m=+9.733569518" observedRunningTime="2025-09-12 17:46:46.944088256 +0000 UTC m=+10.121711249" watchObservedRunningTime="2025-09-12 17:46:46.944412824 +0000 UTC m=+10.122035807" Sep 12 17:46:47.657662 kubelet[2754]: E0912 17:46:47.657583 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:47.940836 kubelet[2754]: E0912 17:46:47.938640 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:48.669601 systemd[1]: cri-containerd-3db6d934321fa825761069654603063df25e9a65e680323532f2ff0a2241f87d.scope: Deactivated successfully. 
Sep 12 17:46:48.671200 containerd[1597]: time="2025-09-12T17:46:48.671153882Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3db6d934321fa825761069654603063df25e9a65e680323532f2ff0a2241f87d\" id:\"3db6d934321fa825761069654603063df25e9a65e680323532f2ff0a2241f87d\" pid:3084 exit_status:1 exited_at:{seconds:1757699208 nanos:670515640}" Sep 12 17:46:48.671519 containerd[1597]: time="2025-09-12T17:46:48.671199809Z" level=info msg="received exit event container_id:\"3db6d934321fa825761069654603063df25e9a65e680323532f2ff0a2241f87d\" id:\"3db6d934321fa825761069654603063df25e9a65e680323532f2ff0a2241f87d\" pid:3084 exit_status:1 exited_at:{seconds:1757699208 nanos:670515640}" Sep 12 17:46:48.702176 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3db6d934321fa825761069654603063df25e9a65e680323532f2ff0a2241f87d-rootfs.mount: Deactivated successfully. Sep 12 17:46:49.946441 kubelet[2754]: I0912 17:46:49.946389 2754 scope.go:117] "RemoveContainer" containerID="3db6d934321fa825761069654603063df25e9a65e680323532f2ff0a2241f87d" Sep 12 17:46:49.950948 containerd[1597]: time="2025-09-12T17:46:49.950902281Z" level=info msg="CreateContainer within sandbox \"818eff00794b581cdc347de9f7493c7167ac0a27ffaf075c96d68a3dcbd127ce\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Sep 12 17:46:49.973757 containerd[1597]: time="2025-09-12T17:46:49.972757788Z" level=info msg="Container 262b335d87d437965a0a357b731210e857ac0b1b5b7ad37e37083805bdcf6de8: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:46:49.987666 containerd[1597]: time="2025-09-12T17:46:49.987235716Z" level=info msg="CreateContainer within sandbox \"818eff00794b581cdc347de9f7493c7167ac0a27ffaf075c96d68a3dcbd127ce\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"262b335d87d437965a0a357b731210e857ac0b1b5b7ad37e37083805bdcf6de8\"" Sep 12 17:46:49.988749 containerd[1597]: time="2025-09-12T17:46:49.988173335Z" level=info msg="StartContainer for 
\"262b335d87d437965a0a357b731210e857ac0b1b5b7ad37e37083805bdcf6de8\"" Sep 12 17:46:49.989644 containerd[1597]: time="2025-09-12T17:46:49.989611133Z" level=info msg="connecting to shim 262b335d87d437965a0a357b731210e857ac0b1b5b7ad37e37083805bdcf6de8" address="unix:///run/containerd/s/b32c12c67f358d79af63da2a885d5f609925f76fdf5efa13a8c6c0c42c0ef004" protocol=ttrpc version=3 Sep 12 17:46:50.013880 systemd[1]: Started cri-containerd-262b335d87d437965a0a357b731210e857ac0b1b5b7ad37e37083805bdcf6de8.scope - libcontainer container 262b335d87d437965a0a357b731210e857ac0b1b5b7ad37e37083805bdcf6de8. Sep 12 17:46:50.047433 containerd[1597]: time="2025-09-12T17:46:50.047388611Z" level=info msg="StartContainer for \"262b335d87d437965a0a357b731210e857ac0b1b5b7ad37e37083805bdcf6de8\" returns successfully" Sep 12 17:46:51.718945 update_engine[1588]: I20250912 17:46:51.718828 1588 update_attempter.cc:509] Updating boot flags... Sep 12 17:46:52.365541 sudo[1812]: pam_unix(sudo:session): session closed for user root Sep 12 17:46:52.367637 sshd[1811]: Connection closed by 10.0.0.1 port 46914 Sep 12 17:46:52.368421 sshd-session[1808]: pam_unix(sshd:session): session closed for user core Sep 12 17:46:52.373141 systemd[1]: sshd@6-10.0.0.93:22-10.0.0.1:46914.service: Deactivated successfully. Sep 12 17:46:52.375746 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:46:52.375994 systemd[1]: session-7.scope: Consumed 5.935s CPU time, 222.4M memory peak. Sep 12 17:46:52.379198 systemd-logind[1585]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:46:52.380332 systemd-logind[1585]: Removed session 7. Sep 12 17:46:56.292931 systemd[1]: Created slice kubepods-besteffort-podde71dbb6_2176_49a1_9446_715e45a091b6.slice - libcontainer container kubepods-besteffort-podde71dbb6_2176_49a1_9446_715e45a091b6.slice. 
Sep 12 17:46:56.329023 kubelet[2754]: I0912 17:46:56.328932 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de71dbb6-2176-49a1-9446-715e45a091b6-tigera-ca-bundle\") pod \"calico-typha-6f49949f5f-jdxwh\" (UID: \"de71dbb6-2176-49a1-9446-715e45a091b6\") " pod="calico-system/calico-typha-6f49949f5f-jdxwh" Sep 12 17:46:56.329023 kubelet[2754]: I0912 17:46:56.328993 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vt88\" (UniqueName: \"kubernetes.io/projected/de71dbb6-2176-49a1-9446-715e45a091b6-kube-api-access-6vt88\") pod \"calico-typha-6f49949f5f-jdxwh\" (UID: \"de71dbb6-2176-49a1-9446-715e45a091b6\") " pod="calico-system/calico-typha-6f49949f5f-jdxwh" Sep 12 17:46:56.329023 kubelet[2754]: I0912 17:46:56.329011 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/de71dbb6-2176-49a1-9446-715e45a091b6-typha-certs\") pod \"calico-typha-6f49949f5f-jdxwh\" (UID: \"de71dbb6-2176-49a1-9446-715e45a091b6\") " pod="calico-system/calico-typha-6f49949f5f-jdxwh" Sep 12 17:46:56.629888 kubelet[2754]: E0912 17:46:56.629425 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:56.630478 containerd[1597]: time="2025-09-12T17:46:56.630036576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f49949f5f-jdxwh,Uid:de71dbb6-2176-49a1-9446-715e45a091b6,Namespace:calico-system,Attempt:0,}" Sep 12 17:46:56.911808 systemd[1]: Created slice kubepods-besteffort-podf524bacd_62a9_463a_9e98_d7c9b0458793.slice - libcontainer container kubepods-besteffort-podf524bacd_62a9_463a_9e98_d7c9b0458793.slice. 
Sep 12 17:46:56.969792 containerd[1597]: time="2025-09-12T17:46:56.969351558Z" level=info msg="connecting to shim 67bb327496a4fdf8efbaa5e33be6ff4225c88c604ce489f596f2c835b4ee20a3" address="unix:///run/containerd/s/604f4cb7bb24e867d6f423ab61320e8b163dbd9a49ce818a0dd26c45784ea61e" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:46:57.009896 systemd[1]: Started cri-containerd-67bb327496a4fdf8efbaa5e33be6ff4225c88c604ce489f596f2c835b4ee20a3.scope - libcontainer container 67bb327496a4fdf8efbaa5e33be6ff4225c88c604ce489f596f2c835b4ee20a3. Sep 12 17:46:57.033202 kubelet[2754]: I0912 17:46:57.033148 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f524bacd-62a9-463a-9e98-d7c9b0458793-var-lib-calico\") pod \"calico-node-h8lxt\" (UID: \"f524bacd-62a9-463a-9e98-d7c9b0458793\") " pod="calico-system/calico-node-h8lxt" Sep 12 17:46:57.033202 kubelet[2754]: I0912 17:46:57.033199 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f524bacd-62a9-463a-9e98-d7c9b0458793-flexvol-driver-host\") pod \"calico-node-h8lxt\" (UID: \"f524bacd-62a9-463a-9e98-d7c9b0458793\") " pod="calico-system/calico-node-h8lxt" Sep 12 17:46:57.033202 kubelet[2754]: I0912 17:46:57.033218 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f524bacd-62a9-463a-9e98-d7c9b0458793-var-run-calico\") pod \"calico-node-h8lxt\" (UID: \"f524bacd-62a9-463a-9e98-d7c9b0458793\") " pod="calico-system/calico-node-h8lxt" Sep 12 17:46:57.033438 kubelet[2754]: I0912 17:46:57.033241 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f524bacd-62a9-463a-9e98-d7c9b0458793-cni-bin-dir\") pod 
\"calico-node-h8lxt\" (UID: \"f524bacd-62a9-463a-9e98-d7c9b0458793\") " pod="calico-system/calico-node-h8lxt" Sep 12 17:46:57.033438 kubelet[2754]: I0912 17:46:57.033258 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f524bacd-62a9-463a-9e98-d7c9b0458793-cni-log-dir\") pod \"calico-node-h8lxt\" (UID: \"f524bacd-62a9-463a-9e98-d7c9b0458793\") " pod="calico-system/calico-node-h8lxt" Sep 12 17:46:57.033438 kubelet[2754]: I0912 17:46:57.033271 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f524bacd-62a9-463a-9e98-d7c9b0458793-lib-modules\") pod \"calico-node-h8lxt\" (UID: \"f524bacd-62a9-463a-9e98-d7c9b0458793\") " pod="calico-system/calico-node-h8lxt" Sep 12 17:46:57.033438 kubelet[2754]: I0912 17:46:57.033294 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f524bacd-62a9-463a-9e98-d7c9b0458793-policysync\") pod \"calico-node-h8lxt\" (UID: \"f524bacd-62a9-463a-9e98-d7c9b0458793\") " pod="calico-system/calico-node-h8lxt" Sep 12 17:46:57.033438 kubelet[2754]: I0912 17:46:57.033307 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f524bacd-62a9-463a-9e98-d7c9b0458793-cni-net-dir\") pod \"calico-node-h8lxt\" (UID: \"f524bacd-62a9-463a-9e98-d7c9b0458793\") " pod="calico-system/calico-node-h8lxt" Sep 12 17:46:57.034108 kubelet[2754]: I0912 17:46:57.033321 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f524bacd-62a9-463a-9e98-d7c9b0458793-node-certs\") pod \"calico-node-h8lxt\" (UID: \"f524bacd-62a9-463a-9e98-d7c9b0458793\") " 
pod="calico-system/calico-node-h8lxt" Sep 12 17:46:57.034108 kubelet[2754]: I0912 17:46:57.033335 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f524bacd-62a9-463a-9e98-d7c9b0458793-xtables-lock\") pod \"calico-node-h8lxt\" (UID: \"f524bacd-62a9-463a-9e98-d7c9b0458793\") " pod="calico-system/calico-node-h8lxt" Sep 12 17:46:57.034108 kubelet[2754]: I0912 17:46:57.033349 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t5bx\" (UniqueName: \"kubernetes.io/projected/f524bacd-62a9-463a-9e98-d7c9b0458793-kube-api-access-8t5bx\") pod \"calico-node-h8lxt\" (UID: \"f524bacd-62a9-463a-9e98-d7c9b0458793\") " pod="calico-system/calico-node-h8lxt" Sep 12 17:46:57.034108 kubelet[2754]: I0912 17:46:57.033364 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f524bacd-62a9-463a-9e98-d7c9b0458793-tigera-ca-bundle\") pod \"calico-node-h8lxt\" (UID: \"f524bacd-62a9-463a-9e98-d7c9b0458793\") " pod="calico-system/calico-node-h8lxt" Sep 12 17:46:57.136616 kubelet[2754]: E0912 17:46:57.136567 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.136616 kubelet[2754]: W0912 17:46:57.136601 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.609329 kubelet[2754]: E0912 17:46:57.609134 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:46:57.610752 kubelet[2754]: E0912 17:46:57.609580 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.610752 kubelet[2754]: W0912 17:46:57.609624 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.610752 kubelet[2754]: E0912 17:46:57.609648 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:46:57.610752 kubelet[2754]: E0912 17:46:57.609952 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.610752 kubelet[2754]: W0912 17:46:57.609966 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.610752 kubelet[2754]: E0912 17:46:57.609979 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:46:57.611270 kubelet[2754]: E0912 17:46:57.610856 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.611270 kubelet[2754]: W0912 17:46:57.610869 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.611270 kubelet[2754]: E0912 17:46:57.610882 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:46:57.612109 kubelet[2754]: E0912 17:46:57.612089 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.612109 kubelet[2754]: W0912 17:46:57.612104 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.612212 kubelet[2754]: E0912 17:46:57.612117 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:46:57.612694 kubelet[2754]: E0912 17:46:57.612655 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.612694 kubelet[2754]: W0912 17:46:57.612671 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.612694 kubelet[2754]: E0912 17:46:57.612685 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:46:57.613005 kubelet[2754]: E0912 17:46:57.612987 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.613005 kubelet[2754]: W0912 17:46:57.613001 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.613089 kubelet[2754]: E0912 17:46:57.613013 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:46:57.613323 kubelet[2754]: E0912 17:46:57.613305 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.613323 kubelet[2754]: W0912 17:46:57.613318 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.613411 kubelet[2754]: E0912 17:46:57.613329 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:46:57.613544 kubelet[2754]: E0912 17:46:57.613525 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.613544 kubelet[2754]: W0912 17:46:57.613536 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.613624 kubelet[2754]: E0912 17:46:57.613547 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:46:57.613786 kubelet[2754]: E0912 17:46:57.613767 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.613786 kubelet[2754]: W0912 17:46:57.613779 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.613878 kubelet[2754]: E0912 17:46:57.613789 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:46:57.613995 kubelet[2754]: E0912 17:46:57.613977 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.613995 kubelet[2754]: W0912 17:46:57.613988 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.614067 kubelet[2754]: E0912 17:46:57.613999 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:46:57.614187 kubelet[2754]: E0912 17:46:57.614171 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.614187 kubelet[2754]: W0912 17:46:57.614182 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.614267 kubelet[2754]: E0912 17:46:57.614193 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:46:57.614457 kubelet[2754]: E0912 17:46:57.614438 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.614457 kubelet[2754]: W0912 17:46:57.614451 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.614542 kubelet[2754]: E0912 17:46:57.614462 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:46:57.614683 kubelet[2754]: E0912 17:46:57.614665 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.614683 kubelet[2754]: W0912 17:46:57.614677 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.614796 kubelet[2754]: E0912 17:46:57.614687 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:46:57.614935 kubelet[2754]: E0912 17:46:57.614916 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.614935 kubelet[2754]: W0912 17:46:57.614930 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.615006 kubelet[2754]: E0912 17:46:57.614941 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:46:57.615162 kubelet[2754]: E0912 17:46:57.615137 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.615162 kubelet[2754]: W0912 17:46:57.615147 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.615162 kubelet[2754]: E0912 17:46:57.615155 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:46:57.615342 kubelet[2754]: E0912 17:46:57.615326 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.615342 kubelet[2754]: W0912 17:46:57.615336 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.615411 kubelet[2754]: E0912 17:46:57.615344 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:46:57.615530 kubelet[2754]: E0912 17:46:57.615514 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.615530 kubelet[2754]: W0912 17:46:57.615525 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.615606 kubelet[2754]: E0912 17:46:57.615536 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:46:57.615767 kubelet[2754]: E0912 17:46:57.615751 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.615767 kubelet[2754]: W0912 17:46:57.615760 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.615767 kubelet[2754]: E0912 17:46:57.615768 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:46:57.615970 kubelet[2754]: E0912 17:46:57.615955 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.615970 kubelet[2754]: W0912 17:46:57.615964 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.615970 kubelet[2754]: E0912 17:46:57.615972 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:46:57.616287 kubelet[2754]: E0912 17:46:57.616267 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.616287 kubelet[2754]: W0912 17:46:57.616280 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.616378 kubelet[2754]: E0912 17:46:57.616291 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:46:57.622462 kubelet[2754]: E0912 17:46:57.622425 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.622462 kubelet[2754]: W0912 17:46:57.622450 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.622573 kubelet[2754]: E0912 17:46:57.622475 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:46:57.709414 containerd[1597]: time="2025-09-12T17:46:57.709337818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f49949f5f-jdxwh,Uid:de71dbb6-2176-49a1-9446-715e45a091b6,Namespace:calico-system,Attempt:0,} returns sandbox id \"67bb327496a4fdf8efbaa5e33be6ff4225c88c604ce489f596f2c835b4ee20a3\"" Sep 12 17:46:57.710434 kubelet[2754]: E0912 17:46:57.710402 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:46:57.711376 containerd[1597]: time="2025-09-12T17:46:57.711349776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 17:46:57.735018 kubelet[2754]: E0912 17:46:57.734934 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9vzb4" podUID="9e0c15c0-7f27-4581-beb0-d93114983a4f" Sep 12 17:46:57.750481 kubelet[2754]: E0912 17:46:57.750429 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 
12 17:46:57.750481 kubelet[2754]: W0912 17:46:57.750461 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.750481 kubelet[2754]: E0912 17:46:57.750491 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:46:57.750812 kubelet[2754]: E0912 17:46:57.750762 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.750812 kubelet[2754]: W0912 17:46:57.750774 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.750812 kubelet[2754]: E0912 17:46:57.750786 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:46:57.751013 kubelet[2754]: E0912 17:46:57.750988 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.751013 kubelet[2754]: W0912 17:46:57.751001 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.751013 kubelet[2754]: E0912 17:46:57.751012 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:46:57.751268 kubelet[2754]: E0912 17:46:57.751249 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.751268 kubelet[2754]: W0912 17:46:57.751264 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.751340 kubelet[2754]: E0912 17:46:57.751274 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:46:57.751504 kubelet[2754]: E0912 17:46:57.751477 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.751504 kubelet[2754]: W0912 17:46:57.751493 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.751587 kubelet[2754]: E0912 17:46:57.751506 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:46:57.751786 kubelet[2754]: E0912 17:46:57.751748 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.751786 kubelet[2754]: W0912 17:46:57.751764 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.751786 kubelet[2754]: E0912 17:46:57.751776 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:46:57.751975 kubelet[2754]: E0912 17:46:57.751960 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.751975 kubelet[2754]: W0912 17:46:57.751971 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.752048 kubelet[2754]: E0912 17:46:57.751986 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:46:57.752169 kubelet[2754]: E0912 17:46:57.752156 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.752169 kubelet[2754]: W0912 17:46:57.752166 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.752226 kubelet[2754]: E0912 17:46:57.752174 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:46:57.752336 kubelet[2754]: E0912 17:46:57.752323 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.752336 kubelet[2754]: W0912 17:46:57.752333 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.752406 kubelet[2754]: E0912 17:46:57.752341 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:46:57.752498 kubelet[2754]: E0912 17:46:57.752485 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.752498 kubelet[2754]: W0912 17:46:57.752494 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.752574 kubelet[2754]: E0912 17:46:57.752502 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:46:57.752781 kubelet[2754]: E0912 17:46:57.752766 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.752781 kubelet[2754]: W0912 17:46:57.752778 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.752885 kubelet[2754]: E0912 17:46:57.752788 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:46:57.752954 kubelet[2754]: E0912 17:46:57.752941 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.752954 kubelet[2754]: W0912 17:46:57.752950 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.753033 kubelet[2754]: E0912 17:46:57.752958 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:46:57.753113 kubelet[2754]: E0912 17:46:57.753100 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.753113 kubelet[2754]: W0912 17:46:57.753109 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.753228 kubelet[2754]: E0912 17:46:57.753117 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:46:57.753260 kubelet[2754]: E0912 17:46:57.753253 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.753287 kubelet[2754]: W0912 17:46:57.753260 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.753287 kubelet[2754]: E0912 17:46:57.753268 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:46:57.753418 kubelet[2754]: E0912 17:46:57.753404 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.753454 kubelet[2754]: W0912 17:46:57.753434 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.753454 kubelet[2754]: E0912 17:46:57.753443 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:46:57.753619 kubelet[2754]: E0912 17:46:57.753605 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.753619 kubelet[2754]: W0912 17:46:57.753614 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.753686 kubelet[2754]: E0912 17:46:57.753622 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:46:57.753828 kubelet[2754]: E0912 17:46:57.753814 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.753828 kubelet[2754]: W0912 17:46:57.753824 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.753914 kubelet[2754]: E0912 17:46:57.753833 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:46:57.754025 kubelet[2754]: E0912 17:46:57.754012 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.754025 kubelet[2754]: W0912 17:46:57.754021 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.754093 kubelet[2754]: E0912 17:46:57.754030 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:46:57.754175 kubelet[2754]: E0912 17:46:57.754162 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.754175 kubelet[2754]: W0912 17:46:57.754172 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.754239 kubelet[2754]: E0912 17:46:57.754180 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:46:57.754342 kubelet[2754]: E0912 17:46:57.754329 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.754342 kubelet[2754]: W0912 17:46:57.754337 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.754406 kubelet[2754]: E0912 17:46:57.754345 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:46:57.817256 containerd[1597]: time="2025-09-12T17:46:57.817202439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h8lxt,Uid:f524bacd-62a9-463a-9e98-d7c9b0458793,Namespace:calico-system,Attempt:0,}" Sep 12 17:46:57.840697 kubelet[2754]: E0912 17:46:57.840674 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.840697 kubelet[2754]: W0912 17:46:57.840691 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.840835 kubelet[2754]: E0912 17:46:57.840712 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:46:57.840835 kubelet[2754]: I0912 17:46:57.840769 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9e0c15c0-7f27-4581-beb0-d93114983a4f-socket-dir\") pod \"csi-node-driver-9vzb4\" (UID: \"9e0c15c0-7f27-4581-beb0-d93114983a4f\") " pod="calico-system/csi-node-driver-9vzb4" Sep 12 17:46:57.840998 kubelet[2754]: E0912 17:46:57.840968 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.840998 kubelet[2754]: W0912 17:46:57.840983 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.840998 kubelet[2754]: E0912 17:46:57.840995 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:46:57.841091 kubelet[2754]: I0912 17:46:57.841018 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9e0c15c0-7f27-4581-beb0-d93114983a4f-varrun\") pod \"csi-node-driver-9vzb4\" (UID: \"9e0c15c0-7f27-4581-beb0-d93114983a4f\") " pod="calico-system/csi-node-driver-9vzb4" Sep 12 17:46:57.841330 kubelet[2754]: E0912 17:46:57.841314 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.841330 kubelet[2754]: W0912 17:46:57.841328 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.841402 kubelet[2754]: E0912 17:46:57.841338 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:46:57.841531 kubelet[2754]: E0912 17:46:57.841516 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.841565 kubelet[2754]: W0912 17:46:57.841529 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.841565 kubelet[2754]: E0912 17:46:57.841540 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:46:57.841793 kubelet[2754]: E0912 17:46:57.841779 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.841793 kubelet[2754]: W0912 17:46:57.841791 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.841853 kubelet[2754]: E0912 17:46:57.841801 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:46:57.841853 kubelet[2754]: I0912 17:46:57.841823 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bkjm\" (UniqueName: \"kubernetes.io/projected/9e0c15c0-7f27-4581-beb0-d93114983a4f-kube-api-access-9bkjm\") pod \"csi-node-driver-9vzb4\" (UID: \"9e0c15c0-7f27-4581-beb0-d93114983a4f\") " pod="calico-system/csi-node-driver-9vzb4" Sep 12 17:46:57.842012 kubelet[2754]: E0912 17:46:57.841997 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.842012 kubelet[2754]: W0912 17:46:57.842009 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.842077 kubelet[2754]: E0912 17:46:57.842019 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:46:57.842191 kubelet[2754]: E0912 17:46:57.842180 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.842191 kubelet[2754]: W0912 17:46:57.842189 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.842301 kubelet[2754]: E0912 17:46:57.842197 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:46:57.842367 kubelet[2754]: E0912 17:46:57.842355 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.842367 kubelet[2754]: W0912 17:46:57.842363 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.842421 kubelet[2754]: E0912 17:46:57.842372 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:46:57.842421 kubelet[2754]: I0912 17:46:57.842390 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e0c15c0-7f27-4581-beb0-d93114983a4f-kubelet-dir\") pod \"csi-node-driver-9vzb4\" (UID: \"9e0c15c0-7f27-4581-beb0-d93114983a4f\") " pod="calico-system/csi-node-driver-9vzb4" Sep 12 17:46:57.842579 kubelet[2754]: E0912 17:46:57.842564 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.842579 kubelet[2754]: W0912 17:46:57.842576 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.842630 kubelet[2754]: E0912 17:46:57.842585 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:46:57.842817 kubelet[2754]: E0912 17:46:57.842799 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.842847 kubelet[2754]: W0912 17:46:57.842816 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.842847 kubelet[2754]: E0912 17:46:57.842828 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:46:57.843053 kubelet[2754]: E0912 17:46:57.843039 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.843076 kubelet[2754]: W0912 17:46:57.843051 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.843076 kubelet[2754]: E0912 17:46:57.843061 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:46:57.843116 kubelet[2754]: I0912 17:46:57.843082 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9e0c15c0-7f27-4581-beb0-d93114983a4f-registration-dir\") pod \"csi-node-driver-9vzb4\" (UID: \"9e0c15c0-7f27-4581-beb0-d93114983a4f\") " pod="calico-system/csi-node-driver-9vzb4" Sep 12 17:46:57.843243 kubelet[2754]: E0912 17:46:57.843230 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:57.843265 kubelet[2754]: W0912 17:46:57.843241 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:57.843265 kubelet[2754]: E0912 17:46:57.843250 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:46:58.524767 kubelet[2754]: E0912 17:46:58.524702 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:46:58.524767 kubelet[2754]: W0912 17:46:58.524755 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:46:58.524945 kubelet[2754]: E0912 17:46:58.524783 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:46:58.935490 containerd[1597]: time="2025-09-12T17:46:58.935439687Z" level=info msg="connecting to shim fb65536c94044db49b6ace65dabb3c8048fc38a99d400846d16222c26857d2bd" address="unix:///run/containerd/s/901653f2269fa7c8d8b5e99b08bfc4e996e1ed11cc363c742cfc7ca0433368e4" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:46:58.965869 systemd[1]: Started cri-containerd-fb65536c94044db49b6ace65dabb3c8048fc38a99d400846d16222c26857d2bd.scope - libcontainer container fb65536c94044db49b6ace65dabb3c8048fc38a99d400846d16222c26857d2bd. 
Sep 12 17:46:59.086796 containerd[1597]: time="2025-09-12T17:46:59.086746468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h8lxt,Uid:f524bacd-62a9-463a-9e98-d7c9b0458793,Namespace:calico-system,Attempt:0,} returns sandbox id \"fb65536c94044db49b6ace65dabb3c8048fc38a99d400846d16222c26857d2bd\"" Sep 12 17:46:59.906955 kubelet[2754]: E0912 17:46:59.906894 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9vzb4" podUID="9e0c15c0-7f27-4581-beb0-d93114983a4f" Sep 12 17:47:00.665444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2185328032.mount: Deactivated successfully. Sep 12 17:47:01.906463 kubelet[2754]: E0912 17:47:01.906397 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9vzb4" podUID="9e0c15c0-7f27-4581-beb0-d93114983a4f" Sep 12 17:47:02.030040 containerd[1597]: time="2025-09-12T17:47:02.029978365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:02.030991 containerd[1597]: time="2025-09-12T17:47:02.030946549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 12 17:47:02.032096 containerd[1597]: time="2025-09-12T17:47:02.032064887Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:02.034156 containerd[1597]: time="2025-09-12T17:47:02.034125070Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:02.034721 containerd[1597]: time="2025-09-12T17:47:02.034687559Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 4.323305331s" Sep 12 17:47:02.034785 containerd[1597]: time="2025-09-12T17:47:02.034719009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 12 17:47:02.035754 containerd[1597]: time="2025-09-12T17:47:02.035677796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 17:47:02.048751 containerd[1597]: time="2025-09-12T17:47:02.048699257Z" level=info msg="CreateContainer within sandbox \"67bb327496a4fdf8efbaa5e33be6ff4225c88c604ce489f596f2c835b4ee20a3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 17:47:02.057500 containerd[1597]: time="2025-09-12T17:47:02.057448385Z" level=info msg="Container aff4327d15823adf52abb15297cf5b0bdb4de4a8f9b2be1e2f5764ac6200c82f: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:47:02.066671 containerd[1597]: time="2025-09-12T17:47:02.066625111Z" level=info msg="CreateContainer within sandbox \"67bb327496a4fdf8efbaa5e33be6ff4225c88c604ce489f596f2c835b4ee20a3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"aff4327d15823adf52abb15297cf5b0bdb4de4a8f9b2be1e2f5764ac6200c82f\"" Sep 12 17:47:02.067100 containerd[1597]: time="2025-09-12T17:47:02.067070760Z" level=info msg="StartContainer for 
\"aff4327d15823adf52abb15297cf5b0bdb4de4a8f9b2be1e2f5764ac6200c82f\"" Sep 12 17:47:02.068088 containerd[1597]: time="2025-09-12T17:47:02.068062690Z" level=info msg="connecting to shim aff4327d15823adf52abb15297cf5b0bdb4de4a8f9b2be1e2f5764ac6200c82f" address="unix:///run/containerd/s/604f4cb7bb24e867d6f423ab61320e8b163dbd9a49ce818a0dd26c45784ea61e" protocol=ttrpc version=3 Sep 12 17:47:02.101862 systemd[1]: Started cri-containerd-aff4327d15823adf52abb15297cf5b0bdb4de4a8f9b2be1e2f5764ac6200c82f.scope - libcontainer container aff4327d15823adf52abb15297cf5b0bdb4de4a8f9b2be1e2f5764ac6200c82f. Sep 12 17:47:02.151797 containerd[1597]: time="2025-09-12T17:47:02.151751373Z" level=info msg="StartContainer for \"aff4327d15823adf52abb15297cf5b0bdb4de4a8f9b2be1e2f5764ac6200c82f\" returns successfully" Sep 12 17:47:02.976623 kubelet[2754]: E0912 17:47:02.976584 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:02.987414 kubelet[2754]: E0912 17:47:02.987380 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:02.987414 kubelet[2754]: W0912 17:47:02.987400 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:02.987542 kubelet[2754]: E0912 17:47:02.987423 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:02.987674 kubelet[2754]: E0912 17:47:02.987645 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:02.987674 kubelet[2754]: W0912 17:47:02.987664 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:02.987674 kubelet[2754]: E0912 17:47:02.987673 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:02.987866 kubelet[2754]: E0912 17:47:02.987853 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:02.987866 kubelet[2754]: W0912 17:47:02.987862 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:02.987933 kubelet[2754]: E0912 17:47:02.987871 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:02.988078 kubelet[2754]: E0912 17:47:02.988065 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:02.988078 kubelet[2754]: W0912 17:47:02.988074 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:02.988135 kubelet[2754]: E0912 17:47:02.988082 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:02.988252 kubelet[2754]: E0912 17:47:02.988240 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:02.988283 kubelet[2754]: W0912 17:47:02.988251 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:02.988283 kubelet[2754]: E0912 17:47:02.988259 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:02.988430 kubelet[2754]: E0912 17:47:02.988418 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:02.988456 kubelet[2754]: W0912 17:47:02.988429 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:02.988456 kubelet[2754]: E0912 17:47:02.988437 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:02.988621 kubelet[2754]: E0912 17:47:02.988605 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:02.988621 kubelet[2754]: W0912 17:47:02.988614 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:02.988621 kubelet[2754]: E0912 17:47:02.988622 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:02.988812 kubelet[2754]: E0912 17:47:02.988799 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:02.988812 kubelet[2754]: W0912 17:47:02.988808 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:02.988812 kubelet[2754]: E0912 17:47:02.988816 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:02.989002 kubelet[2754]: E0912 17:47:02.988990 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:02.989002 kubelet[2754]: W0912 17:47:02.988999 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:02.989075 kubelet[2754]: E0912 17:47:02.989006 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:02.989244 kubelet[2754]: E0912 17:47:02.989212 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:02.989244 kubelet[2754]: W0912 17:47:02.989225 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:02.989244 kubelet[2754]: E0912 17:47:02.989236 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:02.989458 kubelet[2754]: E0912 17:47:02.989419 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:02.989458 kubelet[2754]: W0912 17:47:02.989427 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:02.989458 kubelet[2754]: E0912 17:47:02.989436 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:02.989652 kubelet[2754]: E0912 17:47:02.989629 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:02.989652 kubelet[2754]: W0912 17:47:02.989641 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:02.989721 kubelet[2754]: E0912 17:47:02.989656 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:02.989852 kubelet[2754]: E0912 17:47:02.989837 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:02.989852 kubelet[2754]: W0912 17:47:02.989848 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:02.989955 kubelet[2754]: E0912 17:47:02.989856 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:02.990035 kubelet[2754]: E0912 17:47:02.990020 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:02.990035 kubelet[2754]: W0912 17:47:02.990030 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:02.990081 kubelet[2754]: E0912 17:47:02.990038 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:02.990268 kubelet[2754]: E0912 17:47:02.990245 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:02.990312 kubelet[2754]: W0912 17:47:02.990266 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:02.990312 kubelet[2754]: E0912 17:47:02.990289 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:03.080391 kubelet[2754]: E0912 17:47:03.080351 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.080391 kubelet[2754]: W0912 17:47:03.080369 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.080391 kubelet[2754]: E0912 17:47:03.080384 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:03.080627 kubelet[2754]: E0912 17:47:03.080603 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.080627 kubelet[2754]: W0912 17:47:03.080616 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.080627 kubelet[2754]: E0912 17:47:03.080628 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:03.080891 kubelet[2754]: E0912 17:47:03.080867 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.080891 kubelet[2754]: W0912 17:47:03.080880 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.080891 kubelet[2754]: E0912 17:47:03.080890 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:03.081095 kubelet[2754]: E0912 17:47:03.081072 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.081095 kubelet[2754]: W0912 17:47:03.081084 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.081095 kubelet[2754]: E0912 17:47:03.081092 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:03.081262 kubelet[2754]: E0912 17:47:03.081248 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.081262 kubelet[2754]: W0912 17:47:03.081258 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.081306 kubelet[2754]: E0912 17:47:03.081266 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:03.081472 kubelet[2754]: E0912 17:47:03.081450 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.081472 kubelet[2754]: W0912 17:47:03.081462 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.081472 kubelet[2754]: E0912 17:47:03.081470 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:03.081817 kubelet[2754]: E0912 17:47:03.081794 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.081863 kubelet[2754]: W0912 17:47:03.081811 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.081863 kubelet[2754]: E0912 17:47:03.081834 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:03.082054 kubelet[2754]: E0912 17:47:03.082037 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.082054 kubelet[2754]: W0912 17:47:03.082050 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.082113 kubelet[2754]: E0912 17:47:03.082059 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:03.082246 kubelet[2754]: E0912 17:47:03.082231 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.082246 kubelet[2754]: W0912 17:47:03.082242 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.082292 kubelet[2754]: E0912 17:47:03.082250 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:03.082436 kubelet[2754]: E0912 17:47:03.082420 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.082436 kubelet[2754]: W0912 17:47:03.082430 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.082498 kubelet[2754]: E0912 17:47:03.082438 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:03.082685 kubelet[2754]: E0912 17:47:03.082666 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.082685 kubelet[2754]: W0912 17:47:03.082679 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.082753 kubelet[2754]: E0912 17:47:03.082687 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:03.082915 kubelet[2754]: E0912 17:47:03.082898 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.082915 kubelet[2754]: W0912 17:47:03.082910 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.082986 kubelet[2754]: E0912 17:47:03.082920 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:03.083108 kubelet[2754]: E0912 17:47:03.083093 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.083143 kubelet[2754]: W0912 17:47:03.083111 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.083143 kubelet[2754]: E0912 17:47:03.083120 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:03.083289 kubelet[2754]: E0912 17:47:03.083275 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.083289 kubelet[2754]: W0912 17:47:03.083285 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.083347 kubelet[2754]: E0912 17:47:03.083293 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:03.083479 kubelet[2754]: E0912 17:47:03.083464 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.083479 kubelet[2754]: W0912 17:47:03.083475 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.083527 kubelet[2754]: E0912 17:47:03.083483 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:03.083765 kubelet[2754]: E0912 17:47:03.083741 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.083798 kubelet[2754]: W0912 17:47:03.083765 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.083798 kubelet[2754]: E0912 17:47:03.083793 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:03.084056 kubelet[2754]: E0912 17:47:03.084039 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.084056 kubelet[2754]: W0912 17:47:03.084051 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.084109 kubelet[2754]: E0912 17:47:03.084060 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:03.084250 kubelet[2754]: E0912 17:47:03.084233 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.084250 kubelet[2754]: W0912 17:47:03.084244 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.084319 kubelet[2754]: E0912 17:47:03.084254 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:03.906382 kubelet[2754]: E0912 17:47:03.906301 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9vzb4" podUID="9e0c15c0-7f27-4581-beb0-d93114983a4f" Sep 12 17:47:03.978287 kubelet[2754]: I0912 17:47:03.978239 2754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:47:03.978698 kubelet[2754]: E0912 17:47:03.978648 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:03.996369 kubelet[2754]: E0912 17:47:03.996332 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.996369 kubelet[2754]: W0912 17:47:03.996360 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.996369 kubelet[2754]: E0912 17:47:03.996383 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:03.996575 kubelet[2754]: E0912 17:47:03.996559 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.996575 kubelet[2754]: W0912 17:47:03.996570 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.996650 kubelet[2754]: E0912 17:47:03.996578 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:03.996786 kubelet[2754]: E0912 17:47:03.996771 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.996786 kubelet[2754]: W0912 17:47:03.996781 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.996852 kubelet[2754]: E0912 17:47:03.996790 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:03.996985 kubelet[2754]: E0912 17:47:03.996961 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.996985 kubelet[2754]: W0912 17:47:03.996973 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.996985 kubelet[2754]: E0912 17:47:03.996981 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:03.997175 kubelet[2754]: E0912 17:47:03.997159 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.997175 kubelet[2754]: W0912 17:47:03.997170 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.997223 kubelet[2754]: E0912 17:47:03.997179 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:03.997355 kubelet[2754]: E0912 17:47:03.997341 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.997355 kubelet[2754]: W0912 17:47:03.997351 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.997417 kubelet[2754]: E0912 17:47:03.997359 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:03.997540 kubelet[2754]: E0912 17:47:03.997524 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.997540 kubelet[2754]: W0912 17:47:03.997534 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.997540 kubelet[2754]: E0912 17:47:03.997542 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:03.997744 kubelet[2754]: E0912 17:47:03.997715 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.997744 kubelet[2754]: W0912 17:47:03.997741 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.997790 kubelet[2754]: E0912 17:47:03.997750 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:03.997953 kubelet[2754]: E0912 17:47:03.997936 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.997986 kubelet[2754]: W0912 17:47:03.997946 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.997986 kubelet[2754]: E0912 17:47:03.997973 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:03.998162 kubelet[2754]: E0912 17:47:03.998147 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.998162 kubelet[2754]: W0912 17:47:03.998156 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.998205 kubelet[2754]: E0912 17:47:03.998166 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:03.998343 kubelet[2754]: E0912 17:47:03.998329 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.998343 kubelet[2754]: W0912 17:47:03.998339 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.998402 kubelet[2754]: E0912 17:47:03.998348 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:03.998527 kubelet[2754]: E0912 17:47:03.998513 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.998527 kubelet[2754]: W0912 17:47:03.998525 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.998574 kubelet[2754]: E0912 17:47:03.998534 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:03.998749 kubelet[2754]: E0912 17:47:03.998718 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.998749 kubelet[2754]: W0912 17:47:03.998745 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.998810 kubelet[2754]: E0912 17:47:03.998754 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:03.998935 kubelet[2754]: E0912 17:47:03.998920 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.998935 kubelet[2754]: W0912 17:47:03.998930 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.999000 kubelet[2754]: E0912 17:47:03.998938 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:03.999133 kubelet[2754]: E0912 17:47:03.999116 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:03.999133 kubelet[2754]: W0912 17:47:03.999126 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:03.999187 kubelet[2754]: E0912 17:47:03.999134 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:04.088883 kubelet[2754]: E0912 17:47:04.088850 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:04.088883 kubelet[2754]: W0912 17:47:04.088868 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:04.088883 kubelet[2754]: E0912 17:47:04.088886 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:04.089124 kubelet[2754]: E0912 17:47:04.089084 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:04.089124 kubelet[2754]: W0912 17:47:04.089103 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:04.089124 kubelet[2754]: E0912 17:47:04.089115 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:04.089342 kubelet[2754]: E0912 17:47:04.089322 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:04.089342 kubelet[2754]: W0912 17:47:04.089339 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:04.089432 kubelet[2754]: E0912 17:47:04.089352 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:04.089604 kubelet[2754]: E0912 17:47:04.089588 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:04.089604 kubelet[2754]: W0912 17:47:04.089600 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:04.089715 kubelet[2754]: E0912 17:47:04.089609 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:04.090835 kubelet[2754]: E0912 17:47:04.089807 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:04.090835 kubelet[2754]: W0912 17:47:04.089816 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:04.090835 kubelet[2754]: E0912 17:47:04.089824 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:04.090835 kubelet[2754]: E0912 17:47:04.090011 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:04.090835 kubelet[2754]: W0912 17:47:04.090019 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:04.090835 kubelet[2754]: E0912 17:47:04.090027 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:04.090835 kubelet[2754]: E0912 17:47:04.090304 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:04.090835 kubelet[2754]: W0912 17:47:04.090316 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:04.090835 kubelet[2754]: E0912 17:47:04.090328 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:04.090835 kubelet[2754]: E0912 17:47:04.090522 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:04.091152 kubelet[2754]: W0912 17:47:04.090530 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:04.091152 kubelet[2754]: E0912 17:47:04.090539 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:04.091152 kubelet[2754]: E0912 17:47:04.090696 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:04.091152 kubelet[2754]: W0912 17:47:04.090703 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:04.091152 kubelet[2754]: E0912 17:47:04.090711 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:04.091152 kubelet[2754]: E0912 17:47:04.090861 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:04.091152 kubelet[2754]: W0912 17:47:04.090868 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:04.091152 kubelet[2754]: E0912 17:47:04.090876 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:04.091152 kubelet[2754]: E0912 17:47:04.091038 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:04.091152 kubelet[2754]: W0912 17:47:04.091076 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:04.091519 kubelet[2754]: E0912 17:47:04.091086 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:04.091519 kubelet[2754]: E0912 17:47:04.091390 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:04.091519 kubelet[2754]: W0912 17:47:04.091453 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:04.091519 kubelet[2754]: E0912 17:47:04.091466 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:04.091853 kubelet[2754]: E0912 17:47:04.091834 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:04.091853 kubelet[2754]: W0912 17:47:04.091850 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:04.091934 kubelet[2754]: E0912 17:47:04.091863 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:04.092102 kubelet[2754]: E0912 17:47:04.092088 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:04.092102 kubelet[2754]: W0912 17:47:04.092100 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:04.092170 kubelet[2754]: E0912 17:47:04.092122 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:04.092328 kubelet[2754]: E0912 17:47:04.092314 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:04.092328 kubelet[2754]: W0912 17:47:04.092325 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:04.092377 kubelet[2754]: E0912 17:47:04.092335 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:04.092572 kubelet[2754]: E0912 17:47:04.092558 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:04.092572 kubelet[2754]: W0912 17:47:04.092570 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:04.092640 kubelet[2754]: E0912 17:47:04.092580 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:04.092987 kubelet[2754]: E0912 17:47:04.092919 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:04.092987 kubelet[2754]: W0912 17:47:04.092933 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:04.092987 kubelet[2754]: E0912 17:47:04.092946 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:47:04.093156 kubelet[2754]: E0912 17:47:04.093140 2754 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:47:04.093156 kubelet[2754]: W0912 17:47:04.093154 2754 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:47:04.093199 kubelet[2754]: E0912 17:47:04.093166 2754 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:47:04.242209 containerd[1597]: time="2025-09-12T17:47:04.242072806Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:04.244421 containerd[1597]: time="2025-09-12T17:47:04.244392344Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 12 17:47:04.245703 containerd[1597]: time="2025-09-12T17:47:04.245670922Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:04.247679 containerd[1597]: time="2025-09-12T17:47:04.247622919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:04.248236 containerd[1597]: time="2025-09-12T17:47:04.248191641Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 2.2124798s" Sep 12 17:47:04.248236 containerd[1597]: time="2025-09-12T17:47:04.248222458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 12 17:47:04.252864 containerd[1597]: time="2025-09-12T17:47:04.252831138Z" level=info msg="CreateContainer within sandbox \"fb65536c94044db49b6ace65dabb3c8048fc38a99d400846d16222c26857d2bd\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 17:47:04.262275 containerd[1597]: time="2025-09-12T17:47:04.262238788Z" level=info msg="Container 2c16b14636a174fb8917f8da087ca9ad628c13c0fa645e0893e1eaaa6f63171b: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:47:04.275229 containerd[1597]: time="2025-09-12T17:47:04.275177848Z" level=info msg="CreateContainer within sandbox \"fb65536c94044db49b6ace65dabb3c8048fc38a99d400846d16222c26857d2bd\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2c16b14636a174fb8917f8da087ca9ad628c13c0fa645e0893e1eaaa6f63171b\"" Sep 12 17:47:04.275904 containerd[1597]: time="2025-09-12T17:47:04.275698839Z" level=info msg="StartContainer for \"2c16b14636a174fb8917f8da087ca9ad628c13c0fa645e0893e1eaaa6f63171b\"" Sep 12 17:47:04.277180 containerd[1597]: time="2025-09-12T17:47:04.277142498Z" level=info msg="connecting to shim 2c16b14636a174fb8917f8da087ca9ad628c13c0fa645e0893e1eaaa6f63171b" address="unix:///run/containerd/s/901653f2269fa7c8d8b5e99b08bfc4e996e1ed11cc363c742cfc7ca0433368e4" protocol=ttrpc version=3 Sep 12 17:47:04.300033 systemd[1]: Started cri-containerd-2c16b14636a174fb8917f8da087ca9ad628c13c0fa645e0893e1eaaa6f63171b.scope - libcontainer container 2c16b14636a174fb8917f8da087ca9ad628c13c0fa645e0893e1eaaa6f63171b. Sep 12 17:47:04.346070 containerd[1597]: time="2025-09-12T17:47:04.346024035Z" level=info msg="StartContainer for \"2c16b14636a174fb8917f8da087ca9ad628c13c0fa645e0893e1eaaa6f63171b\" returns successfully" Sep 12 17:47:04.358026 systemd[1]: cri-containerd-2c16b14636a174fb8917f8da087ca9ad628c13c0fa645e0893e1eaaa6f63171b.scope: Deactivated successfully. 
Sep 12 17:47:04.359885 containerd[1597]: time="2025-09-12T17:47:04.359849825Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c16b14636a174fb8917f8da087ca9ad628c13c0fa645e0893e1eaaa6f63171b\" id:\"2c16b14636a174fb8917f8da087ca9ad628c13c0fa645e0893e1eaaa6f63171b\" pid:3576 exited_at:{seconds:1757699224 nanos:359469469}" Sep 12 17:47:04.359885 containerd[1597]: time="2025-09-12T17:47:04.359856337Z" level=info msg="received exit event container_id:\"2c16b14636a174fb8917f8da087ca9ad628c13c0fa645e0893e1eaaa6f63171b\" id:\"2c16b14636a174fb8917f8da087ca9ad628c13c0fa645e0893e1eaaa6f63171b\" pid:3576 exited_at:{seconds:1757699224 nanos:359469469}" Sep 12 17:47:04.387538 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c16b14636a174fb8917f8da087ca9ad628c13c0fa645e0893e1eaaa6f63171b-rootfs.mount: Deactivated successfully. Sep 12 17:47:04.983944 containerd[1597]: time="2025-09-12T17:47:04.983602645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 17:47:04.998957 kubelet[2754]: I0912 17:47:04.998863 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6f49949f5f-jdxwh" podStartSLOduration=4.674414601 podStartE2EDuration="8.998836808s" podCreationTimestamp="2025-09-12 17:46:56 +0000 UTC" firstStartedPulling="2025-09-12 17:46:57.711079687 +0000 UTC m=+20.888702650" lastFinishedPulling="2025-09-12 17:47:02.035501884 +0000 UTC m=+25.213124857" observedRunningTime="2025-09-12 17:47:02.984108343 +0000 UTC m=+26.161731316" watchObservedRunningTime="2025-09-12 17:47:04.998836808 +0000 UTC m=+28.176459781" Sep 12 17:47:05.906436 kubelet[2754]: E0912 17:47:05.906363 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9vzb4" 
podUID="9e0c15c0-7f27-4581-beb0-d93114983a4f" Sep 12 17:47:07.907093 kubelet[2754]: E0912 17:47:07.906063 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9vzb4" podUID="9e0c15c0-7f27-4581-beb0-d93114983a4f" Sep 12 17:47:08.237611 containerd[1597]: time="2025-09-12T17:47:08.237471746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:08.238583 containerd[1597]: time="2025-09-12T17:47:08.238366319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 12 17:47:08.240099 containerd[1597]: time="2025-09-12T17:47:08.240060026Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:08.242462 containerd[1597]: time="2025-09-12T17:47:08.242423313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:08.242965 containerd[1597]: time="2025-09-12T17:47:08.242931960Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.25926829s" Sep 12 17:47:08.242965 containerd[1597]: time="2025-09-12T17:47:08.242962347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference 
\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 12 17:47:08.249286 containerd[1597]: time="2025-09-12T17:47:08.249248625Z" level=info msg="CreateContainer within sandbox \"fb65536c94044db49b6ace65dabb3c8048fc38a99d400846d16222c26857d2bd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 17:47:08.261480 containerd[1597]: time="2025-09-12T17:47:08.261422360Z" level=info msg="Container 182926b28debf1ee027bcdb067ac3c91d42426fa66dd40400323e0a292bd51e9: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:47:08.273537 containerd[1597]: time="2025-09-12T17:47:08.273463115Z" level=info msg="CreateContainer within sandbox \"fb65536c94044db49b6ace65dabb3c8048fc38a99d400846d16222c26857d2bd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"182926b28debf1ee027bcdb067ac3c91d42426fa66dd40400323e0a292bd51e9\"" Sep 12 17:47:08.274141 containerd[1597]: time="2025-09-12T17:47:08.274097990Z" level=info msg="StartContainer for \"182926b28debf1ee027bcdb067ac3c91d42426fa66dd40400323e0a292bd51e9\"" Sep 12 17:47:08.275668 containerd[1597]: time="2025-09-12T17:47:08.275640102Z" level=info msg="connecting to shim 182926b28debf1ee027bcdb067ac3c91d42426fa66dd40400323e0a292bd51e9" address="unix:///run/containerd/s/901653f2269fa7c8d8b5e99b08bfc4e996e1ed11cc363c742cfc7ca0433368e4" protocol=ttrpc version=3 Sep 12 17:47:08.304874 systemd[1]: Started cri-containerd-182926b28debf1ee027bcdb067ac3c91d42426fa66dd40400323e0a292bd51e9.scope - libcontainer container 182926b28debf1ee027bcdb067ac3c91d42426fa66dd40400323e0a292bd51e9. Sep 12 17:47:08.351443 containerd[1597]: time="2025-09-12T17:47:08.351377505Z" level=info msg="StartContainer for \"182926b28debf1ee027bcdb067ac3c91d42426fa66dd40400323e0a292bd51e9\" returns successfully" Sep 12 17:47:09.373614 systemd[1]: cri-containerd-182926b28debf1ee027bcdb067ac3c91d42426fa66dd40400323e0a292bd51e9.scope: Deactivated successfully. 
Sep 12 17:47:09.373989 systemd[1]: cri-containerd-182926b28debf1ee027bcdb067ac3c91d42426fa66dd40400323e0a292bd51e9.scope: Consumed 626ms CPU time, 178.7M memory peak, 2.6M read from disk, 171.3M written to disk. Sep 12 17:47:09.375274 containerd[1597]: time="2025-09-12T17:47:09.375212562Z" level=info msg="received exit event container_id:\"182926b28debf1ee027bcdb067ac3c91d42426fa66dd40400323e0a292bd51e9\" id:\"182926b28debf1ee027bcdb067ac3c91d42426fa66dd40400323e0a292bd51e9\" pid:3636 exited_at:{seconds:1757699229 nanos:374980936}" Sep 12 17:47:09.375760 containerd[1597]: time="2025-09-12T17:47:09.375362284Z" level=info msg="TaskExit event in podsandbox handler container_id:\"182926b28debf1ee027bcdb067ac3c91d42426fa66dd40400323e0a292bd51e9\" id:\"182926b28debf1ee027bcdb067ac3c91d42426fa66dd40400323e0a292bd51e9\" pid:3636 exited_at:{seconds:1757699229 nanos:374980936}" Sep 12 17:47:09.400552 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-182926b28debf1ee027bcdb067ac3c91d42426fa66dd40400323e0a292bd51e9-rootfs.mount: Deactivated successfully. Sep 12 17:47:09.421690 kubelet[2754]: I0912 17:47:09.421583 2754 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 17:47:09.709945 systemd[1]: Created slice kubepods-burstable-pod8247fe61_fcea_4419_acd7_3eb1ca909f32.slice - libcontainer container kubepods-burstable-pod8247fe61_fcea_4419_acd7_3eb1ca909f32.slice. 
Sep 12 17:47:09.785545 kubelet[2754]: I0912 17:47:09.785474 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8247fe61-fcea-4419-acd7-3eb1ca909f32-config-volume\") pod \"coredns-674b8bbfcf-q4bpf\" (UID: \"8247fe61-fcea-4419-acd7-3eb1ca909f32\") " pod="kube-system/coredns-674b8bbfcf-q4bpf" Sep 12 17:47:09.785545 kubelet[2754]: I0912 17:47:09.785524 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2vcv\" (UniqueName: \"kubernetes.io/projected/8247fe61-fcea-4419-acd7-3eb1ca909f32-kube-api-access-d2vcv\") pod \"coredns-674b8bbfcf-q4bpf\" (UID: \"8247fe61-fcea-4419-acd7-3eb1ca909f32\") " pod="kube-system/coredns-674b8bbfcf-q4bpf" Sep 12 17:47:09.821539 systemd[1]: Created slice kubepods-besteffort-pod820b7569_1a21_437a_9be7_3d6ea66544ce.slice - libcontainer container kubepods-besteffort-pod820b7569_1a21_437a_9be7_3d6ea66544ce.slice. 
Sep 12 17:47:09.886896 kubelet[2754]: I0912 17:47:09.886802 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/820b7569-1a21-437a-9be7-3d6ea66544ce-calico-apiserver-certs\") pod \"calico-apiserver-59484845b9-r62pg\" (UID: \"820b7569-1a21-437a-9be7-3d6ea66544ce\") " pod="calico-apiserver/calico-apiserver-59484845b9-r62pg" Sep 12 17:47:09.886896 kubelet[2754]: I0912 17:47:09.886860 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l6j8\" (UniqueName: \"kubernetes.io/projected/820b7569-1a21-437a-9be7-3d6ea66544ce-kube-api-access-4l6j8\") pod \"calico-apiserver-59484845b9-r62pg\" (UID: \"820b7569-1a21-437a-9be7-3d6ea66544ce\") " pod="calico-apiserver/calico-apiserver-59484845b9-r62pg" Sep 12 17:47:09.933073 systemd[1]: Created slice kubepods-besteffort-pode24c2194_063a_4aa0_a000_c103a3e25873.slice - libcontainer container kubepods-besteffort-pode24c2194_063a_4aa0_a000_c103a3e25873.slice. Sep 12 17:47:09.942355 systemd[1]: Created slice kubepods-besteffort-pod5e30145f_893a_4aac_8f16_6ea98bb69360.slice - libcontainer container kubepods-besteffort-pod5e30145f_893a_4aac_8f16_6ea98bb69360.slice. Sep 12 17:47:09.947084 systemd[1]: Created slice kubepods-burstable-podbfd2ea8b_8830_43b1_b23b_cc1ff8873bda.slice - libcontainer container kubepods-burstable-podbfd2ea8b_8830_43b1_b23b_cc1ff8873bda.slice. Sep 12 17:47:09.954396 systemd[1]: Created slice kubepods-besteffort-pod4751d4d4_5f80_4998_819d_56df18df6e6d.slice - libcontainer container kubepods-besteffort-pod4751d4d4_5f80_4998_819d_56df18df6e6d.slice. Sep 12 17:47:09.962590 systemd[1]: Created slice kubepods-besteffort-podb604b9a3_e2b4_42d6_a4ad_7a0abbdc8585.slice - libcontainer container kubepods-besteffort-podb604b9a3_e2b4_42d6_a4ad_7a0abbdc8585.slice. 
Sep 12 17:47:09.968867 systemd[1]: Created slice kubepods-besteffort-pod9e0c15c0_7f27_4581_beb0_d93114983a4f.slice - libcontainer container kubepods-besteffort-pod9e0c15c0_7f27_4581_beb0_d93114983a4f.slice. Sep 12 17:47:09.972825 containerd[1597]: time="2025-09-12T17:47:09.972773714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9vzb4,Uid:9e0c15c0-7f27-4581-beb0-d93114983a4f,Namespace:calico-system,Attempt:0,}" Sep 12 17:47:09.987464 kubelet[2754]: I0912 17:47:09.987406 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5e30145f-893a-4aac-8f16-6ea98bb69360-goldmane-key-pair\") pod \"goldmane-54d579b49d-7swhj\" (UID: \"5e30145f-893a-4aac-8f16-6ea98bb69360\") " pod="calico-system/goldmane-54d579b49d-7swhj" Sep 12 17:47:09.987644 kubelet[2754]: I0912 17:47:09.987459 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4751d4d4-5f80-4998-819d-56df18df6e6d-whisker-backend-key-pair\") pod \"whisker-6fc647956-zp6wr\" (UID: \"4751d4d4-5f80-4998-819d-56df18df6e6d\") " pod="calico-system/whisker-6fc647956-zp6wr" Sep 12 17:47:09.987687 kubelet[2754]: I0912 17:47:09.987612 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b604b9a3-e2b4-42d6-a4ad-7a0abbdc8585-tigera-ca-bundle\") pod \"calico-kube-controllers-7996b8f8fb-fs4k5\" (UID: \"b604b9a3-e2b4-42d6-a4ad-7a0abbdc8585\") " pod="calico-system/calico-kube-controllers-7996b8f8fb-fs4k5" Sep 12 17:47:09.987767 kubelet[2754]: I0912 17:47:09.987748 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e30145f-893a-4aac-8f16-6ea98bb69360-goldmane-ca-bundle\") pod 
\"goldmane-54d579b49d-7swhj\" (UID: \"5e30145f-893a-4aac-8f16-6ea98bb69360\") " pod="calico-system/goldmane-54d579b49d-7swhj" Sep 12 17:47:09.987819 kubelet[2754]: I0912 17:47:09.987780 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkrpg\" (UniqueName: \"kubernetes.io/projected/bfd2ea8b-8830-43b1-b23b-cc1ff8873bda-kube-api-access-xkrpg\") pod \"coredns-674b8bbfcf-fhjwx\" (UID: \"bfd2ea8b-8830-43b1-b23b-cc1ff8873bda\") " pod="kube-system/coredns-674b8bbfcf-fhjwx" Sep 12 17:47:09.987857 kubelet[2754]: I0912 17:47:09.987841 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78stb\" (UniqueName: \"kubernetes.io/projected/5e30145f-893a-4aac-8f16-6ea98bb69360-kube-api-access-78stb\") pod \"goldmane-54d579b49d-7swhj\" (UID: \"5e30145f-893a-4aac-8f16-6ea98bb69360\") " pod="calico-system/goldmane-54d579b49d-7swhj" Sep 12 17:47:09.987910 kubelet[2754]: I0912 17:47:09.987895 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcrss\" (UniqueName: \"kubernetes.io/projected/4751d4d4-5f80-4998-819d-56df18df6e6d-kube-api-access-dcrss\") pod \"whisker-6fc647956-zp6wr\" (UID: \"4751d4d4-5f80-4998-819d-56df18df6e6d\") " pod="calico-system/whisker-6fc647956-zp6wr" Sep 12 17:47:09.987940 kubelet[2754]: I0912 17:47:09.987926 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6p45\" (UniqueName: \"kubernetes.io/projected/e24c2194-063a-4aa0-a000-c103a3e25873-kube-api-access-n6p45\") pod \"calico-apiserver-59484845b9-ck2cf\" (UID: \"e24c2194-063a-4aa0-a000-c103a3e25873\") " pod="calico-apiserver/calico-apiserver-59484845b9-ck2cf" Sep 12 17:47:09.987992 kubelet[2754]: I0912 17:47:09.987975 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5e30145f-893a-4aac-8f16-6ea98bb69360-config\") pod \"goldmane-54d579b49d-7swhj\" (UID: \"5e30145f-893a-4aac-8f16-6ea98bb69360\") " pod="calico-system/goldmane-54d579b49d-7swhj" Sep 12 17:47:09.988019 kubelet[2754]: I0912 17:47:09.988005 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsxx9\" (UniqueName: \"kubernetes.io/projected/b604b9a3-e2b4-42d6-a4ad-7a0abbdc8585-kube-api-access-bsxx9\") pod \"calico-kube-controllers-7996b8f8fb-fs4k5\" (UID: \"b604b9a3-e2b4-42d6-a4ad-7a0abbdc8585\") " pod="calico-system/calico-kube-controllers-7996b8f8fb-fs4k5" Sep 12 17:47:09.988072 kubelet[2754]: I0912 17:47:09.988053 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4751d4d4-5f80-4998-819d-56df18df6e6d-whisker-ca-bundle\") pod \"whisker-6fc647956-zp6wr\" (UID: \"4751d4d4-5f80-4998-819d-56df18df6e6d\") " pod="calico-system/whisker-6fc647956-zp6wr" Sep 12 17:47:09.988099 kubelet[2754]: I0912 17:47:09.988084 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e24c2194-063a-4aa0-a000-c103a3e25873-calico-apiserver-certs\") pod \"calico-apiserver-59484845b9-ck2cf\" (UID: \"e24c2194-063a-4aa0-a000-c103a3e25873\") " pod="calico-apiserver/calico-apiserver-59484845b9-ck2cf" Sep 12 17:47:09.988165 kubelet[2754]: I0912 17:47:09.988136 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfd2ea8b-8830-43b1-b23b-cc1ff8873bda-config-volume\") pod \"coredns-674b8bbfcf-fhjwx\" (UID: \"bfd2ea8b-8830-43b1-b23b-cc1ff8873bda\") " pod="kube-system/coredns-674b8bbfcf-fhjwx" Sep 12 17:47:10.007061 containerd[1597]: time="2025-09-12T17:47:10.007016016Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 17:47:10.012955 kubelet[2754]: E0912 17:47:10.012905 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:10.014873 containerd[1597]: time="2025-09-12T17:47:10.014718032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q4bpf,Uid:8247fe61-fcea-4419-acd7-3eb1ca909f32,Namespace:kube-system,Attempt:0,}" Sep 12 17:47:10.062711 containerd[1597]: time="2025-09-12T17:47:10.062649920Z" level=error msg="Failed to destroy network for sandbox \"b5dd34a9cf6533a436e694bad48908b1d1f3baf6850eb2c20a470e05d2fc8431\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:47:10.066113 containerd[1597]: time="2025-09-12T17:47:10.066053461Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9vzb4,Uid:9e0c15c0-7f27-4581-beb0-d93114983a4f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5dd34a9cf6533a436e694bad48908b1d1f3baf6850eb2c20a470e05d2fc8431\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:47:10.073987 kubelet[2754]: E0912 17:47:10.073904 2754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5dd34a9cf6533a436e694bad48908b1d1f3baf6850eb2c20a470e05d2fc8431\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:47:10.074256 kubelet[2754]: E0912 17:47:10.074017 2754 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5dd34a9cf6533a436e694bad48908b1d1f3baf6850eb2c20a470e05d2fc8431\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9vzb4" Sep 12 17:47:10.074256 kubelet[2754]: E0912 17:47:10.074165 2754 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5dd34a9cf6533a436e694bad48908b1d1f3baf6850eb2c20a470e05d2fc8431\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9vzb4" Sep 12 17:47:10.074451 kubelet[2754]: E0912 17:47:10.074240 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9vzb4_calico-system(9e0c15c0-7f27-4581-beb0-d93114983a4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9vzb4_calico-system(9e0c15c0-7f27-4581-beb0-d93114983a4f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b5dd34a9cf6533a436e694bad48908b1d1f3baf6850eb2c20a470e05d2fc8431\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9vzb4" podUID="9e0c15c0-7f27-4581-beb0-d93114983a4f" Sep 12 17:47:10.079964 containerd[1597]: time="2025-09-12T17:47:10.079914820Z" level=error msg="Failed to destroy network for sandbox \"d62488f759e3eda032cb6f77393ca13a108ef903f460b7a92ed1db4d5d261933\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:47:10.081128 containerd[1597]: time="2025-09-12T17:47:10.081095862Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q4bpf,Uid:8247fe61-fcea-4419-acd7-3eb1ca909f32,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d62488f759e3eda032cb6f77393ca13a108ef903f460b7a92ed1db4d5d261933\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:47:10.081367 kubelet[2754]: E0912 17:47:10.081325 2754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d62488f759e3eda032cb6f77393ca13a108ef903f460b7a92ed1db4d5d261933\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:47:10.081431 kubelet[2754]: E0912 17:47:10.081386 2754 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d62488f759e3eda032cb6f77393ca13a108ef903f460b7a92ed1db4d5d261933\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-q4bpf" Sep 12 17:47:10.081431 kubelet[2754]: E0912 17:47:10.081408 2754 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d62488f759e3eda032cb6f77393ca13a108ef903f460b7a92ed1db4d5d261933\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-q4bpf" Sep 12 17:47:10.081493 kubelet[2754]: E0912 17:47:10.081460 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-q4bpf_kube-system(8247fe61-fcea-4419-acd7-3eb1ca909f32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-q4bpf_kube-system(8247fe61-fcea-4419-acd7-3eb1ca909f32)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d62488f759e3eda032cb6f77393ca13a108ef903f460b7a92ed1db4d5d261933\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-q4bpf" podUID="8247fe61-fcea-4419-acd7-3eb1ca909f32" Sep 12 17:47:10.125290 containerd[1597]: time="2025-09-12T17:47:10.124976160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59484845b9-r62pg,Uid:820b7569-1a21-437a-9be7-3d6ea66544ce,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:47:10.238462 containerd[1597]: time="2025-09-12T17:47:10.238331473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59484845b9-ck2cf,Uid:e24c2194-063a-4aa0-a000-c103a3e25873,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:47:10.246167 containerd[1597]: time="2025-09-12T17:47:10.246111135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-7swhj,Uid:5e30145f-893a-4aac-8f16-6ea98bb69360,Namespace:calico-system,Attempt:0,}" Sep 12 17:47:10.251500 kubelet[2754]: E0912 17:47:10.251451 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:10.252057 containerd[1597]: time="2025-09-12T17:47:10.252026249Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fhjwx,Uid:bfd2ea8b-8830-43b1-b23b-cc1ff8873bda,Namespace:kube-system,Attempt:0,}" Sep 12 17:47:10.258225 containerd[1597]: time="2025-09-12T17:47:10.258196585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fc647956-zp6wr,Uid:4751d4d4-5f80-4998-819d-56df18df6e6d,Namespace:calico-system,Attempt:0,}" Sep 12 17:47:10.266953 containerd[1597]: time="2025-09-12T17:47:10.266924379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7996b8f8fb-fs4k5,Uid:b604b9a3-e2b4-42d6-a4ad-7a0abbdc8585,Namespace:calico-system,Attempt:0,}" Sep 12 17:47:10.427680 systemd[1]: run-netns-cni\x2d399cbea4\x2d7ce9\x2d2c79\x2d9c4d\x2dded506bfe860.mount: Deactivated successfully. Sep 12 17:47:10.469776 containerd[1597]: time="2025-09-12T17:47:10.469640708Z" level=error msg="Failed to destroy network for sandbox \"a4272fe04145d84851c3fb319df5afcb743897464bca2dde027560d3e31c1862\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:47:10.473033 systemd[1]: run-netns-cni\x2dea6f3e72\x2d46aa\x2d8962\x2dc3a4\x2d5ce9c7048629.mount: Deactivated successfully. 
Sep 12 17:47:10.474027 containerd[1597]: time="2025-09-12T17:47:10.473981582Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59484845b9-r62pg,Uid:820b7569-1a21-437a-9be7-3d6ea66544ce,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4272fe04145d84851c3fb319df5afcb743897464bca2dde027560d3e31c1862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:47:10.474908 kubelet[2754]: E0912 17:47:10.474486 2754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4272fe04145d84851c3fb319df5afcb743897464bca2dde027560d3e31c1862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:47:10.474908 kubelet[2754]: E0912 17:47:10.474827 2754 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4272fe04145d84851c3fb319df5afcb743897464bca2dde027560d3e31c1862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59484845b9-r62pg" Sep 12 17:47:10.474908 kubelet[2754]: E0912 17:47:10.474861 2754 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4272fe04145d84851c3fb319df5afcb743897464bca2dde027560d3e31c1862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-59484845b9-r62pg" Sep 12 17:47:10.476553 kubelet[2754]: E0912 17:47:10.475469 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59484845b9-r62pg_calico-apiserver(820b7569-1a21-437a-9be7-3d6ea66544ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59484845b9-r62pg_calico-apiserver(820b7569-1a21-437a-9be7-3d6ea66544ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4272fe04145d84851c3fb319df5afcb743897464bca2dde027560d3e31c1862\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59484845b9-r62pg" podUID="820b7569-1a21-437a-9be7-3d6ea66544ce" Sep 12 17:47:10.504616 containerd[1597]: time="2025-09-12T17:47:10.504473260Z" level=error msg="Failed to destroy network for sandbox \"9de8dbc849e0cb32b1cdcb14b0c253ff98d2a1021c1093aa1adc499f469418ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:47:10.507360 systemd[1]: run-netns-cni\x2d61b251dc\x2dc81c\x2dd4f1\x2d6269\x2d92e3287827a1.mount: Deactivated successfully. 
Sep 12 17:47:10.507551 containerd[1597]: time="2025-09-12T17:47:10.507350210Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59484845b9-ck2cf,Uid:e24c2194-063a-4aa0-a000-c103a3e25873,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9de8dbc849e0cb32b1cdcb14b0c253ff98d2a1021c1093aa1adc499f469418ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:47:10.508040 kubelet[2754]: E0912 17:47:10.507928 2754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9de8dbc849e0cb32b1cdcb14b0c253ff98d2a1021c1093aa1adc499f469418ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:47:10.508040 kubelet[2754]: E0912 17:47:10.508008 2754 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9de8dbc849e0cb32b1cdcb14b0c253ff98d2a1021c1093aa1adc499f469418ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59484845b9-ck2cf" Sep 12 17:47:10.508195 kubelet[2754]: E0912 17:47:10.508154 2754 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9de8dbc849e0cb32b1cdcb14b0c253ff98d2a1021c1093aa1adc499f469418ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-59484845b9-ck2cf" Sep 12 17:47:10.508313 kubelet[2754]: E0912 17:47:10.508254 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59484845b9-ck2cf_calico-apiserver(e24c2194-063a-4aa0-a000-c103a3e25873)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59484845b9-ck2cf_calico-apiserver(e24c2194-063a-4aa0-a000-c103a3e25873)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9de8dbc849e0cb32b1cdcb14b0c253ff98d2a1021c1093aa1adc499f469418ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59484845b9-ck2cf" podUID="e24c2194-063a-4aa0-a000-c103a3e25873" Sep 12 17:47:10.509438 containerd[1597]: time="2025-09-12T17:47:10.509378345Z" level=error msg="Failed to destroy network for sandbox \"38030dc5b71cf6b850512818c61f94e1db0de5294e5472f4ef093e1d3bab25ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:47:10.510957 containerd[1597]: time="2025-09-12T17:47:10.510857106Z" level=error msg="Failed to destroy network for sandbox \"bb788b4bc0dbe0b2ccf444761219165a18e0729689a5d98a4cd3546c914f9e7e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:47:10.511802 systemd[1]: run-netns-cni\x2d831b421e\x2dfa75\x2d249a\x2de06f\x2dd3b38db2b04e.mount: Deactivated successfully. 
Sep 12 17:47:10.515582 containerd[1597]: time="2025-09-12T17:47:10.515444704Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fhjwx,Uid:bfd2ea8b-8830-43b1-b23b-cc1ff8873bda,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"38030dc5b71cf6b850512818c61f94e1db0de5294e5472f4ef093e1d3bab25ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:47:10.516220 systemd[1]: run-netns-cni\x2d57dbf481\x2d26ae\x2dc3af\x2d9967\x2dd7c13a4795b1.mount: Deactivated successfully. Sep 12 17:47:10.517643 kubelet[2754]: E0912 17:47:10.516326 2754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38030dc5b71cf6b850512818c61f94e1db0de5294e5472f4ef093e1d3bab25ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:47:10.517643 kubelet[2754]: E0912 17:47:10.516373 2754 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38030dc5b71cf6b850512818c61f94e1db0de5294e5472f4ef093e1d3bab25ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fhjwx" Sep 12 17:47:10.517643 kubelet[2754]: E0912 17:47:10.516392 2754 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38030dc5b71cf6b850512818c61f94e1db0de5294e5472f4ef093e1d3bab25ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fhjwx" Sep 12 17:47:10.517760 kubelet[2754]: E0912 17:47:10.516435 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fhjwx_kube-system(bfd2ea8b-8830-43b1-b23b-cc1ff8873bda)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fhjwx_kube-system(bfd2ea8b-8830-43b1-b23b-cc1ff8873bda)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38030dc5b71cf6b850512818c61f94e1db0de5294e5472f4ef093e1d3bab25ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fhjwx" podUID="bfd2ea8b-8830-43b1-b23b-cc1ff8873bda" Sep 12 17:47:10.519562 containerd[1597]: time="2025-09-12T17:47:10.519498718Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fc647956-zp6wr,Uid:4751d4d4-5f80-4998-819d-56df18df6e6d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb788b4bc0dbe0b2ccf444761219165a18e0729689a5d98a4cd3546c914f9e7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:47:10.520135 kubelet[2754]: E0912 17:47:10.519836 2754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb788b4bc0dbe0b2ccf444761219165a18e0729689a5d98a4cd3546c914f9e7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:47:10.520135 kubelet[2754]: E0912 
17:47:10.519885 2754 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb788b4bc0dbe0b2ccf444761219165a18e0729689a5d98a4cd3546c914f9e7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6fc647956-zp6wr" Sep 12 17:47:10.520135 kubelet[2754]: E0912 17:47:10.519903 2754 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb788b4bc0dbe0b2ccf444761219165a18e0729689a5d98a4cd3546c914f9e7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6fc647956-zp6wr" Sep 12 17:47:10.520240 kubelet[2754]: E0912 17:47:10.519951 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6fc647956-zp6wr_calico-system(4751d4d4-5f80-4998-819d-56df18df6e6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6fc647956-zp6wr_calico-system(4751d4d4-5f80-4998-819d-56df18df6e6d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb788b4bc0dbe0b2ccf444761219165a18e0729689a5d98a4cd3546c914f9e7e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6fc647956-zp6wr" podUID="4751d4d4-5f80-4998-819d-56df18df6e6d" Sep 12 17:47:10.522086 containerd[1597]: time="2025-09-12T17:47:10.521956592Z" level=error msg="Failed to destroy network for sandbox \"ef80eb53a41ec00708d7b4a4e8e4e0a97766c4eed9c49f94271b4c6774855701\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:47:10.523340 containerd[1597]: time="2025-09-12T17:47:10.523297042Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-7swhj,Uid:5e30145f-893a-4aac-8f16-6ea98bb69360,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef80eb53a41ec00708d7b4a4e8e4e0a97766c4eed9c49f94271b4c6774855701\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:47:10.523544 kubelet[2754]: E0912 17:47:10.523495 2754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef80eb53a41ec00708d7b4a4e8e4e0a97766c4eed9c49f94271b4c6774855701\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:47:10.523608 kubelet[2754]: E0912 17:47:10.523552 2754 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef80eb53a41ec00708d7b4a4e8e4e0a97766c4eed9c49f94271b4c6774855701\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-7swhj" Sep 12 17:47:10.523608 kubelet[2754]: E0912 17:47:10.523569 2754 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef80eb53a41ec00708d7b4a4e8e4e0a97766c4eed9c49f94271b4c6774855701\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-7swhj" Sep 12 17:47:10.523676 kubelet[2754]: E0912 17:47:10.523609 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-7swhj_calico-system(5e30145f-893a-4aac-8f16-6ea98bb69360)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-7swhj_calico-system(5e30145f-893a-4aac-8f16-6ea98bb69360)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef80eb53a41ec00708d7b4a4e8e4e0a97766c4eed9c49f94271b4c6774855701\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-7swhj" podUID="5e30145f-893a-4aac-8f16-6ea98bb69360" Sep 12 17:47:10.532626 containerd[1597]: time="2025-09-12T17:47:10.532572297Z" level=error msg="Failed to destroy network for sandbox \"953a39378a6046eb808d09f1c3801703c385a058f215de3e1a7209c3f260b617\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:47:10.533884 containerd[1597]: time="2025-09-12T17:47:10.533842696Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7996b8f8fb-fs4k5,Uid:b604b9a3-e2b4-42d6-a4ad-7a0abbdc8585,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"953a39378a6046eb808d09f1c3801703c385a058f215de3e1a7209c3f260b617\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:47:10.534077 kubelet[2754]: E0912 17:47:10.534042 
2754 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"953a39378a6046eb808d09f1c3801703c385a058f215de3e1a7209c3f260b617\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:47:10.534133 kubelet[2754]: E0912 17:47:10.534094 2754 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"953a39378a6046eb808d09f1c3801703c385a058f215de3e1a7209c3f260b617\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7996b8f8fb-fs4k5" Sep 12 17:47:10.534133 kubelet[2754]: E0912 17:47:10.534128 2754 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"953a39378a6046eb808d09f1c3801703c385a058f215de3e1a7209c3f260b617\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7996b8f8fb-fs4k5" Sep 12 17:47:10.534201 kubelet[2754]: E0912 17:47:10.534173 2754 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7996b8f8fb-fs4k5_calico-system(b604b9a3-e2b4-42d6-a4ad-7a0abbdc8585)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7996b8f8fb-fs4k5_calico-system(b604b9a3-e2b4-42d6-a4ad-7a0abbdc8585)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"953a39378a6046eb808d09f1c3801703c385a058f215de3e1a7209c3f260b617\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7996b8f8fb-fs4k5" podUID="b604b9a3-e2b4-42d6-a4ad-7a0abbdc8585" Sep 12 17:47:11.400975 systemd[1]: run-netns-cni\x2dd0dd7bef\x2dd874\x2d9ba8\x2dd949\x2da17c3d0547a7.mount: Deactivated successfully. Sep 12 17:47:11.401116 systemd[1]: run-netns-cni\x2d2a450c43\x2deacf\x2db721\x2db637\x2d94f267c0f5db.mount: Deactivated successfully. Sep 12 17:47:17.556098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3355173989.mount: Deactivated successfully. Sep 12 17:47:18.268267 containerd[1597]: time="2025-09-12T17:47:18.268184813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:18.269019 containerd[1597]: time="2025-09-12T17:47:18.268963305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 17:47:18.278827 containerd[1597]: time="2025-09-12T17:47:18.278763615Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:18.280953 containerd[1597]: time="2025-09-12T17:47:18.280905829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:18.281431 containerd[1597]: time="2025-09-12T17:47:18.281393675Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 8.274332193s" Sep 12 17:47:18.281431 containerd[1597]: time="2025-09-12T17:47:18.281425174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 17:47:18.305535 containerd[1597]: time="2025-09-12T17:47:18.305485230Z" level=info msg="CreateContainer within sandbox \"fb65536c94044db49b6ace65dabb3c8048fc38a99d400846d16222c26857d2bd\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:47:18.317082 containerd[1597]: time="2025-09-12T17:47:18.317017021Z" level=info msg="Container 0f97ba78818e315a216b6987a99114174d5fb63de18f091ffe353aee54405164: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:47:18.360073 containerd[1597]: time="2025-09-12T17:47:18.359993169Z" level=info msg="CreateContainer within sandbox \"fb65536c94044db49b6ace65dabb3c8048fc38a99d400846d16222c26857d2bd\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0f97ba78818e315a216b6987a99114174d5fb63de18f091ffe353aee54405164\"" Sep 12 17:47:18.360741 containerd[1597]: time="2025-09-12T17:47:18.360687252Z" level=info msg="StartContainer for \"0f97ba78818e315a216b6987a99114174d5fb63de18f091ffe353aee54405164\"" Sep 12 17:47:18.362206 containerd[1597]: time="2025-09-12T17:47:18.362170398Z" level=info msg="connecting to shim 0f97ba78818e315a216b6987a99114174d5fb63de18f091ffe353aee54405164" address="unix:///run/containerd/s/901653f2269fa7c8d8b5e99b08bfc4e996e1ed11cc363c742cfc7ca0433368e4" protocol=ttrpc version=3 Sep 12 17:47:18.420880 systemd[1]: Started cri-containerd-0f97ba78818e315a216b6987a99114174d5fb63de18f091ffe353aee54405164.scope - libcontainer container 0f97ba78818e315a216b6987a99114174d5fb63de18f091ffe353aee54405164. 
Sep 12 17:47:18.470982 containerd[1597]: time="2025-09-12T17:47:18.470939003Z" level=info msg="StartContainer for \"0f97ba78818e315a216b6987a99114174d5fb63de18f091ffe353aee54405164\" returns successfully" Sep 12 17:47:18.559952 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:47:18.560635 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 12 17:47:18.745415 kubelet[2754]: I0912 17:47:18.745288 2754 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4751d4d4-5f80-4998-819d-56df18df6e6d-whisker-ca-bundle\") pod \"4751d4d4-5f80-4998-819d-56df18df6e6d\" (UID: \"4751d4d4-5f80-4998-819d-56df18df6e6d\") " Sep 12 17:47:18.745415 kubelet[2754]: I0912 17:47:18.745345 2754 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4751d4d4-5f80-4998-819d-56df18df6e6d-whisker-backend-key-pair\") pod \"4751d4d4-5f80-4998-819d-56df18df6e6d\" (UID: \"4751d4d4-5f80-4998-819d-56df18df6e6d\") " Sep 12 17:47:18.745415 kubelet[2754]: I0912 17:47:18.745395 2754 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcrss\" (UniqueName: \"kubernetes.io/projected/4751d4d4-5f80-4998-819d-56df18df6e6d-kube-api-access-dcrss\") pod \"4751d4d4-5f80-4998-819d-56df18df6e6d\" (UID: \"4751d4d4-5f80-4998-819d-56df18df6e6d\") " Sep 12 17:47:18.746466 kubelet[2754]: I0912 17:47:18.746439 2754 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4751d4d4-5f80-4998-819d-56df18df6e6d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4751d4d4-5f80-4998-819d-56df18df6e6d" (UID: "4751d4d4-5f80-4998-819d-56df18df6e6d"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:47:18.750707 kubelet[2754]: I0912 17:47:18.750615 2754 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4751d4d4-5f80-4998-819d-56df18df6e6d-kube-api-access-dcrss" (OuterVolumeSpecName: "kube-api-access-dcrss") pod "4751d4d4-5f80-4998-819d-56df18df6e6d" (UID: "4751d4d4-5f80-4998-819d-56df18df6e6d"). InnerVolumeSpecName "kube-api-access-dcrss". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:47:18.752158 systemd[1]: var-lib-kubelet-pods-4751d4d4\x2d5f80\x2d4998\x2d819d\x2d56df18df6e6d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddcrss.mount: Deactivated successfully. Sep 12 17:47:18.752291 systemd[1]: var-lib-kubelet-pods-4751d4d4\x2d5f80\x2d4998\x2d819d\x2d56df18df6e6d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 17:47:18.752845 kubelet[2754]: I0912 17:47:18.752799 2754 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4751d4d4-5f80-4998-819d-56df18df6e6d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4751d4d4-5f80-4998-819d-56df18df6e6d" (UID: "4751d4d4-5f80-4998-819d-56df18df6e6d"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:47:18.846579 kubelet[2754]: I0912 17:47:18.846526 2754 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dcrss\" (UniqueName: \"kubernetes.io/projected/4751d4d4-5f80-4998-819d-56df18df6e6d-kube-api-access-dcrss\") on node \"localhost\" DevicePath \"\"" Sep 12 17:47:18.846579 kubelet[2754]: I0912 17:47:18.846563 2754 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4751d4d4-5f80-4998-819d-56df18df6e6d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 12 17:47:18.846579 kubelet[2754]: I0912 17:47:18.846572 2754 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4751d4d4-5f80-4998-819d-56df18df6e6d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 12 17:47:18.919657 systemd[1]: Removed slice kubepods-besteffort-pod4751d4d4_5f80_4998_819d_56df18df6e6d.slice - libcontainer container kubepods-besteffort-pod4751d4d4_5f80_4998_819d_56df18df6e6d.slice. 
Sep 12 17:47:19.043518 kubelet[2754]: I0912 17:47:19.043047 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-h8lxt" podStartSLOduration=3.8486812 podStartE2EDuration="23.043024006s" podCreationTimestamp="2025-09-12 17:46:56 +0000 UTC" firstStartedPulling="2025-09-12 17:46:59.087941733 +0000 UTC m=+22.265564696" lastFinishedPulling="2025-09-12 17:47:18.282284529 +0000 UTC m=+41.459907502" observedRunningTime="2025-09-12 17:47:19.04169507 +0000 UTC m=+42.219318053" watchObservedRunningTime="2025-09-12 17:47:19.043024006 +0000 UTC m=+42.220646979" Sep 12 17:47:19.420119 containerd[1597]: time="2025-09-12T17:47:19.420065091Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f97ba78818e315a216b6987a99114174d5fb63de18f091ffe353aee54405164\" id:\"dd1a223ae538d2e58664bbb329553fcad1beb8ac3a7216c160efed58c3768e76\" pid:4031 exit_status:1 exited_at:{seconds:1757699239 nanos:418903039}" Sep 12 17:47:19.423299 systemd[1]: Created slice kubepods-besteffort-pod7aa66cae_1f74_4f57_80a3_9a1728fc9e88.slice - libcontainer container kubepods-besteffort-pod7aa66cae_1f74_4f57_80a3_9a1728fc9e88.slice. 
Sep 12 17:47:19.469408 kubelet[2754]: I0912 17:47:19.469365 2754 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:47:19.469824 kubelet[2754]: E0912 17:47:19.469794 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:19.550665 kubelet[2754]: I0912 17:47:19.550606 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7aa66cae-1f74-4f57-80a3-9a1728fc9e88-whisker-backend-key-pair\") pod \"whisker-fdd8fcf8c-xdb8c\" (UID: \"7aa66cae-1f74-4f57-80a3-9a1728fc9e88\") " pod="calico-system/whisker-fdd8fcf8c-xdb8c" Sep 12 17:47:19.550665 kubelet[2754]: I0912 17:47:19.550648 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7aa66cae-1f74-4f57-80a3-9a1728fc9e88-whisker-ca-bundle\") pod \"whisker-fdd8fcf8c-xdb8c\" (UID: \"7aa66cae-1f74-4f57-80a3-9a1728fc9e88\") " pod="calico-system/whisker-fdd8fcf8c-xdb8c" Sep 12 17:47:19.550665 kubelet[2754]: I0912 17:47:19.550677 2754 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h42kt\" (UniqueName: \"kubernetes.io/projected/7aa66cae-1f74-4f57-80a3-9a1728fc9e88-kube-api-access-h42kt\") pod \"whisker-fdd8fcf8c-xdb8c\" (UID: \"7aa66cae-1f74-4f57-80a3-9a1728fc9e88\") " pod="calico-system/whisker-fdd8fcf8c-xdb8c" Sep 12 17:47:19.727160 containerd[1597]: time="2025-09-12T17:47:19.727021051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fdd8fcf8c-xdb8c,Uid:7aa66cae-1f74-4f57-80a3-9a1728fc9e88,Namespace:calico-system,Attempt:0,}" Sep 12 17:47:20.028254 kubelet[2754]: E0912 17:47:20.027930 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:20.125857 containerd[1597]: time="2025-09-12T17:47:20.125540241Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f97ba78818e315a216b6987a99114174d5fb63de18f091ffe353aee54405164\" id:\"35e6fbbe83bd700f84ab0364c8615c709fcb0f93f8ccadbf0e92a386a6d0c56f\" pid:4192 exit_status:1 exited_at:{seconds:1757699240 nanos:124943039}" Sep 12 17:47:20.364298 systemd-networkd[1497]: cali74122519f14: Link UP Sep 12 17:47:20.365835 systemd-networkd[1497]: cali74122519f14: Gained carrier Sep 12 17:47:20.388645 containerd[1597]: 2025-09-12 17:47:20.052 [INFO][4155] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:47:20.388645 containerd[1597]: 2025-09-12 17:47:20.102 [INFO][4155] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--fdd8fcf8c--xdb8c-eth0 whisker-fdd8fcf8c- calico-system 7aa66cae-1f74-4f57-80a3-9a1728fc9e88 965 0 2025-09-12 17:47:19 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:fdd8fcf8c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-fdd8fcf8c-xdb8c eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali74122519f14 [] [] }} ContainerID="b5129859b8d634a2eec26d6f1adee1d0e0c76f0a39beb80fd4bc1ca09ee4de00" Namespace="calico-system" Pod="whisker-fdd8fcf8c-xdb8c" WorkloadEndpoint="localhost-k8s-whisker--fdd8fcf8c--xdb8c-" Sep 12 17:47:20.388645 containerd[1597]: 2025-09-12 17:47:20.102 [INFO][4155] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b5129859b8d634a2eec26d6f1adee1d0e0c76f0a39beb80fd4bc1ca09ee4de00" Namespace="calico-system" Pod="whisker-fdd8fcf8c-xdb8c" WorkloadEndpoint="localhost-k8s-whisker--fdd8fcf8c--xdb8c-eth0" Sep 12 17:47:20.388645 containerd[1597]: 2025-09-12 
17:47:20.180 [INFO][4207] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b5129859b8d634a2eec26d6f1adee1d0e0c76f0a39beb80fd4bc1ca09ee4de00" HandleID="k8s-pod-network.b5129859b8d634a2eec26d6f1adee1d0e0c76f0a39beb80fd4bc1ca09ee4de00" Workload="localhost-k8s-whisker--fdd8fcf8c--xdb8c-eth0" Sep 12 17:47:20.388931 containerd[1597]: 2025-09-12 17:47:20.181 [INFO][4207] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b5129859b8d634a2eec26d6f1adee1d0e0c76f0a39beb80fd4bc1ca09ee4de00" HandleID="k8s-pod-network.b5129859b8d634a2eec26d6f1adee1d0e0c76f0a39beb80fd4bc1ca09ee4de00" Workload="localhost-k8s-whisker--fdd8fcf8c--xdb8c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000bf6c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-fdd8fcf8c-xdb8c", "timestamp":"2025-09-12 17:47:20.180280033 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:47:20.388931 containerd[1597]: 2025-09-12 17:47:20.181 [INFO][4207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:47:20.388931 containerd[1597]: 2025-09-12 17:47:20.181 [INFO][4207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:47:20.388931 containerd[1597]: 2025-09-12 17:47:20.181 [INFO][4207] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:47:20.388931 containerd[1597]: 2025-09-12 17:47:20.274 [INFO][4207] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b5129859b8d634a2eec26d6f1adee1d0e0c76f0a39beb80fd4bc1ca09ee4de00" host="localhost" Sep 12 17:47:20.388931 containerd[1597]: 2025-09-12 17:47:20.281 [INFO][4207] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:47:20.388931 containerd[1597]: 2025-09-12 17:47:20.285 [INFO][4207] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:47:20.388931 containerd[1597]: 2025-09-12 17:47:20.290 [INFO][4207] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:47:20.388931 containerd[1597]: 2025-09-12 17:47:20.292 [INFO][4207] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:47:20.388931 containerd[1597]: 2025-09-12 17:47:20.292 [INFO][4207] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b5129859b8d634a2eec26d6f1adee1d0e0c76f0a39beb80fd4bc1ca09ee4de00" host="localhost" Sep 12 17:47:20.389139 containerd[1597]: 2025-09-12 17:47:20.293 [INFO][4207] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b5129859b8d634a2eec26d6f1adee1d0e0c76f0a39beb80fd4bc1ca09ee4de00 Sep 12 17:47:20.389139 containerd[1597]: 2025-09-12 17:47:20.327 [INFO][4207] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b5129859b8d634a2eec26d6f1adee1d0e0c76f0a39beb80fd4bc1ca09ee4de00" host="localhost" Sep 12 17:47:20.389139 containerd[1597]: 2025-09-12 17:47:20.350 [INFO][4207] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.b5129859b8d634a2eec26d6f1adee1d0e0c76f0a39beb80fd4bc1ca09ee4de00" host="localhost" Sep 12 17:47:20.389139 containerd[1597]: 2025-09-12 17:47:20.350 [INFO][4207] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b5129859b8d634a2eec26d6f1adee1d0e0c76f0a39beb80fd4bc1ca09ee4de00" host="localhost" Sep 12 17:47:20.389139 containerd[1597]: 2025-09-12 17:47:20.350 [INFO][4207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:47:20.389139 containerd[1597]: 2025-09-12 17:47:20.350 [INFO][4207] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b5129859b8d634a2eec26d6f1adee1d0e0c76f0a39beb80fd4bc1ca09ee4de00" HandleID="k8s-pod-network.b5129859b8d634a2eec26d6f1adee1d0e0c76f0a39beb80fd4bc1ca09ee4de00" Workload="localhost-k8s-whisker--fdd8fcf8c--xdb8c-eth0" Sep 12 17:47:20.389263 containerd[1597]: 2025-09-12 17:47:20.354 [INFO][4155] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b5129859b8d634a2eec26d6f1adee1d0e0c76f0a39beb80fd4bc1ca09ee4de00" Namespace="calico-system" Pod="whisker-fdd8fcf8c-xdb8c" WorkloadEndpoint="localhost-k8s-whisker--fdd8fcf8c--xdb8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--fdd8fcf8c--xdb8c-eth0", GenerateName:"whisker-fdd8fcf8c-", Namespace:"calico-system", SelfLink:"", UID:"7aa66cae-1f74-4f57-80a3-9a1728fc9e88", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 47, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"fdd8fcf8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-fdd8fcf8c-xdb8c", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali74122519f14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:47:20.389263 containerd[1597]: 2025-09-12 17:47:20.355 [INFO][4155] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b5129859b8d634a2eec26d6f1adee1d0e0c76f0a39beb80fd4bc1ca09ee4de00" Namespace="calico-system" Pod="whisker-fdd8fcf8c-xdb8c" WorkloadEndpoint="localhost-k8s-whisker--fdd8fcf8c--xdb8c-eth0" Sep 12 17:47:20.389353 containerd[1597]: 2025-09-12 17:47:20.355 [INFO][4155] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali74122519f14 ContainerID="b5129859b8d634a2eec26d6f1adee1d0e0c76f0a39beb80fd4bc1ca09ee4de00" Namespace="calico-system" Pod="whisker-fdd8fcf8c-xdb8c" WorkloadEndpoint="localhost-k8s-whisker--fdd8fcf8c--xdb8c-eth0" Sep 12 17:47:20.389353 containerd[1597]: 2025-09-12 17:47:20.364 [INFO][4155] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b5129859b8d634a2eec26d6f1adee1d0e0c76f0a39beb80fd4bc1ca09ee4de00" Namespace="calico-system" Pod="whisker-fdd8fcf8c-xdb8c" WorkloadEndpoint="localhost-k8s-whisker--fdd8fcf8c--xdb8c-eth0" Sep 12 17:47:20.389400 containerd[1597]: 2025-09-12 17:47:20.365 [INFO][4155] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b5129859b8d634a2eec26d6f1adee1d0e0c76f0a39beb80fd4bc1ca09ee4de00" Namespace="calico-system" Pod="whisker-fdd8fcf8c-xdb8c" 
WorkloadEndpoint="localhost-k8s-whisker--fdd8fcf8c--xdb8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--fdd8fcf8c--xdb8c-eth0", GenerateName:"whisker-fdd8fcf8c-", Namespace:"calico-system", SelfLink:"", UID:"7aa66cae-1f74-4f57-80a3-9a1728fc9e88", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 47, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"fdd8fcf8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b5129859b8d634a2eec26d6f1adee1d0e0c76f0a39beb80fd4bc1ca09ee4de00", Pod:"whisker-fdd8fcf8c-xdb8c", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali74122519f14", MAC:"a6:af:62:26:c1:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:47:20.389457 containerd[1597]: 2025-09-12 17:47:20.385 [INFO][4155] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b5129859b8d634a2eec26d6f1adee1d0e0c76f0a39beb80fd4bc1ca09ee4de00" Namespace="calico-system" Pod="whisker-fdd8fcf8c-xdb8c" WorkloadEndpoint="localhost-k8s-whisker--fdd8fcf8c--xdb8c-eth0" Sep 12 17:47:20.569545 systemd-networkd[1497]: vxlan.calico: Link UP Sep 12 17:47:20.569555 systemd-networkd[1497]: vxlan.calico: 
Gained carrier Sep 12 17:47:20.727069 containerd[1597]: time="2025-09-12T17:47:20.726994942Z" level=info msg="connecting to shim b5129859b8d634a2eec26d6f1adee1d0e0c76f0a39beb80fd4bc1ca09ee4de00" address="unix:///run/containerd/s/9212f2c346873ed277ae1d38062f3bcefe23bbf3797096a22815975acbd2881a" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:47:20.753950 systemd[1]: Started cri-containerd-b5129859b8d634a2eec26d6f1adee1d0e0c76f0a39beb80fd4bc1ca09ee4de00.scope - libcontainer container b5129859b8d634a2eec26d6f1adee1d0e0c76f0a39beb80fd4bc1ca09ee4de00. Sep 12 17:47:20.769159 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:47:20.841158 containerd[1597]: time="2025-09-12T17:47:20.841108219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fdd8fcf8c-xdb8c,Uid:7aa66cae-1f74-4f57-80a3-9a1728fc9e88,Namespace:calico-system,Attempt:0,} returns sandbox id \"b5129859b8d634a2eec26d6f1adee1d0e0c76f0a39beb80fd4bc1ca09ee4de00\"" Sep 12 17:47:20.850966 containerd[1597]: time="2025-09-12T17:47:20.850917612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 17:47:20.906752 kubelet[2754]: E0912 17:47:20.906685 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:20.907197 containerd[1597]: time="2025-09-12T17:47:20.907152081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9vzb4,Uid:9e0c15c0-7f27-4581-beb0-d93114983a4f,Namespace:calico-system,Attempt:0,}" Sep 12 17:47:20.907330 containerd[1597]: time="2025-09-12T17:47:20.907152742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q4bpf,Uid:8247fe61-fcea-4419-acd7-3eb1ca909f32,Namespace:kube-system,Attempt:0,}" Sep 12 17:47:20.908960 kubelet[2754]: I0912 17:47:20.908920 2754 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="4751d4d4-5f80-4998-819d-56df18df6e6d" path="/var/lib/kubelet/pods/4751d4d4-5f80-4998-819d-56df18df6e6d/volumes" Sep 12 17:47:21.127181 containerd[1597]: time="2025-09-12T17:47:21.127005254Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f97ba78818e315a216b6987a99114174d5fb63de18f091ffe353aee54405164\" id:\"5eaf3e6e0f2849b2b3cbd36770e9c57fcdfa295c1c88b9c2a72db88942ed50ea\" pid:4364 exit_status:1 exited_at:{seconds:1757699241 nanos:126657812}" Sep 12 17:47:21.204548 systemd-networkd[1497]: cali07e6d66bc9f: Link UP Sep 12 17:47:21.205315 systemd-networkd[1497]: cali07e6d66bc9f: Gained carrier Sep 12 17:47:21.221119 containerd[1597]: 2025-09-12 17:47:21.140 [INFO][4378] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--q4bpf-eth0 coredns-674b8bbfcf- kube-system 8247fe61-fcea-4419-acd7-3eb1ca909f32 889 0 2025-09-12 17:46:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-q4bpf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali07e6d66bc9f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="41bd60bf3f3fee306f0b15c3cb836b56cb312dd5e99c2420e6ed6d80db708392" Namespace="kube-system" Pod="coredns-674b8bbfcf-q4bpf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--q4bpf-" Sep 12 17:47:21.221119 containerd[1597]: 2025-09-12 17:47:21.141 [INFO][4378] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="41bd60bf3f3fee306f0b15c3cb836b56cb312dd5e99c2420e6ed6d80db708392" Namespace="kube-system" Pod="coredns-674b8bbfcf-q4bpf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--q4bpf-eth0" Sep 12 17:47:21.221119 containerd[1597]: 2025-09-12 17:47:21.170 [INFO][4405] ipam/ipam_plugin.go 225: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="41bd60bf3f3fee306f0b15c3cb836b56cb312dd5e99c2420e6ed6d80db708392" HandleID="k8s-pod-network.41bd60bf3f3fee306f0b15c3cb836b56cb312dd5e99c2420e6ed6d80db708392" Workload="localhost-k8s-coredns--674b8bbfcf--q4bpf-eth0" Sep 12 17:47:21.221399 containerd[1597]: 2025-09-12 17:47:21.170 [INFO][4405] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="41bd60bf3f3fee306f0b15c3cb836b56cb312dd5e99c2420e6ed6d80db708392" HandleID="k8s-pod-network.41bd60bf3f3fee306f0b15c3cb836b56cb312dd5e99c2420e6ed6d80db708392" Workload="localhost-k8s-coredns--674b8bbfcf--q4bpf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e6c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-q4bpf", "timestamp":"2025-09-12 17:47:21.170391256 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:47:21.221399 containerd[1597]: 2025-09-12 17:47:21.170 [INFO][4405] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:47:21.221399 containerd[1597]: 2025-09-12 17:47:21.170 [INFO][4405] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:47:21.221399 containerd[1597]: 2025-09-12 17:47:21.170 [INFO][4405] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:47:21.221399 containerd[1597]: 2025-09-12 17:47:21.178 [INFO][4405] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.41bd60bf3f3fee306f0b15c3cb836b56cb312dd5e99c2420e6ed6d80db708392" host="localhost" Sep 12 17:47:21.221399 containerd[1597]: 2025-09-12 17:47:21.182 [INFO][4405] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:47:21.221399 containerd[1597]: 2025-09-12 17:47:21.186 [INFO][4405] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:47:21.221399 containerd[1597]: 2025-09-12 17:47:21.188 [INFO][4405] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:47:21.221399 containerd[1597]: 2025-09-12 17:47:21.190 [INFO][4405] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:47:21.221399 containerd[1597]: 2025-09-12 17:47:21.190 [INFO][4405] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.41bd60bf3f3fee306f0b15c3cb836b56cb312dd5e99c2420e6ed6d80db708392" host="localhost" Sep 12 17:47:21.221634 containerd[1597]: 2025-09-12 17:47:21.191 [INFO][4405] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.41bd60bf3f3fee306f0b15c3cb836b56cb312dd5e99c2420e6ed6d80db708392 Sep 12 17:47:21.221634 containerd[1597]: 2025-09-12 17:47:21.195 [INFO][4405] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.41bd60bf3f3fee306f0b15c3cb836b56cb312dd5e99c2420e6ed6d80db708392" host="localhost" Sep 12 17:47:21.221634 containerd[1597]: 2025-09-12 17:47:21.199 [INFO][4405] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.41bd60bf3f3fee306f0b15c3cb836b56cb312dd5e99c2420e6ed6d80db708392" host="localhost" Sep 12 17:47:21.221634 containerd[1597]: 2025-09-12 17:47:21.199 [INFO][4405] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.41bd60bf3f3fee306f0b15c3cb836b56cb312dd5e99c2420e6ed6d80db708392" host="localhost" Sep 12 17:47:21.221634 containerd[1597]: 2025-09-12 17:47:21.199 [INFO][4405] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:47:21.221634 containerd[1597]: 2025-09-12 17:47:21.199 [INFO][4405] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="41bd60bf3f3fee306f0b15c3cb836b56cb312dd5e99c2420e6ed6d80db708392" HandleID="k8s-pod-network.41bd60bf3f3fee306f0b15c3cb836b56cb312dd5e99c2420e6ed6d80db708392" Workload="localhost-k8s-coredns--674b8bbfcf--q4bpf-eth0" Sep 12 17:47:21.221788 containerd[1597]: 2025-09-12 17:47:21.202 [INFO][4378] cni-plugin/k8s.go 418: Populated endpoint ContainerID="41bd60bf3f3fee306f0b15c3cb836b56cb312dd5e99c2420e6ed6d80db708392" Namespace="kube-system" Pod="coredns-674b8bbfcf-q4bpf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--q4bpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--q4bpf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8247fe61-fcea-4419-acd7-3eb1ca909f32", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 46, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-q4bpf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali07e6d66bc9f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:47:21.221868 containerd[1597]: 2025-09-12 17:47:21.202 [INFO][4378] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="41bd60bf3f3fee306f0b15c3cb836b56cb312dd5e99c2420e6ed6d80db708392" Namespace="kube-system" Pod="coredns-674b8bbfcf-q4bpf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--q4bpf-eth0" Sep 12 17:47:21.221868 containerd[1597]: 2025-09-12 17:47:21.202 [INFO][4378] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07e6d66bc9f ContainerID="41bd60bf3f3fee306f0b15c3cb836b56cb312dd5e99c2420e6ed6d80db708392" Namespace="kube-system" Pod="coredns-674b8bbfcf-q4bpf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--q4bpf-eth0" Sep 12 17:47:21.221868 containerd[1597]: 2025-09-12 17:47:21.205 [INFO][4378] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41bd60bf3f3fee306f0b15c3cb836b56cb312dd5e99c2420e6ed6d80db708392" Namespace="kube-system" Pod="coredns-674b8bbfcf-q4bpf" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--q4bpf-eth0" Sep 12 17:47:21.221936 containerd[1597]: 2025-09-12 17:47:21.205 [INFO][4378] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="41bd60bf3f3fee306f0b15c3cb836b56cb312dd5e99c2420e6ed6d80db708392" Namespace="kube-system" Pod="coredns-674b8bbfcf-q4bpf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--q4bpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--q4bpf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8247fe61-fcea-4419-acd7-3eb1ca909f32", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 46, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"41bd60bf3f3fee306f0b15c3cb836b56cb312dd5e99c2420e6ed6d80db708392", Pod:"coredns-674b8bbfcf-q4bpf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali07e6d66bc9f", MAC:"42:30:4c:05:e3:fb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:47:21.221936 containerd[1597]: 2025-09-12 17:47:21.214 [INFO][4378] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="41bd60bf3f3fee306f0b15c3cb836b56cb312dd5e99c2420e6ed6d80db708392" Namespace="kube-system" Pod="coredns-674b8bbfcf-q4bpf" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--q4bpf-eth0" Sep 12 17:47:21.267427 containerd[1597]: time="2025-09-12T17:47:21.267352933Z" level=info msg="connecting to shim 41bd60bf3f3fee306f0b15c3cb836b56cb312dd5e99c2420e6ed6d80db708392" address="unix:///run/containerd/s/b9b90a8c81f8e87b5556d125f3be6cd2e86c34d06e9d6a48c10525b643dc1612" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:47:21.304977 systemd[1]: Started cri-containerd-41bd60bf3f3fee306f0b15c3cb836b56cb312dd5e99c2420e6ed6d80db708392.scope - libcontainer container 41bd60bf3f3fee306f0b15c3cb836b56cb312dd5e99c2420e6ed6d80db708392. 
Sep 12 17:47:21.333660 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:47:21.335443 systemd-networkd[1497]: calia3d9ae3b7f2: Link UP Sep 12 17:47:21.336492 systemd-networkd[1497]: calia3d9ae3b7f2: Gained carrier Sep 12 17:47:21.352299 containerd[1597]: 2025-09-12 17:47:21.141 [INFO][4376] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--9vzb4-eth0 csi-node-driver- calico-system 9e0c15c0-7f27-4581-beb0-d93114983a4f 781 0 2025-09-12 17:46:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-9vzb4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia3d9ae3b7f2 [] [] }} ContainerID="82b01c65efb30653357b28270817f97a3a7d6dc6044537ae12dd013896fa37b9" Namespace="calico-system" Pod="csi-node-driver-9vzb4" WorkloadEndpoint="localhost-k8s-csi--node--driver--9vzb4-" Sep 12 17:47:21.352299 containerd[1597]: 2025-09-12 17:47:21.141 [INFO][4376] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="82b01c65efb30653357b28270817f97a3a7d6dc6044537ae12dd013896fa37b9" Namespace="calico-system" Pod="csi-node-driver-9vzb4" WorkloadEndpoint="localhost-k8s-csi--node--driver--9vzb4-eth0" Sep 12 17:47:21.352299 containerd[1597]: 2025-09-12 17:47:21.177 [INFO][4407] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82b01c65efb30653357b28270817f97a3a7d6dc6044537ae12dd013896fa37b9" HandleID="k8s-pod-network.82b01c65efb30653357b28270817f97a3a7d6dc6044537ae12dd013896fa37b9" Workload="localhost-k8s-csi--node--driver--9vzb4-eth0" Sep 12 17:47:21.352299 containerd[1597]: 
2025-09-12 17:47:21.177 [INFO][4407] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="82b01c65efb30653357b28270817f97a3a7d6dc6044537ae12dd013896fa37b9" HandleID="k8s-pod-network.82b01c65efb30653357b28270817f97a3a7d6dc6044537ae12dd013896fa37b9" Workload="localhost-k8s-csi--node--driver--9vzb4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fcf0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-9vzb4", "timestamp":"2025-09-12 17:47:21.177548766 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:47:21.352299 containerd[1597]: 2025-09-12 17:47:21.177 [INFO][4407] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:47:21.352299 containerd[1597]: 2025-09-12 17:47:21.200 [INFO][4407] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:47:21.352299 containerd[1597]: 2025-09-12 17:47:21.200 [INFO][4407] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:47:21.352299 containerd[1597]: 2025-09-12 17:47:21.284 [INFO][4407] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.82b01c65efb30653357b28270817f97a3a7d6dc6044537ae12dd013896fa37b9" host="localhost" Sep 12 17:47:21.352299 containerd[1597]: 2025-09-12 17:47:21.290 [INFO][4407] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:47:21.352299 containerd[1597]: 2025-09-12 17:47:21.296 [INFO][4407] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:47:21.352299 containerd[1597]: 2025-09-12 17:47:21.299 [INFO][4407] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:47:21.352299 containerd[1597]: 2025-09-12 17:47:21.301 [INFO][4407] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:47:21.352299 containerd[1597]: 2025-09-12 17:47:21.301 [INFO][4407] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.82b01c65efb30653357b28270817f97a3a7d6dc6044537ae12dd013896fa37b9" host="localhost" Sep 12 17:47:21.352299 containerd[1597]: 2025-09-12 17:47:21.303 [INFO][4407] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.82b01c65efb30653357b28270817f97a3a7d6dc6044537ae12dd013896fa37b9 Sep 12 17:47:21.352299 containerd[1597]: 2025-09-12 17:47:21.312 [INFO][4407] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.82b01c65efb30653357b28270817f97a3a7d6dc6044537ae12dd013896fa37b9" host="localhost" Sep 12 17:47:21.352299 containerd[1597]: 2025-09-12 17:47:21.323 [INFO][4407] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.82b01c65efb30653357b28270817f97a3a7d6dc6044537ae12dd013896fa37b9" host="localhost" Sep 12 17:47:21.352299 containerd[1597]: 2025-09-12 17:47:21.324 [INFO][4407] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.82b01c65efb30653357b28270817f97a3a7d6dc6044537ae12dd013896fa37b9" host="localhost" Sep 12 17:47:21.352299 containerd[1597]: 2025-09-12 17:47:21.324 [INFO][4407] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:47:21.352299 containerd[1597]: 2025-09-12 17:47:21.324 [INFO][4407] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="82b01c65efb30653357b28270817f97a3a7d6dc6044537ae12dd013896fa37b9" HandleID="k8s-pod-network.82b01c65efb30653357b28270817f97a3a7d6dc6044537ae12dd013896fa37b9" Workload="localhost-k8s-csi--node--driver--9vzb4-eth0" Sep 12 17:47:21.352896 containerd[1597]: 2025-09-12 17:47:21.330 [INFO][4376] cni-plugin/k8s.go 418: Populated endpoint ContainerID="82b01c65efb30653357b28270817f97a3a7d6dc6044537ae12dd013896fa37b9" Namespace="calico-system" Pod="csi-node-driver-9vzb4" WorkloadEndpoint="localhost-k8s-csi--node--driver--9vzb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9vzb4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9e0c15c0-7f27-4581-beb0-d93114983a4f", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 46, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-9vzb4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia3d9ae3b7f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:47:21.352896 containerd[1597]: 2025-09-12 17:47:21.330 [INFO][4376] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="82b01c65efb30653357b28270817f97a3a7d6dc6044537ae12dd013896fa37b9" Namespace="calico-system" Pod="csi-node-driver-9vzb4" WorkloadEndpoint="localhost-k8s-csi--node--driver--9vzb4-eth0" Sep 12 17:47:21.352896 containerd[1597]: 2025-09-12 17:47:21.330 [INFO][4376] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3d9ae3b7f2 ContainerID="82b01c65efb30653357b28270817f97a3a7d6dc6044537ae12dd013896fa37b9" Namespace="calico-system" Pod="csi-node-driver-9vzb4" WorkloadEndpoint="localhost-k8s-csi--node--driver--9vzb4-eth0" Sep 12 17:47:21.352896 containerd[1597]: 2025-09-12 17:47:21.336 [INFO][4376] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="82b01c65efb30653357b28270817f97a3a7d6dc6044537ae12dd013896fa37b9" Namespace="calico-system" Pod="csi-node-driver-9vzb4" WorkloadEndpoint="localhost-k8s-csi--node--driver--9vzb4-eth0" Sep 12 17:47:21.352896 containerd[1597]: 2025-09-12 17:47:21.337 [INFO][4376] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="82b01c65efb30653357b28270817f97a3a7d6dc6044537ae12dd013896fa37b9" 
Namespace="calico-system" Pod="csi-node-driver-9vzb4" WorkloadEndpoint="localhost-k8s-csi--node--driver--9vzb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9vzb4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9e0c15c0-7f27-4581-beb0-d93114983a4f", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 46, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"82b01c65efb30653357b28270817f97a3a7d6dc6044537ae12dd013896fa37b9", Pod:"csi-node-driver-9vzb4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia3d9ae3b7f2", MAC:"aa:b0:1d:b5:23:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:47:21.352896 containerd[1597]: 2025-09-12 17:47:21.348 [INFO][4376] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="82b01c65efb30653357b28270817f97a3a7d6dc6044537ae12dd013896fa37b9" Namespace="calico-system" Pod="csi-node-driver-9vzb4" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--9vzb4-eth0" Sep 12 17:47:21.376515 containerd[1597]: time="2025-09-12T17:47:21.376464706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q4bpf,Uid:8247fe61-fcea-4419-acd7-3eb1ca909f32,Namespace:kube-system,Attempt:0,} returns sandbox id \"41bd60bf3f3fee306f0b15c3cb836b56cb312dd5e99c2420e6ed6d80db708392\"" Sep 12 17:47:21.377526 kubelet[2754]: E0912 17:47:21.377445 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:21.380675 containerd[1597]: time="2025-09-12T17:47:21.380422959Z" level=info msg="connecting to shim 82b01c65efb30653357b28270817f97a3a7d6dc6044537ae12dd013896fa37b9" address="unix:///run/containerd/s/dae33eedaafe268db24187a6008dc21998e89dbfc9205dfcbc9d739fab6ab123" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:47:21.386833 containerd[1597]: time="2025-09-12T17:47:21.386808109Z" level=info msg="CreateContainer within sandbox \"41bd60bf3f3fee306f0b15c3cb836b56cb312dd5e99c2420e6ed6d80db708392\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:47:21.403921 containerd[1597]: time="2025-09-12T17:47:21.403869508Z" level=info msg="Container 1b29ed3445fd65b3a995edc308812bc828cc040b6095faf4b7d285a35b223ac3: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:47:21.409974 containerd[1597]: time="2025-09-12T17:47:21.409944556Z" level=info msg="CreateContainer within sandbox \"41bd60bf3f3fee306f0b15c3cb836b56cb312dd5e99c2420e6ed6d80db708392\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1b29ed3445fd65b3a995edc308812bc828cc040b6095faf4b7d285a35b223ac3\"" Sep 12 17:47:21.410632 containerd[1597]: time="2025-09-12T17:47:21.410601470Z" level=info msg="StartContainer for \"1b29ed3445fd65b3a995edc308812bc828cc040b6095faf4b7d285a35b223ac3\"" Sep 12 17:47:21.411018 systemd[1]: Started 
cri-containerd-82b01c65efb30653357b28270817f97a3a7d6dc6044537ae12dd013896fa37b9.scope - libcontainer container 82b01c65efb30653357b28270817f97a3a7d6dc6044537ae12dd013896fa37b9. Sep 12 17:47:21.412168 containerd[1597]: time="2025-09-12T17:47:21.412147292Z" level=info msg="connecting to shim 1b29ed3445fd65b3a995edc308812bc828cc040b6095faf4b7d285a35b223ac3" address="unix:///run/containerd/s/b9b90a8c81f8e87b5556d125f3be6cd2e86c34d06e9d6a48c10525b643dc1612" protocol=ttrpc version=3 Sep 12 17:47:21.428651 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:47:21.435052 systemd[1]: Started cri-containerd-1b29ed3445fd65b3a995edc308812bc828cc040b6095faf4b7d285a35b223ac3.scope - libcontainer container 1b29ed3445fd65b3a995edc308812bc828cc040b6095faf4b7d285a35b223ac3. Sep 12 17:47:21.448009 containerd[1597]: time="2025-09-12T17:47:21.447965382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9vzb4,Uid:9e0c15c0-7f27-4581-beb0-d93114983a4f,Namespace:calico-system,Attempt:0,} returns sandbox id \"82b01c65efb30653357b28270817f97a3a7d6dc6044537ae12dd013896fa37b9\"" Sep 12 17:47:21.474791 containerd[1597]: time="2025-09-12T17:47:21.474742324Z" level=info msg="StartContainer for \"1b29ed3445fd65b3a995edc308812bc828cc040b6095faf4b7d285a35b223ac3\" returns successfully" Sep 12 17:47:21.622938 systemd-networkd[1497]: cali74122519f14: Gained IPv6LL Sep 12 17:47:21.907571 containerd[1597]: time="2025-09-12T17:47:21.907502283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-7swhj,Uid:5e30145f-893a-4aac-8f16-6ea98bb69360,Namespace:calico-system,Attempt:0,}" Sep 12 17:47:22.036390 kubelet[2754]: E0912 17:47:22.036348 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:22.198040 systemd-networkd[1497]: vxlan.calico: 
Gained IPv6LL Sep 12 17:47:22.229767 kubelet[2754]: I0912 17:47:22.228863 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-q4bpf" podStartSLOduration=38.228834008 podStartE2EDuration="38.228834008s" podCreationTimestamp="2025-09-12 17:46:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:47:22.22846228 +0000 UTC m=+45.406085253" watchObservedRunningTime="2025-09-12 17:47:22.228834008 +0000 UTC m=+45.406457001" Sep 12 17:47:22.262003 systemd-networkd[1497]: cali07e6d66bc9f: Gained IPv6LL Sep 12 17:47:22.607069 systemd-networkd[1497]: calidd42f4eaac3: Link UP Sep 12 17:47:22.608102 systemd-networkd[1497]: calidd42f4eaac3: Gained carrier Sep 12 17:47:22.668657 containerd[1597]: 2025-09-12 17:47:22.233 [INFO][4570] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--7swhj-eth0 goldmane-54d579b49d- calico-system 5e30145f-893a-4aac-8f16-6ea98bb69360 892 0 2025-09-12 17:46:55 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-7swhj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calidd42f4eaac3 [] [] }} ContainerID="bb12a1d9efb0fb1b7631a7fd673035cea6729133f928c194e33baf16ba84cc65" Namespace="calico-system" Pod="goldmane-54d579b49d-7swhj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7swhj-" Sep 12 17:47:22.668657 containerd[1597]: 2025-09-12 17:47:22.233 [INFO][4570] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bb12a1d9efb0fb1b7631a7fd673035cea6729133f928c194e33baf16ba84cc65" Namespace="calico-system" Pod="goldmane-54d579b49d-7swhj" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7swhj-eth0" Sep 12 17:47:22.668657 containerd[1597]: 2025-09-12 17:47:22.429 [INFO][4586] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bb12a1d9efb0fb1b7631a7fd673035cea6729133f928c194e33baf16ba84cc65" HandleID="k8s-pod-network.bb12a1d9efb0fb1b7631a7fd673035cea6729133f928c194e33baf16ba84cc65" Workload="localhost-k8s-goldmane--54d579b49d--7swhj-eth0" Sep 12 17:47:22.668657 containerd[1597]: 2025-09-12 17:47:22.429 [INFO][4586] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bb12a1d9efb0fb1b7631a7fd673035cea6729133f928c194e33baf16ba84cc65" HandleID="k8s-pod-network.bb12a1d9efb0fb1b7631a7fd673035cea6729133f928c194e33baf16ba84cc65" Workload="localhost-k8s-goldmane--54d579b49d--7swhj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-7swhj", "timestamp":"2025-09-12 17:47:22.428996293 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:47:22.668657 containerd[1597]: 2025-09-12 17:47:22.429 [INFO][4586] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:47:22.668657 containerd[1597]: 2025-09-12 17:47:22.429 [INFO][4586] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:47:22.668657 containerd[1597]: 2025-09-12 17:47:22.429 [INFO][4586] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:47:22.668657 containerd[1597]: 2025-09-12 17:47:22.507 [INFO][4586] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bb12a1d9efb0fb1b7631a7fd673035cea6729133f928c194e33baf16ba84cc65" host="localhost" Sep 12 17:47:22.668657 containerd[1597]: 2025-09-12 17:47:22.542 [INFO][4586] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:47:22.668657 containerd[1597]: 2025-09-12 17:47:22.547 [INFO][4586] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:47:22.668657 containerd[1597]: 2025-09-12 17:47:22.549 [INFO][4586] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:47:22.668657 containerd[1597]: 2025-09-12 17:47:22.551 [INFO][4586] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:47:22.668657 containerd[1597]: 2025-09-12 17:47:22.551 [INFO][4586] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bb12a1d9efb0fb1b7631a7fd673035cea6729133f928c194e33baf16ba84cc65" host="localhost" Sep 12 17:47:22.668657 containerd[1597]: 2025-09-12 17:47:22.553 [INFO][4586] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bb12a1d9efb0fb1b7631a7fd673035cea6729133f928c194e33baf16ba84cc65 Sep 12 17:47:22.668657 containerd[1597]: 2025-09-12 17:47:22.590 [INFO][4586] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bb12a1d9efb0fb1b7631a7fd673035cea6729133f928c194e33baf16ba84cc65" host="localhost" Sep 12 17:47:22.668657 containerd[1597]: 2025-09-12 17:47:22.599 [INFO][4586] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.bb12a1d9efb0fb1b7631a7fd673035cea6729133f928c194e33baf16ba84cc65" host="localhost" Sep 12 17:47:22.668657 containerd[1597]: 2025-09-12 17:47:22.599 [INFO][4586] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.bb12a1d9efb0fb1b7631a7fd673035cea6729133f928c194e33baf16ba84cc65" host="localhost" Sep 12 17:47:22.668657 containerd[1597]: 2025-09-12 17:47:22.599 [INFO][4586] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:47:22.668657 containerd[1597]: 2025-09-12 17:47:22.599 [INFO][4586] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="bb12a1d9efb0fb1b7631a7fd673035cea6729133f928c194e33baf16ba84cc65" HandleID="k8s-pod-network.bb12a1d9efb0fb1b7631a7fd673035cea6729133f928c194e33baf16ba84cc65" Workload="localhost-k8s-goldmane--54d579b49d--7swhj-eth0" Sep 12 17:47:22.669722 containerd[1597]: 2025-09-12 17:47:22.603 [INFO][4570] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bb12a1d9efb0fb1b7631a7fd673035cea6729133f928c194e33baf16ba84cc65" Namespace="calico-system" Pod="goldmane-54d579b49d-7swhj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7swhj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--7swhj-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"5e30145f-893a-4aac-8f16-6ea98bb69360", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 46, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-7swhj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidd42f4eaac3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:47:22.669722 containerd[1597]: 2025-09-12 17:47:22.603 [INFO][4570] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="bb12a1d9efb0fb1b7631a7fd673035cea6729133f928c194e33baf16ba84cc65" Namespace="calico-system" Pod="goldmane-54d579b49d-7swhj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7swhj-eth0" Sep 12 17:47:22.669722 containerd[1597]: 2025-09-12 17:47:22.603 [INFO][4570] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd42f4eaac3 ContainerID="bb12a1d9efb0fb1b7631a7fd673035cea6729133f928c194e33baf16ba84cc65" Namespace="calico-system" Pod="goldmane-54d579b49d-7swhj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7swhj-eth0" Sep 12 17:47:22.669722 containerd[1597]: 2025-09-12 17:47:22.608 [INFO][4570] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bb12a1d9efb0fb1b7631a7fd673035cea6729133f928c194e33baf16ba84cc65" Namespace="calico-system" Pod="goldmane-54d579b49d-7swhj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7swhj-eth0" Sep 12 17:47:22.669722 containerd[1597]: 2025-09-12 17:47:22.609 [INFO][4570] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bb12a1d9efb0fb1b7631a7fd673035cea6729133f928c194e33baf16ba84cc65" Namespace="calico-system" 
Pod="goldmane-54d579b49d-7swhj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7swhj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--7swhj-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"5e30145f-893a-4aac-8f16-6ea98bb69360", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 46, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bb12a1d9efb0fb1b7631a7fd673035cea6729133f928c194e33baf16ba84cc65", Pod:"goldmane-54d579b49d-7swhj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidd42f4eaac3", MAC:"6a:48:3a:d7:83:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:47:22.669722 containerd[1597]: 2025-09-12 17:47:22.664 [INFO][4570] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bb12a1d9efb0fb1b7631a7fd673035cea6729133f928c194e33baf16ba84cc65" Namespace="calico-system" Pod="goldmane-54d579b49d-7swhj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7swhj-eth0" Sep 12 17:47:22.837996 systemd-networkd[1497]: calia3d9ae3b7f2: Gained IPv6LL Sep 
12 17:47:22.959830 containerd[1597]: time="2025-09-12T17:47:22.959763548Z" level=info msg="connecting to shim bb12a1d9efb0fb1b7631a7fd673035cea6729133f928c194e33baf16ba84cc65" address="unix:///run/containerd/s/4b3d35596364e74cd1870fde471cdb0928596bfccdd2158568e38d7983a0719e" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:47:22.988924 systemd[1]: Started cri-containerd-bb12a1d9efb0fb1b7631a7fd673035cea6729133f928c194e33baf16ba84cc65.scope - libcontainer container bb12a1d9efb0fb1b7631a7fd673035cea6729133f928c194e33baf16ba84cc65. Sep 12 17:47:23.005587 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:47:23.040433 kubelet[2754]: E0912 17:47:23.040385 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:23.050214 containerd[1597]: time="2025-09-12T17:47:23.050155054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-7swhj,Uid:5e30145f-893a-4aac-8f16-6ea98bb69360,Namespace:calico-system,Attempt:0,} returns sandbox id \"bb12a1d9efb0fb1b7631a7fd673035cea6729133f928c194e33baf16ba84cc65\"" Sep 12 17:47:23.230274 systemd[1]: Started sshd@7-10.0.0.93:22-10.0.0.1:54120.service - OpenSSH per-connection server daemon (10.0.0.1:54120). Sep 12 17:47:23.306762 sshd[4658]: Accepted publickey for core from 10.0.0.1 port 54120 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:47:23.318671 sshd-session[4658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:47:23.323505 systemd-logind[1585]: New session 8 of user core. Sep 12 17:47:23.333867 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 12 17:47:23.554226 sshd[4661]: Connection closed by 10.0.0.1 port 54120 Sep 12 17:47:23.554510 sshd-session[4658]: pam_unix(sshd:session): session closed for user core Sep 12 17:47:23.558882 systemd[1]: sshd@7-10.0.0.93:22-10.0.0.1:54120.service: Deactivated successfully. Sep 12 17:47:23.561155 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:47:23.562106 systemd-logind[1585]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:47:23.563640 systemd-logind[1585]: Removed session 8. Sep 12 17:47:23.733948 systemd-networkd[1497]: calidd42f4eaac3: Gained IPv6LL Sep 12 17:47:23.907132 kubelet[2754]: E0912 17:47:23.907074 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:23.907618 containerd[1597]: time="2025-09-12T17:47:23.907579996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fhjwx,Uid:bfd2ea8b-8830-43b1-b23b-cc1ff8873bda,Namespace:kube-system,Attempt:0,}" Sep 12 17:47:23.907884 containerd[1597]: time="2025-09-12T17:47:23.907586899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59484845b9-ck2cf,Uid:e24c2194-063a-4aa0-a000-c103a3e25873,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:47:24.047015 kubelet[2754]: E0912 17:47:24.046960 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:24.157857 systemd-networkd[1497]: calib71a4df39a1: Link UP Sep 12 17:47:24.159819 systemd-networkd[1497]: calib71a4df39a1: Gained carrier Sep 12 17:47:24.171541 containerd[1597]: 2025-09-12 17:47:24.065 [INFO][4678] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59484845b9--ck2cf-eth0 calico-apiserver-59484845b9- calico-apiserver 
e24c2194-063a-4aa0-a000-c103a3e25873 891 0 2025-09-12 17:46:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59484845b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59484845b9-ck2cf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib71a4df39a1 [] [] }} ContainerID="8ef2ffdbf429bdc2b1c21686e84e98dcdc690a428934f43b0a4f88aa7242b7bd" Namespace="calico-apiserver" Pod="calico-apiserver-59484845b9-ck2cf" WorkloadEndpoint="localhost-k8s-calico--apiserver--59484845b9--ck2cf-" Sep 12 17:47:24.171541 containerd[1597]: 2025-09-12 17:47:24.066 [INFO][4678] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8ef2ffdbf429bdc2b1c21686e84e98dcdc690a428934f43b0a4f88aa7242b7bd" Namespace="calico-apiserver" Pod="calico-apiserver-59484845b9-ck2cf" WorkloadEndpoint="localhost-k8s-calico--apiserver--59484845b9--ck2cf-eth0" Sep 12 17:47:24.171541 containerd[1597]: 2025-09-12 17:47:24.101 [INFO][4711] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ef2ffdbf429bdc2b1c21686e84e98dcdc690a428934f43b0a4f88aa7242b7bd" HandleID="k8s-pod-network.8ef2ffdbf429bdc2b1c21686e84e98dcdc690a428934f43b0a4f88aa7242b7bd" Workload="localhost-k8s-calico--apiserver--59484845b9--ck2cf-eth0" Sep 12 17:47:24.171541 containerd[1597]: 2025-09-12 17:47:24.102 [INFO][4711] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8ef2ffdbf429bdc2b1c21686e84e98dcdc690a428934f43b0a4f88aa7242b7bd" HandleID="k8s-pod-network.8ef2ffdbf429bdc2b1c21686e84e98dcdc690a428934f43b0a4f88aa7242b7bd" Workload="localhost-k8s-calico--apiserver--59484845b9--ck2cf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138470), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", 
"pod":"calico-apiserver-59484845b9-ck2cf", "timestamp":"2025-09-12 17:47:24.101474685 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:47:24.171541 containerd[1597]: 2025-09-12 17:47:24.102 [INFO][4711] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:47:24.171541 containerd[1597]: 2025-09-12 17:47:24.102 [INFO][4711] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:47:24.171541 containerd[1597]: 2025-09-12 17:47:24.102 [INFO][4711] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:47:24.171541 containerd[1597]: 2025-09-12 17:47:24.114 [INFO][4711] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8ef2ffdbf429bdc2b1c21686e84e98dcdc690a428934f43b0a4f88aa7242b7bd" host="localhost" Sep 12 17:47:24.171541 containerd[1597]: 2025-09-12 17:47:24.120 [INFO][4711] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:47:24.171541 containerd[1597]: 2025-09-12 17:47:24.124 [INFO][4711] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:47:24.171541 containerd[1597]: 2025-09-12 17:47:24.128 [INFO][4711] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:47:24.171541 containerd[1597]: 2025-09-12 17:47:24.131 [INFO][4711] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:47:24.171541 containerd[1597]: 2025-09-12 17:47:24.131 [INFO][4711] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8ef2ffdbf429bdc2b1c21686e84e98dcdc690a428934f43b0a4f88aa7242b7bd" host="localhost" Sep 12 17:47:24.171541 containerd[1597]: 2025-09-12 17:47:24.133 [INFO][4711] ipam/ipam.go 
1764: Creating new handle: k8s-pod-network.8ef2ffdbf429bdc2b1c21686e84e98dcdc690a428934f43b0a4f88aa7242b7bd Sep 12 17:47:24.171541 containerd[1597]: 2025-09-12 17:47:24.137 [INFO][4711] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8ef2ffdbf429bdc2b1c21686e84e98dcdc690a428934f43b0a4f88aa7242b7bd" host="localhost" Sep 12 17:47:24.171541 containerd[1597]: 2025-09-12 17:47:24.149 [INFO][4711] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.8ef2ffdbf429bdc2b1c21686e84e98dcdc690a428934f43b0a4f88aa7242b7bd" host="localhost" Sep 12 17:47:24.171541 containerd[1597]: 2025-09-12 17:47:24.149 [INFO][4711] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8ef2ffdbf429bdc2b1c21686e84e98dcdc690a428934f43b0a4f88aa7242b7bd" host="localhost" Sep 12 17:47:24.171541 containerd[1597]: 2025-09-12 17:47:24.149 [INFO][4711] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:47:24.171541 containerd[1597]: 2025-09-12 17:47:24.149 [INFO][4711] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8ef2ffdbf429bdc2b1c21686e84e98dcdc690a428934f43b0a4f88aa7242b7bd" HandleID="k8s-pod-network.8ef2ffdbf429bdc2b1c21686e84e98dcdc690a428934f43b0a4f88aa7242b7bd" Workload="localhost-k8s-calico--apiserver--59484845b9--ck2cf-eth0" Sep 12 17:47:24.172468 containerd[1597]: 2025-09-12 17:47:24.152 [INFO][4678] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8ef2ffdbf429bdc2b1c21686e84e98dcdc690a428934f43b0a4f88aa7242b7bd" Namespace="calico-apiserver" Pod="calico-apiserver-59484845b9-ck2cf" WorkloadEndpoint="localhost-k8s-calico--apiserver--59484845b9--ck2cf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59484845b9--ck2cf-eth0", GenerateName:"calico-apiserver-59484845b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"e24c2194-063a-4aa0-a000-c103a3e25873", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 46, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59484845b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59484845b9-ck2cf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib71a4df39a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:47:24.172468 containerd[1597]: 2025-09-12 17:47:24.153 [INFO][4678] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8ef2ffdbf429bdc2b1c21686e84e98dcdc690a428934f43b0a4f88aa7242b7bd" Namespace="calico-apiserver" Pod="calico-apiserver-59484845b9-ck2cf" WorkloadEndpoint="localhost-k8s-calico--apiserver--59484845b9--ck2cf-eth0" Sep 12 17:47:24.172468 containerd[1597]: 2025-09-12 17:47:24.153 [INFO][4678] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib71a4df39a1 ContainerID="8ef2ffdbf429bdc2b1c21686e84e98dcdc690a428934f43b0a4f88aa7242b7bd" Namespace="calico-apiserver" Pod="calico-apiserver-59484845b9-ck2cf" WorkloadEndpoint="localhost-k8s-calico--apiserver--59484845b9--ck2cf-eth0" Sep 12 17:47:24.172468 containerd[1597]: 2025-09-12 17:47:24.160 [INFO][4678] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ef2ffdbf429bdc2b1c21686e84e98dcdc690a428934f43b0a4f88aa7242b7bd" Namespace="calico-apiserver" Pod="calico-apiserver-59484845b9-ck2cf" WorkloadEndpoint="localhost-k8s-calico--apiserver--59484845b9--ck2cf-eth0" Sep 12 17:47:24.172468 containerd[1597]: 2025-09-12 17:47:24.160 [INFO][4678] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8ef2ffdbf429bdc2b1c21686e84e98dcdc690a428934f43b0a4f88aa7242b7bd" Namespace="calico-apiserver" Pod="calico-apiserver-59484845b9-ck2cf" WorkloadEndpoint="localhost-k8s-calico--apiserver--59484845b9--ck2cf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59484845b9--ck2cf-eth0", 
GenerateName:"calico-apiserver-59484845b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"e24c2194-063a-4aa0-a000-c103a3e25873", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 46, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59484845b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ef2ffdbf429bdc2b1c21686e84e98dcdc690a428934f43b0a4f88aa7242b7bd", Pod:"calico-apiserver-59484845b9-ck2cf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib71a4df39a1", MAC:"42:45:29:76:69:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:47:24.172468 containerd[1597]: 2025-09-12 17:47:24.168 [INFO][4678] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8ef2ffdbf429bdc2b1c21686e84e98dcdc690a428934f43b0a4f88aa7242b7bd" Namespace="calico-apiserver" Pod="calico-apiserver-59484845b9-ck2cf" WorkloadEndpoint="localhost-k8s-calico--apiserver--59484845b9--ck2cf-eth0" Sep 12 17:47:24.173304 containerd[1597]: time="2025-09-12T17:47:24.173275469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:24.174629 containerd[1597]: 
time="2025-09-12T17:47:24.174594956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 17:47:24.175945 containerd[1597]: time="2025-09-12T17:47:24.175916747Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:24.179106 containerd[1597]: time="2025-09-12T17:47:24.179076860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:24.179799 containerd[1597]: time="2025-09-12T17:47:24.179773177Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 3.328812996s" Sep 12 17:47:24.179862 containerd[1597]: time="2025-09-12T17:47:24.179799487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 17:47:24.180759 containerd[1597]: time="2025-09-12T17:47:24.180703794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 17:47:24.186012 containerd[1597]: time="2025-09-12T17:47:24.185944482Z" level=info msg="CreateContainer within sandbox \"b5129859b8d634a2eec26d6f1adee1d0e0c76f0a39beb80fd4bc1ca09ee4de00\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 17:47:24.215462 containerd[1597]: time="2025-09-12T17:47:24.212872054Z" level=info msg="Container 96987281c14e0e4774442598ac4d3af62b4a79211be73734178c249c1f2f7c08: CDI devices from CRI Config.CDIDevices: 
[]" Sep 12 17:47:24.216983 containerd[1597]: time="2025-09-12T17:47:24.216932668Z" level=info msg="connecting to shim 8ef2ffdbf429bdc2b1c21686e84e98dcdc690a428934f43b0a4f88aa7242b7bd" address="unix:///run/containerd/s/fa9f695c09769fe268e48314262230702984b0c97825f7729b24720beca51cf2" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:47:24.226254 containerd[1597]: time="2025-09-12T17:47:24.226124564Z" level=info msg="CreateContainer within sandbox \"b5129859b8d634a2eec26d6f1adee1d0e0c76f0a39beb80fd4bc1ca09ee4de00\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"96987281c14e0e4774442598ac4d3af62b4a79211be73734178c249c1f2f7c08\"" Sep 12 17:47:24.226921 containerd[1597]: time="2025-09-12T17:47:24.226881505Z" level=info msg="StartContainer for \"96987281c14e0e4774442598ac4d3af62b4a79211be73734178c249c1f2f7c08\"" Sep 12 17:47:24.228284 containerd[1597]: time="2025-09-12T17:47:24.228184030Z" level=info msg="connecting to shim 96987281c14e0e4774442598ac4d3af62b4a79211be73734178c249c1f2f7c08" address="unix:///run/containerd/s/9212f2c346873ed277ae1d38062f3bcefe23bbf3797096a22815975acbd2881a" protocol=ttrpc version=3 Sep 12 17:47:24.246696 systemd[1]: Started cri-containerd-8ef2ffdbf429bdc2b1c21686e84e98dcdc690a428934f43b0a4f88aa7242b7bd.scope - libcontainer container 8ef2ffdbf429bdc2b1c21686e84e98dcdc690a428934f43b0a4f88aa7242b7bd. Sep 12 17:47:24.252501 systemd[1]: Started cri-containerd-96987281c14e0e4774442598ac4d3af62b4a79211be73734178c249c1f2f7c08.scope - libcontainer container 96987281c14e0e4774442598ac4d3af62b4a79211be73734178c249c1f2f7c08. 
Sep 12 17:47:24.264020 systemd-networkd[1497]: calid4868756c20: Link UP Sep 12 17:47:24.265082 systemd-networkd[1497]: calid4868756c20: Gained carrier Sep 12 17:47:24.269357 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:47:24.281758 containerd[1597]: 2025-09-12 17:47:24.070 [INFO][4697] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--fhjwx-eth0 coredns-674b8bbfcf- kube-system bfd2ea8b-8830-43b1-b23b-cc1ff8873bda 894 0 2025-09-12 17:46:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-fhjwx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid4868756c20 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4dba28fba3682cb70f145e8e13291d2c55ce04899ec6650226df782de3e3b50c" Namespace="kube-system" Pod="coredns-674b8bbfcf-fhjwx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fhjwx-" Sep 12 17:47:24.281758 containerd[1597]: 2025-09-12 17:47:24.070 [INFO][4697] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4dba28fba3682cb70f145e8e13291d2c55ce04899ec6650226df782de3e3b50c" Namespace="kube-system" Pod="coredns-674b8bbfcf-fhjwx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fhjwx-eth0" Sep 12 17:47:24.281758 containerd[1597]: 2025-09-12 17:47:24.112 [INFO][4712] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4dba28fba3682cb70f145e8e13291d2c55ce04899ec6650226df782de3e3b50c" HandleID="k8s-pod-network.4dba28fba3682cb70f145e8e13291d2c55ce04899ec6650226df782de3e3b50c" Workload="localhost-k8s-coredns--674b8bbfcf--fhjwx-eth0" Sep 12 17:47:24.281758 containerd[1597]: 2025-09-12 17:47:24.112 [INFO][4712] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="4dba28fba3682cb70f145e8e13291d2c55ce04899ec6650226df782de3e3b50c" HandleID="k8s-pod-network.4dba28fba3682cb70f145e8e13291d2c55ce04899ec6650226df782de3e3b50c" Workload="localhost-k8s-coredns--674b8bbfcf--fhjwx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b16a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-fhjwx", "timestamp":"2025-09-12 17:47:24.112059687 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:47:24.281758 containerd[1597]: 2025-09-12 17:47:24.112 [INFO][4712] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:47:24.281758 containerd[1597]: 2025-09-12 17:47:24.149 [INFO][4712] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:47:24.281758 containerd[1597]: 2025-09-12 17:47:24.150 [INFO][4712] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:47:24.281758 containerd[1597]: 2025-09-12 17:47:24.218 [INFO][4712] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4dba28fba3682cb70f145e8e13291d2c55ce04899ec6650226df782de3e3b50c" host="localhost" Sep 12 17:47:24.281758 containerd[1597]: 2025-09-12 17:47:24.223 [INFO][4712] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:47:24.281758 containerd[1597]: 2025-09-12 17:47:24.227 [INFO][4712] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:47:24.281758 containerd[1597]: 2025-09-12 17:47:24.230 [INFO][4712] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:47:24.281758 containerd[1597]: 2025-09-12 17:47:24.234 [INFO][4712] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Sep 12 17:47:24.281758 containerd[1597]: 2025-09-12 17:47:24.234 [INFO][4712] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4dba28fba3682cb70f145e8e13291d2c55ce04899ec6650226df782de3e3b50c" host="localhost" Sep 12 17:47:24.281758 containerd[1597]: 2025-09-12 17:47:24.236 [INFO][4712] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4dba28fba3682cb70f145e8e13291d2c55ce04899ec6650226df782de3e3b50c Sep 12 17:47:24.281758 containerd[1597]: 2025-09-12 17:47:24.243 [INFO][4712] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4dba28fba3682cb70f145e8e13291d2c55ce04899ec6650226df782de3e3b50c" host="localhost" Sep 12 17:47:24.281758 containerd[1597]: 2025-09-12 17:47:24.255 [INFO][4712] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.4dba28fba3682cb70f145e8e13291d2c55ce04899ec6650226df782de3e3b50c" host="localhost" Sep 12 17:47:24.281758 containerd[1597]: 2025-09-12 17:47:24.256 [INFO][4712] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.4dba28fba3682cb70f145e8e13291d2c55ce04899ec6650226df782de3e3b50c" host="localhost" Sep 12 17:47:24.281758 containerd[1597]: 2025-09-12 17:47:24.256 [INFO][4712] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:47:24.281758 containerd[1597]: 2025-09-12 17:47:24.256 [INFO][4712] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="4dba28fba3682cb70f145e8e13291d2c55ce04899ec6650226df782de3e3b50c" HandleID="k8s-pod-network.4dba28fba3682cb70f145e8e13291d2c55ce04899ec6650226df782de3e3b50c" Workload="localhost-k8s-coredns--674b8bbfcf--fhjwx-eth0" Sep 12 17:47:24.282381 containerd[1597]: 2025-09-12 17:47:24.260 [INFO][4697] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4dba28fba3682cb70f145e8e13291d2c55ce04899ec6650226df782de3e3b50c" Namespace="kube-system" Pod="coredns-674b8bbfcf-fhjwx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fhjwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fhjwx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bfd2ea8b-8830-43b1-b23b-cc1ff8873bda", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 46, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-fhjwx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid4868756c20", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:47:24.282381 containerd[1597]: 2025-09-12 17:47:24.260 [INFO][4697] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="4dba28fba3682cb70f145e8e13291d2c55ce04899ec6650226df782de3e3b50c" Namespace="kube-system" Pod="coredns-674b8bbfcf-fhjwx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fhjwx-eth0" Sep 12 17:47:24.282381 containerd[1597]: 2025-09-12 17:47:24.260 [INFO][4697] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid4868756c20 ContainerID="4dba28fba3682cb70f145e8e13291d2c55ce04899ec6650226df782de3e3b50c" Namespace="kube-system" Pod="coredns-674b8bbfcf-fhjwx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fhjwx-eth0" Sep 12 17:47:24.282381 containerd[1597]: 2025-09-12 17:47:24.265 [INFO][4697] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4dba28fba3682cb70f145e8e13291d2c55ce04899ec6650226df782de3e3b50c" Namespace="kube-system" Pod="coredns-674b8bbfcf-fhjwx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fhjwx-eth0" Sep 12 17:47:24.282381 containerd[1597]: 2025-09-12 17:47:24.266 [INFO][4697] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4dba28fba3682cb70f145e8e13291d2c55ce04899ec6650226df782de3e3b50c" Namespace="kube-system" Pod="coredns-674b8bbfcf-fhjwx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fhjwx-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fhjwx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bfd2ea8b-8830-43b1-b23b-cc1ff8873bda", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 46, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4dba28fba3682cb70f145e8e13291d2c55ce04899ec6650226df782de3e3b50c", Pod:"coredns-674b8bbfcf-fhjwx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid4868756c20", MAC:"86:6d:b9:81:1d:38", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:47:24.282381 containerd[1597]: 2025-09-12 17:47:24.278 [INFO][4697] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="4dba28fba3682cb70f145e8e13291d2c55ce04899ec6650226df782de3e3b50c" Namespace="kube-system" Pod="coredns-674b8bbfcf-fhjwx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fhjwx-eth0" Sep 12 17:47:24.313339 containerd[1597]: time="2025-09-12T17:47:24.313293629Z" level=info msg="connecting to shim 4dba28fba3682cb70f145e8e13291d2c55ce04899ec6650226df782de3e3b50c" address="unix:///run/containerd/s/75a0b24a8484bb6d58ed4c568308d5d36c2db240f0bf201d84a9c4a736733a2d" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:47:24.319812 containerd[1597]: time="2025-09-12T17:47:24.319541057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59484845b9-ck2cf,Uid:e24c2194-063a-4aa0-a000-c103a3e25873,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8ef2ffdbf429bdc2b1c21686e84e98dcdc690a428934f43b0a4f88aa7242b7bd\"" Sep 12 17:47:24.337158 containerd[1597]: time="2025-09-12T17:47:24.337005735Z" level=info msg="StartContainer for \"96987281c14e0e4774442598ac4d3af62b4a79211be73734178c249c1f2f7c08\" returns successfully" Sep 12 17:47:24.345126 systemd[1]: Started cri-containerd-4dba28fba3682cb70f145e8e13291d2c55ce04899ec6650226df782de3e3b50c.scope - libcontainer container 4dba28fba3682cb70f145e8e13291d2c55ce04899ec6650226df782de3e3b50c. 
Sep 12 17:47:24.359338 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:47:24.394682 containerd[1597]: time="2025-09-12T17:47:24.394621108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fhjwx,Uid:bfd2ea8b-8830-43b1-b23b-cc1ff8873bda,Namespace:kube-system,Attempt:0,} returns sandbox id \"4dba28fba3682cb70f145e8e13291d2c55ce04899ec6650226df782de3e3b50c\"" Sep 12 17:47:24.395622 kubelet[2754]: E0912 17:47:24.395582 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:24.401348 containerd[1597]: time="2025-09-12T17:47:24.401272935Z" level=info msg="CreateContainer within sandbox \"4dba28fba3682cb70f145e8e13291d2c55ce04899ec6650226df782de3e3b50c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:47:24.411716 containerd[1597]: time="2025-09-12T17:47:24.411622404Z" level=info msg="Container faaac15af647ebe464d879c32cf22d0b18f56f5b6d8b1242e0fe70d552389912: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:47:24.420066 containerd[1597]: time="2025-09-12T17:47:24.420003218Z" level=info msg="CreateContainer within sandbox \"4dba28fba3682cb70f145e8e13291d2c55ce04899ec6650226df782de3e3b50c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"faaac15af647ebe464d879c32cf22d0b18f56f5b6d8b1242e0fe70d552389912\"" Sep 12 17:47:24.420608 containerd[1597]: time="2025-09-12T17:47:24.420570734Z" level=info msg="StartContainer for \"faaac15af647ebe464d879c32cf22d0b18f56f5b6d8b1242e0fe70d552389912\"" Sep 12 17:47:24.421937 containerd[1597]: time="2025-09-12T17:47:24.421901882Z" level=info msg="connecting to shim faaac15af647ebe464d879c32cf22d0b18f56f5b6d8b1242e0fe70d552389912" address="unix:///run/containerd/s/75a0b24a8484bb6d58ed4c568308d5d36c2db240f0bf201d84a9c4a736733a2d" protocol=ttrpc version=3 
Sep 12 17:47:24.447924 systemd[1]: Started cri-containerd-faaac15af647ebe464d879c32cf22d0b18f56f5b6d8b1242e0fe70d552389912.scope - libcontainer container faaac15af647ebe464d879c32cf22d0b18f56f5b6d8b1242e0fe70d552389912. Sep 12 17:47:24.490030 containerd[1597]: time="2025-09-12T17:47:24.489977593Z" level=info msg="StartContainer for \"faaac15af647ebe464d879c32cf22d0b18f56f5b6d8b1242e0fe70d552389912\" returns successfully" Sep 12 17:47:24.906986 containerd[1597]: time="2025-09-12T17:47:24.906928657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59484845b9-r62pg,Uid:820b7569-1a21-437a-9be7-3d6ea66544ce,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:47:24.907169 containerd[1597]: time="2025-09-12T17:47:24.907057890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7996b8f8fb-fs4k5,Uid:b604b9a3-e2b4-42d6-a4ad-7a0abbdc8585,Namespace:calico-system,Attempt:0,}" Sep 12 17:47:25.049466 kubelet[2754]: E0912 17:47:25.049407 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:25.404758 kubelet[2754]: I0912 17:47:25.404534 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fhjwx" podStartSLOduration=41.404512488 podStartE2EDuration="41.404512488s" podCreationTimestamp="2025-09-12 17:46:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:47:25.404397592 +0000 UTC m=+48.582020565" watchObservedRunningTime="2025-09-12 17:47:25.404512488 +0000 UTC m=+48.582135461" Sep 12 17:47:25.909923 systemd-networkd[1497]: calid4868756c20: Gained IPv6LL Sep 12 17:47:26.054834 kubelet[2754]: E0912 17:47:26.054797 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:26.101944 systemd-networkd[1497]: calib71a4df39a1: Gained IPv6LL Sep 12 17:47:26.222248 systemd-networkd[1497]: calif35fa42687e: Link UP Sep 12 17:47:26.223651 systemd-networkd[1497]: calif35fa42687e: Gained carrier Sep 12 17:47:26.262890 containerd[1597]: 2025-09-12 17:47:25.856 [INFO][4905] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59484845b9--r62pg-eth0 calico-apiserver-59484845b9- calico-apiserver 820b7569-1a21-437a-9be7-3d6ea66544ce 890 0 2025-09-12 17:46:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59484845b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59484845b9-r62pg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif35fa42687e [] [] }} ContainerID="c43cf937e4b3f0ae0256ce5aa1fda290d049d941bc53a303f59527d6169993c5" Namespace="calico-apiserver" Pod="calico-apiserver-59484845b9-r62pg" WorkloadEndpoint="localhost-k8s-calico--apiserver--59484845b9--r62pg-" Sep 12 17:47:26.262890 containerd[1597]: 2025-09-12 17:47:25.856 [INFO][4905] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c43cf937e4b3f0ae0256ce5aa1fda290d049d941bc53a303f59527d6169993c5" Namespace="calico-apiserver" Pod="calico-apiserver-59484845b9-r62pg" WorkloadEndpoint="localhost-k8s-calico--apiserver--59484845b9--r62pg-eth0" Sep 12 17:47:26.262890 containerd[1597]: 2025-09-12 17:47:25.931 [INFO][4940] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c43cf937e4b3f0ae0256ce5aa1fda290d049d941bc53a303f59527d6169993c5" HandleID="k8s-pod-network.c43cf937e4b3f0ae0256ce5aa1fda290d049d941bc53a303f59527d6169993c5" 
Workload="localhost-k8s-calico--apiserver--59484845b9--r62pg-eth0" Sep 12 17:47:26.262890 containerd[1597]: 2025-09-12 17:47:25.931 [INFO][4940] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c43cf937e4b3f0ae0256ce5aa1fda290d049d941bc53a303f59527d6169993c5" HandleID="k8s-pod-network.c43cf937e4b3f0ae0256ce5aa1fda290d049d941bc53a303f59527d6169993c5" Workload="localhost-k8s-calico--apiserver--59484845b9--r62pg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035efe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-59484845b9-r62pg", "timestamp":"2025-09-12 17:47:25.931130828 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:47:26.262890 containerd[1597]: 2025-09-12 17:47:25.931 [INFO][4940] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:47:26.262890 containerd[1597]: 2025-09-12 17:47:25.931 [INFO][4940] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:47:26.262890 containerd[1597]: 2025-09-12 17:47:25.931 [INFO][4940] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:47:26.262890 containerd[1597]: 2025-09-12 17:47:25.967 [INFO][4940] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c43cf937e4b3f0ae0256ce5aa1fda290d049d941bc53a303f59527d6169993c5" host="localhost" Sep 12 17:47:26.262890 containerd[1597]: 2025-09-12 17:47:25.975 [INFO][4940] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:47:26.262890 containerd[1597]: 2025-09-12 17:47:25.984 [INFO][4940] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:47:26.262890 containerd[1597]: 2025-09-12 17:47:25.989 [INFO][4940] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:47:26.262890 containerd[1597]: 2025-09-12 17:47:25.996 [INFO][4940] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:47:26.262890 containerd[1597]: 2025-09-12 17:47:25.996 [INFO][4940] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c43cf937e4b3f0ae0256ce5aa1fda290d049d941bc53a303f59527d6169993c5" host="localhost" Sep 12 17:47:26.262890 containerd[1597]: 2025-09-12 17:47:26.006 [INFO][4940] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c43cf937e4b3f0ae0256ce5aa1fda290d049d941bc53a303f59527d6169993c5 Sep 12 17:47:26.262890 containerd[1597]: 2025-09-12 17:47:26.172 [INFO][4940] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c43cf937e4b3f0ae0256ce5aa1fda290d049d941bc53a303f59527d6169993c5" host="localhost" Sep 12 17:47:26.262890 containerd[1597]: 2025-09-12 17:47:26.214 [INFO][4940] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.c43cf937e4b3f0ae0256ce5aa1fda290d049d941bc53a303f59527d6169993c5" host="localhost" Sep 12 17:47:26.262890 containerd[1597]: 2025-09-12 17:47:26.214 [INFO][4940] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.c43cf937e4b3f0ae0256ce5aa1fda290d049d941bc53a303f59527d6169993c5" host="localhost" Sep 12 17:47:26.262890 containerd[1597]: 2025-09-12 17:47:26.214 [INFO][4940] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:47:26.262890 containerd[1597]: 2025-09-12 17:47:26.214 [INFO][4940] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="c43cf937e4b3f0ae0256ce5aa1fda290d049d941bc53a303f59527d6169993c5" HandleID="k8s-pod-network.c43cf937e4b3f0ae0256ce5aa1fda290d049d941bc53a303f59527d6169993c5" Workload="localhost-k8s-calico--apiserver--59484845b9--r62pg-eth0" Sep 12 17:47:26.264514 containerd[1597]: 2025-09-12 17:47:26.218 [INFO][4905] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c43cf937e4b3f0ae0256ce5aa1fda290d049d941bc53a303f59527d6169993c5" Namespace="calico-apiserver" Pod="calico-apiserver-59484845b9-r62pg" WorkloadEndpoint="localhost-k8s-calico--apiserver--59484845b9--r62pg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59484845b9--r62pg-eth0", GenerateName:"calico-apiserver-59484845b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"820b7569-1a21-437a-9be7-3d6ea66544ce", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 46, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59484845b9", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59484845b9-r62pg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif35fa42687e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:47:26.264514 containerd[1597]: 2025-09-12 17:47:26.218 [INFO][4905] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="c43cf937e4b3f0ae0256ce5aa1fda290d049d941bc53a303f59527d6169993c5" Namespace="calico-apiserver" Pod="calico-apiserver-59484845b9-r62pg" WorkloadEndpoint="localhost-k8s-calico--apiserver--59484845b9--r62pg-eth0" Sep 12 17:47:26.264514 containerd[1597]: 2025-09-12 17:47:26.218 [INFO][4905] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif35fa42687e ContainerID="c43cf937e4b3f0ae0256ce5aa1fda290d049d941bc53a303f59527d6169993c5" Namespace="calico-apiserver" Pod="calico-apiserver-59484845b9-r62pg" WorkloadEndpoint="localhost-k8s-calico--apiserver--59484845b9--r62pg-eth0" Sep 12 17:47:26.264514 containerd[1597]: 2025-09-12 17:47:26.223 [INFO][4905] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c43cf937e4b3f0ae0256ce5aa1fda290d049d941bc53a303f59527d6169993c5" Namespace="calico-apiserver" Pod="calico-apiserver-59484845b9-r62pg" WorkloadEndpoint="localhost-k8s-calico--apiserver--59484845b9--r62pg-eth0" Sep 12 17:47:26.264514 containerd[1597]: 2025-09-12 17:47:26.226 [INFO][4905] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="c43cf937e4b3f0ae0256ce5aa1fda290d049d941bc53a303f59527d6169993c5" Namespace="calico-apiserver" Pod="calico-apiserver-59484845b9-r62pg" WorkloadEndpoint="localhost-k8s-calico--apiserver--59484845b9--r62pg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59484845b9--r62pg-eth0", GenerateName:"calico-apiserver-59484845b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"820b7569-1a21-437a-9be7-3d6ea66544ce", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 46, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59484845b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c43cf937e4b3f0ae0256ce5aa1fda290d049d941bc53a303f59527d6169993c5", Pod:"calico-apiserver-59484845b9-r62pg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif35fa42687e", MAC:"d6:49:9a:90:a0:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:47:26.264514 containerd[1597]: 2025-09-12 17:47:26.258 [INFO][4905] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="c43cf937e4b3f0ae0256ce5aa1fda290d049d941bc53a303f59527d6169993c5" Namespace="calico-apiserver" Pod="calico-apiserver-59484845b9-r62pg" WorkloadEndpoint="localhost-k8s-calico--apiserver--59484845b9--r62pg-eth0" Sep 12 17:47:26.294565 systemd-networkd[1497]: cali7f0e3ea73f9: Link UP Sep 12 17:47:26.295646 systemd-networkd[1497]: cali7f0e3ea73f9: Gained carrier Sep 12 17:47:26.309240 containerd[1597]: time="2025-09-12T17:47:26.309155948Z" level=info msg="connecting to shim c43cf937e4b3f0ae0256ce5aa1fda290d049d941bc53a303f59527d6169993c5" address="unix:///run/containerd/s/52d06808c2dd824ae211a699ad46a77047205932b5bad398857f575fdd503bc3" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:47:26.329407 containerd[1597]: 2025-09-12 17:47:25.890 [INFO][4922] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7996b8f8fb--fs4k5-eth0 calico-kube-controllers-7996b8f8fb- calico-system b604b9a3-e2b4-42d6-a4ad-7a0abbdc8585 895 0 2025-09-12 17:46:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7996b8f8fb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7996b8f8fb-fs4k5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7f0e3ea73f9 [] [] }} ContainerID="79867c1623e6a0150e82f4e1473287ffa237b58b13d6552baacdfea60ad1b6c0" Namespace="calico-system" Pod="calico-kube-controllers-7996b8f8fb-fs4k5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7996b8f8fb--fs4k5-" Sep 12 17:47:26.329407 containerd[1597]: 2025-09-12 17:47:25.890 [INFO][4922] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="79867c1623e6a0150e82f4e1473287ffa237b58b13d6552baacdfea60ad1b6c0" Namespace="calico-system" 
Pod="calico-kube-controllers-7996b8f8fb-fs4k5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7996b8f8fb--fs4k5-eth0" Sep 12 17:47:26.329407 containerd[1597]: 2025-09-12 17:47:25.999 [INFO][4954] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79867c1623e6a0150e82f4e1473287ffa237b58b13d6552baacdfea60ad1b6c0" HandleID="k8s-pod-network.79867c1623e6a0150e82f4e1473287ffa237b58b13d6552baacdfea60ad1b6c0" Workload="localhost-k8s-calico--kube--controllers--7996b8f8fb--fs4k5-eth0" Sep 12 17:47:26.329407 containerd[1597]: 2025-09-12 17:47:26.000 [INFO][4954] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="79867c1623e6a0150e82f4e1473287ffa237b58b13d6552baacdfea60ad1b6c0" HandleID="k8s-pod-network.79867c1623e6a0150e82f4e1473287ffa237b58b13d6552baacdfea60ad1b6c0" Workload="localhost-k8s-calico--kube--controllers--7996b8f8fb--fs4k5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c1610), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7996b8f8fb-fs4k5", "timestamp":"2025-09-12 17:47:25.999832012 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:47:26.329407 containerd[1597]: 2025-09-12 17:47:26.000 [INFO][4954] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:47:26.329407 containerd[1597]: 2025-09-12 17:47:26.214 [INFO][4954] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:47:26.329407 containerd[1597]: 2025-09-12 17:47:26.214 [INFO][4954] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:47:26.329407 containerd[1597]: 2025-09-12 17:47:26.223 [INFO][4954] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.79867c1623e6a0150e82f4e1473287ffa237b58b13d6552baacdfea60ad1b6c0" host="localhost" Sep 12 17:47:26.329407 containerd[1597]: 2025-09-12 17:47:26.233 [INFO][4954] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:47:26.329407 containerd[1597]: 2025-09-12 17:47:26.262 [INFO][4954] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:47:26.329407 containerd[1597]: 2025-09-12 17:47:26.266 [INFO][4954] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:47:26.329407 containerd[1597]: 2025-09-12 17:47:26.270 [INFO][4954] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:47:26.329407 containerd[1597]: 2025-09-12 17:47:26.270 [INFO][4954] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.79867c1623e6a0150e82f4e1473287ffa237b58b13d6552baacdfea60ad1b6c0" host="localhost" Sep 12 17:47:26.329407 containerd[1597]: 2025-09-12 17:47:26.273 [INFO][4954] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.79867c1623e6a0150e82f4e1473287ffa237b58b13d6552baacdfea60ad1b6c0 Sep 12 17:47:26.329407 containerd[1597]: 2025-09-12 17:47:26.278 [INFO][4954] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.79867c1623e6a0150e82f4e1473287ffa237b58b13d6552baacdfea60ad1b6c0" host="localhost" Sep 12 17:47:26.329407 containerd[1597]: 2025-09-12 17:47:26.283 [INFO][4954] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.79867c1623e6a0150e82f4e1473287ffa237b58b13d6552baacdfea60ad1b6c0" host="localhost" Sep 12 17:47:26.329407 containerd[1597]: 2025-09-12 17:47:26.283 [INFO][4954] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.79867c1623e6a0150e82f4e1473287ffa237b58b13d6552baacdfea60ad1b6c0" host="localhost" Sep 12 17:47:26.329407 containerd[1597]: 2025-09-12 17:47:26.283 [INFO][4954] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:47:26.329407 containerd[1597]: 2025-09-12 17:47:26.283 [INFO][4954] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="79867c1623e6a0150e82f4e1473287ffa237b58b13d6552baacdfea60ad1b6c0" HandleID="k8s-pod-network.79867c1623e6a0150e82f4e1473287ffa237b58b13d6552baacdfea60ad1b6c0" Workload="localhost-k8s-calico--kube--controllers--7996b8f8fb--fs4k5-eth0" Sep 12 17:47:26.330642 containerd[1597]: 2025-09-12 17:47:26.289 [INFO][4922] cni-plugin/k8s.go 418: Populated endpoint ContainerID="79867c1623e6a0150e82f4e1473287ffa237b58b13d6552baacdfea60ad1b6c0" Namespace="calico-system" Pod="calico-kube-controllers-7996b8f8fb-fs4k5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7996b8f8fb--fs4k5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7996b8f8fb--fs4k5-eth0", GenerateName:"calico-kube-controllers-7996b8f8fb-", Namespace:"calico-system", SelfLink:"", UID:"b604b9a3-e2b4-42d6-a4ad-7a0abbdc8585", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 46, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7996b8f8fb", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7996b8f8fb-fs4k5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7f0e3ea73f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:47:26.330642 containerd[1597]: 2025-09-12 17:47:26.290 [INFO][4922] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="79867c1623e6a0150e82f4e1473287ffa237b58b13d6552baacdfea60ad1b6c0" Namespace="calico-system" Pod="calico-kube-controllers-7996b8f8fb-fs4k5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7996b8f8fb--fs4k5-eth0" Sep 12 17:47:26.330642 containerd[1597]: 2025-09-12 17:47:26.290 [INFO][4922] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7f0e3ea73f9 ContainerID="79867c1623e6a0150e82f4e1473287ffa237b58b13d6552baacdfea60ad1b6c0" Namespace="calico-system" Pod="calico-kube-controllers-7996b8f8fb-fs4k5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7996b8f8fb--fs4k5-eth0" Sep 12 17:47:26.330642 containerd[1597]: 2025-09-12 17:47:26.297 [INFO][4922] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="79867c1623e6a0150e82f4e1473287ffa237b58b13d6552baacdfea60ad1b6c0" Namespace="calico-system" Pod="calico-kube-controllers-7996b8f8fb-fs4k5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7996b8f8fb--fs4k5-eth0" Sep 12 17:47:26.330642 containerd[1597]: 
2025-09-12 17:47:26.299 [INFO][4922] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="79867c1623e6a0150e82f4e1473287ffa237b58b13d6552baacdfea60ad1b6c0" Namespace="calico-system" Pod="calico-kube-controllers-7996b8f8fb-fs4k5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7996b8f8fb--fs4k5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7996b8f8fb--fs4k5-eth0", GenerateName:"calico-kube-controllers-7996b8f8fb-", Namespace:"calico-system", SelfLink:"", UID:"b604b9a3-e2b4-42d6-a4ad-7a0abbdc8585", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 46, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7996b8f8fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79867c1623e6a0150e82f4e1473287ffa237b58b13d6552baacdfea60ad1b6c0", Pod:"calico-kube-controllers-7996b8f8fb-fs4k5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7f0e3ea73f9", MAC:"26:93:27:1d:c1:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:47:26.330642 
containerd[1597]: 2025-09-12 17:47:26.321 [INFO][4922] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="79867c1623e6a0150e82f4e1473287ffa237b58b13d6552baacdfea60ad1b6c0" Namespace="calico-system" Pod="calico-kube-controllers-7996b8f8fb-fs4k5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7996b8f8fb--fs4k5-eth0" Sep 12 17:47:26.344232 systemd[1]: Started cri-containerd-c43cf937e4b3f0ae0256ce5aa1fda290d049d941bc53a303f59527d6169993c5.scope - libcontainer container c43cf937e4b3f0ae0256ce5aa1fda290d049d941bc53a303f59527d6169993c5. Sep 12 17:47:26.360698 containerd[1597]: time="2025-09-12T17:47:26.359454982Z" level=info msg="connecting to shim 79867c1623e6a0150e82f4e1473287ffa237b58b13d6552baacdfea60ad1b6c0" address="unix:///run/containerd/s/7238c6c19d8c1e2cc4f6e337b5b44edf07c83050426a8a61af1fc6f95f1e7fe8" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:47:26.366020 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:47:26.395169 systemd[1]: Started cri-containerd-79867c1623e6a0150e82f4e1473287ffa237b58b13d6552baacdfea60ad1b6c0.scope - libcontainer container 79867c1623e6a0150e82f4e1473287ffa237b58b13d6552baacdfea60ad1b6c0. 
Sep 12 17:47:26.405963 containerd[1597]: time="2025-09-12T17:47:26.405919378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59484845b9-r62pg,Uid:820b7569-1a21-437a-9be7-3d6ea66544ce,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c43cf937e4b3f0ae0256ce5aa1fda290d049d941bc53a303f59527d6169993c5\"" Sep 12 17:47:26.412253 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:47:26.444463 containerd[1597]: time="2025-09-12T17:47:26.444393044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7996b8f8fb-fs4k5,Uid:b604b9a3-e2b4-42d6-a4ad-7a0abbdc8585,Namespace:calico-system,Attempt:0,} returns sandbox id \"79867c1623e6a0150e82f4e1473287ffa237b58b13d6552baacdfea60ad1b6c0\"" Sep 12 17:47:26.982648 containerd[1597]: time="2025-09-12T17:47:26.982583304Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:26.984705 containerd[1597]: time="2025-09-12T17:47:26.984644112Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 12 17:47:26.987925 containerd[1597]: time="2025-09-12T17:47:26.987871940Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:26.991245 containerd[1597]: time="2025-09-12T17:47:26.991175471Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:26.991781 containerd[1597]: time="2025-09-12T17:47:26.991745822Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id 
\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.810950155s" Sep 12 17:47:26.991781 containerd[1597]: time="2025-09-12T17:47:26.991778763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 12 17:47:26.993068 containerd[1597]: time="2025-09-12T17:47:26.992811353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 17:47:27.002545 containerd[1597]: time="2025-09-12T17:47:27.002490800Z" level=info msg="CreateContainer within sandbox \"82b01c65efb30653357b28270817f97a3a7d6dc6044537ae12dd013896fa37b9\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 17:47:27.022120 containerd[1597]: time="2025-09-12T17:47:27.022042838Z" level=info msg="Container 9441508e8f562ccd83130ffc91164291cc7ec6c6d3f4002b7162423edea066bc: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:47:27.036161 containerd[1597]: time="2025-09-12T17:47:27.036096546Z" level=info msg="CreateContainer within sandbox \"82b01c65efb30653357b28270817f97a3a7d6dc6044537ae12dd013896fa37b9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9441508e8f562ccd83130ffc91164291cc7ec6c6d3f4002b7162423edea066bc\"" Sep 12 17:47:27.036886 containerd[1597]: time="2025-09-12T17:47:27.036835833Z" level=info msg="StartContainer for \"9441508e8f562ccd83130ffc91164291cc7ec6c6d3f4002b7162423edea066bc\"" Sep 12 17:47:27.039029 containerd[1597]: time="2025-09-12T17:47:27.038987261Z" level=info msg="connecting to shim 9441508e8f562ccd83130ffc91164291cc7ec6c6d3f4002b7162423edea066bc" address="unix:///run/containerd/s/dae33eedaafe268db24187a6008dc21998e89dbfc9205dfcbc9d739fab6ab123" protocol=ttrpc version=3 Sep 12 17:47:27.063925 
systemd[1]: Started cri-containerd-9441508e8f562ccd83130ffc91164291cc7ec6c6d3f4002b7162423edea066bc.scope - libcontainer container 9441508e8f562ccd83130ffc91164291cc7ec6c6d3f4002b7162423edea066bc. Sep 12 17:47:27.066197 kubelet[2754]: E0912 17:47:27.066156 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:27.186195 containerd[1597]: time="2025-09-12T17:47:27.186131830Z" level=info msg="StartContainer for \"9441508e8f562ccd83130ffc91164291cc7ec6c6d3f4002b7162423edea066bc\" returns successfully" Sep 12 17:47:27.573987 systemd-networkd[1497]: cali7f0e3ea73f9: Gained IPv6LL Sep 12 17:47:28.021965 systemd-networkd[1497]: calif35fa42687e: Gained IPv6LL Sep 12 17:47:28.567877 systemd[1]: Started sshd@8-10.0.0.93:22-10.0.0.1:54128.service - OpenSSH per-connection server daemon (10.0.0.1:54128). Sep 12 17:47:28.656121 sshd[5108]: Accepted publickey for core from 10.0.0.1 port 54128 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:47:28.658070 sshd-session[5108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:47:28.663651 systemd-logind[1585]: New session 9 of user core. Sep 12 17:47:28.669855 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:47:28.841074 sshd[5111]: Connection closed by 10.0.0.1 port 54128 Sep 12 17:47:28.841402 sshd-session[5108]: pam_unix(sshd:session): session closed for user core Sep 12 17:47:28.846874 systemd[1]: sshd@8-10.0.0.93:22-10.0.0.1:54128.service: Deactivated successfully. Sep 12 17:47:28.849383 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:47:28.850304 systemd-logind[1585]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:47:28.851769 systemd-logind[1585]: Removed session 9. 
Sep 12 17:47:30.154625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1566662454.mount: Deactivated successfully. Sep 12 17:47:30.794207 containerd[1597]: time="2025-09-12T17:47:30.794130372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:30.795090 containerd[1597]: time="2025-09-12T17:47:30.795030230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 12 17:47:30.796190 containerd[1597]: time="2025-09-12T17:47:30.796156604Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:30.798291 containerd[1597]: time="2025-09-12T17:47:30.798253841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:30.798942 containerd[1597]: time="2025-09-12T17:47:30.798895284Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.806055629s" Sep 12 17:47:30.798942 containerd[1597]: time="2025-09-12T17:47:30.798938485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 12 17:47:30.799763 containerd[1597]: time="2025-09-12T17:47:30.799718739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:47:30.810455 containerd[1597]: 
time="2025-09-12T17:47:30.810421494Z" level=info msg="CreateContainer within sandbox \"bb12a1d9efb0fb1b7631a7fd673035cea6729133f928c194e33baf16ba84cc65\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 17:47:30.819857 containerd[1597]: time="2025-09-12T17:47:30.819818699Z" level=info msg="Container 3a8e04c34fe4b479b9612fd33fc039239fa36aceeaa732dab018e46c845d322a: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:47:30.828321 containerd[1597]: time="2025-09-12T17:47:30.828277001Z" level=info msg="CreateContainer within sandbox \"bb12a1d9efb0fb1b7631a7fd673035cea6729133f928c194e33baf16ba84cc65\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"3a8e04c34fe4b479b9612fd33fc039239fa36aceeaa732dab018e46c845d322a\"" Sep 12 17:47:30.828835 containerd[1597]: time="2025-09-12T17:47:30.828813097Z" level=info msg="StartContainer for \"3a8e04c34fe4b479b9612fd33fc039239fa36aceeaa732dab018e46c845d322a\"" Sep 12 17:47:30.830068 containerd[1597]: time="2025-09-12T17:47:30.829962124Z" level=info msg="connecting to shim 3a8e04c34fe4b479b9612fd33fc039239fa36aceeaa732dab018e46c845d322a" address="unix:///run/containerd/s/4b3d35596364e74cd1870fde471cdb0928596bfccdd2158568e38d7983a0719e" protocol=ttrpc version=3 Sep 12 17:47:30.861938 systemd[1]: Started cri-containerd-3a8e04c34fe4b479b9612fd33fc039239fa36aceeaa732dab018e46c845d322a.scope - libcontainer container 3a8e04c34fe4b479b9612fd33fc039239fa36aceeaa732dab018e46c845d322a. 
Sep 12 17:47:30.915567 containerd[1597]: time="2025-09-12T17:47:30.915516072Z" level=info msg="StartContainer for \"3a8e04c34fe4b479b9612fd33fc039239fa36aceeaa732dab018e46c845d322a\" returns successfully" Sep 12 17:47:31.104197 kubelet[2754]: I0912 17:47:31.103001 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-7swhj" podStartSLOduration=28.35482851 podStartE2EDuration="36.102961849s" podCreationTimestamp="2025-09-12 17:46:55 +0000 UTC" firstStartedPulling="2025-09-12 17:47:23.051473289 +0000 UTC m=+46.229096263" lastFinishedPulling="2025-09-12 17:47:30.799606629 +0000 UTC m=+53.977229602" observedRunningTime="2025-09-12 17:47:31.100541588 +0000 UTC m=+54.278164561" watchObservedRunningTime="2025-09-12 17:47:31.102961849 +0000 UTC m=+54.280584822" Sep 12 17:47:31.178873 containerd[1597]: time="2025-09-12T17:47:31.178808344Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3a8e04c34fe4b479b9612fd33fc039239fa36aceeaa732dab018e46c845d322a\" id:\"7c53484c56a86e362c6e78ca4315b0ac0bb4c45d1493131929fae652ebdf37ca\" pid:5190 exited_at:{seconds:1757699251 nanos:178331900}" Sep 12 17:47:33.858848 systemd[1]: Started sshd@9-10.0.0.93:22-10.0.0.1:58652.service - OpenSSH per-connection server daemon (10.0.0.1:58652). Sep 12 17:47:34.149070 sshd[5213]: Accepted publickey for core from 10.0.0.1 port 58652 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:47:34.151040 sshd-session[5213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:47:34.155837 systemd-logind[1585]: New session 10 of user core. Sep 12 17:47:34.164883 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 12 17:47:34.293414 sshd[5216]: Connection closed by 10.0.0.1 port 58652 Sep 12 17:47:34.293802 sshd-session[5213]: pam_unix(sshd:session): session closed for user core Sep 12 17:47:34.300195 systemd[1]: sshd@9-10.0.0.93:22-10.0.0.1:58652.service: Deactivated successfully. Sep 12 17:47:34.302215 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:47:34.303198 systemd-logind[1585]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:47:34.304556 systemd-logind[1585]: Removed session 10. Sep 12 17:47:34.492246 containerd[1597]: time="2025-09-12T17:47:34.492025008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:34.495704 containerd[1597]: time="2025-09-12T17:47:34.495663374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 17:47:34.500712 containerd[1597]: time="2025-09-12T17:47:34.500657704Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:34.505956 containerd[1597]: time="2025-09-12T17:47:34.505911572Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:34.509108 containerd[1597]: time="2025-09-12T17:47:34.509059859Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.709291226s" Sep 12 17:47:34.509108 containerd[1597]: 
time="2025-09-12T17:47:34.509098933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:47:34.510385 containerd[1597]: time="2025-09-12T17:47:34.510096674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 17:47:34.515411 containerd[1597]: time="2025-09-12T17:47:34.515376290Z" level=info msg="CreateContainer within sandbox \"8ef2ffdbf429bdc2b1c21686e84e98dcdc690a428934f43b0a4f88aa7242b7bd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:47:34.525787 containerd[1597]: time="2025-09-12T17:47:34.525018791Z" level=info msg="Container 74c31d51710630e33e93b8ce19fbcdf70897905e85ffd001becadc673887ecac: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:47:34.536964 containerd[1597]: time="2025-09-12T17:47:34.536917216Z" level=info msg="CreateContainer within sandbox \"8ef2ffdbf429bdc2b1c21686e84e98dcdc690a428934f43b0a4f88aa7242b7bd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"74c31d51710630e33e93b8ce19fbcdf70897905e85ffd001becadc673887ecac\"" Sep 12 17:47:34.537539 containerd[1597]: time="2025-09-12T17:47:34.537503305Z" level=info msg="StartContainer for \"74c31d51710630e33e93b8ce19fbcdf70897905e85ffd001becadc673887ecac\"" Sep 12 17:47:34.538574 containerd[1597]: time="2025-09-12T17:47:34.538542074Z" level=info msg="connecting to shim 74c31d51710630e33e93b8ce19fbcdf70897905e85ffd001becadc673887ecac" address="unix:///run/containerd/s/fa9f695c09769fe268e48314262230702984b0c97825f7729b24720beca51cf2" protocol=ttrpc version=3 Sep 12 17:47:34.603877 systemd[1]: Started cri-containerd-74c31d51710630e33e93b8ce19fbcdf70897905e85ffd001becadc673887ecac.scope - libcontainer container 74c31d51710630e33e93b8ce19fbcdf70897905e85ffd001becadc673887ecac. 
Sep 12 17:47:34.665648 containerd[1597]: time="2025-09-12T17:47:34.665595544Z" level=info msg="StartContainer for \"74c31d51710630e33e93b8ce19fbcdf70897905e85ffd001becadc673887ecac\" returns successfully" Sep 12 17:47:35.109216 kubelet[2754]: I0912 17:47:35.109127 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59484845b9-ck2cf" podStartSLOduration=31.922076193 podStartE2EDuration="42.10911089s" podCreationTimestamp="2025-09-12 17:46:53 +0000 UTC" firstStartedPulling="2025-09-12 17:47:24.322904402 +0000 UTC m=+47.500527375" lastFinishedPulling="2025-09-12 17:47:34.509939089 +0000 UTC m=+57.687562072" observedRunningTime="2025-09-12 17:47:35.108869667 +0000 UTC m=+58.286492630" watchObservedRunningTime="2025-09-12 17:47:35.10911089 +0000 UTC m=+58.286733863" Sep 12 17:47:38.343127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2272311469.mount: Deactivated successfully. Sep 12 17:47:38.620679 containerd[1597]: time="2025-09-12T17:47:38.620524787Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:38.622148 containerd[1597]: time="2025-09-12T17:47:38.622099831Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 12 17:47:38.623125 containerd[1597]: time="2025-09-12T17:47:38.623066524Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:38.626015 containerd[1597]: time="2025-09-12T17:47:38.625969260Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:38.626590 containerd[1597]: 
time="2025-09-12T17:47:38.626549087Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 4.116393151s" Sep 12 17:47:38.626590 containerd[1597]: time="2025-09-12T17:47:38.626579023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 12 17:47:38.627964 containerd[1597]: time="2025-09-12T17:47:38.627587365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:47:38.631887 containerd[1597]: time="2025-09-12T17:47:38.631849340Z" level=info msg="CreateContainer within sandbox \"b5129859b8d634a2eec26d6f1adee1d0e0c76f0a39beb80fd4bc1ca09ee4de00\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 17:47:38.642975 containerd[1597]: time="2025-09-12T17:47:38.642910551Z" level=info msg="Container edfe0941db0268dd855be6bc8e7355e69df80116a1027bdc8eea059b1b3683b9: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:47:38.652095 containerd[1597]: time="2025-09-12T17:47:38.652041020Z" level=info msg="CreateContainer within sandbox \"b5129859b8d634a2eec26d6f1adee1d0e0c76f0a39beb80fd4bc1ca09ee4de00\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"edfe0941db0268dd855be6bc8e7355e69df80116a1027bdc8eea059b1b3683b9\"" Sep 12 17:47:38.658117 containerd[1597]: time="2025-09-12T17:47:38.658056724Z" level=info msg="StartContainer for \"edfe0941db0268dd855be6bc8e7355e69df80116a1027bdc8eea059b1b3683b9\"" Sep 12 17:47:38.659444 containerd[1597]: time="2025-09-12T17:47:38.659303743Z" level=info msg="connecting to shim 
edfe0941db0268dd855be6bc8e7355e69df80116a1027bdc8eea059b1b3683b9" address="unix:///run/containerd/s/9212f2c346873ed277ae1d38062f3bcefe23bbf3797096a22815975acbd2881a" protocol=ttrpc version=3 Sep 12 17:47:38.692923 systemd[1]: Started cri-containerd-edfe0941db0268dd855be6bc8e7355e69df80116a1027bdc8eea059b1b3683b9.scope - libcontainer container edfe0941db0268dd855be6bc8e7355e69df80116a1027bdc8eea059b1b3683b9. Sep 12 17:47:38.744002 containerd[1597]: time="2025-09-12T17:47:38.743955535Z" level=info msg="StartContainer for \"edfe0941db0268dd855be6bc8e7355e69df80116a1027bdc8eea059b1b3683b9\" returns successfully" Sep 12 17:47:39.218381 kubelet[2754]: I0912 17:47:39.218258 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-fdd8fcf8c-xdb8c" podStartSLOduration=2.4387272700000002 podStartE2EDuration="20.218238711s" podCreationTimestamp="2025-09-12 17:47:19 +0000 UTC" firstStartedPulling="2025-09-12 17:47:20.847938056 +0000 UTC m=+44.025561019" lastFinishedPulling="2025-09-12 17:47:38.627449487 +0000 UTC m=+61.805072460" observedRunningTime="2025-09-12 17:47:39.218008149 +0000 UTC m=+62.395631122" watchObservedRunningTime="2025-09-12 17:47:39.218238711 +0000 UTC m=+62.395861684" Sep 12 17:47:39.248081 containerd[1597]: time="2025-09-12T17:47:39.248013486Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:39.249239 containerd[1597]: time="2025-09-12T17:47:39.249197136Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:47:39.252301 containerd[1597]: time="2025-09-12T17:47:39.252163270Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 624.535348ms" Sep 12 17:47:39.252301 containerd[1597]: time="2025-09-12T17:47:39.252212693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:47:39.255136 containerd[1597]: time="2025-09-12T17:47:39.255101582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:47:39.259358 containerd[1597]: time="2025-09-12T17:47:39.259318703Z" level=info msg="CreateContainer within sandbox \"c43cf937e4b3f0ae0256ce5aa1fda290d049d941bc53a303f59527d6169993c5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:47:39.270897 containerd[1597]: time="2025-09-12T17:47:39.270817854Z" level=info msg="Container 1656975e050ced6b0e6d8dedf73aabb86e9872c403a2d237b9898d5cb9d463fe: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:47:39.280523 containerd[1597]: time="2025-09-12T17:47:39.280469199Z" level=info msg="CreateContainer within sandbox \"c43cf937e4b3f0ae0256ce5aa1fda290d049d941bc53a303f59527d6169993c5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1656975e050ced6b0e6d8dedf73aabb86e9872c403a2d237b9898d5cb9d463fe\"" Sep 12 17:47:39.281433 containerd[1597]: time="2025-09-12T17:47:39.281402659Z" level=info msg="StartContainer for \"1656975e050ced6b0e6d8dedf73aabb86e9872c403a2d237b9898d5cb9d463fe\"" Sep 12 17:47:39.282900 containerd[1597]: time="2025-09-12T17:47:39.282857870Z" level=info msg="connecting to shim 1656975e050ced6b0e6d8dedf73aabb86e9872c403a2d237b9898d5cb9d463fe" address="unix:///run/containerd/s/52d06808c2dd824ae211a699ad46a77047205932b5bad398857f575fdd503bc3" protocol=ttrpc version=3 Sep 12 17:47:39.324995 systemd[1]: Started cri-containerd-1656975e050ced6b0e6d8dedf73aabb86e9872c403a2d237b9898d5cb9d463fe.scope - 
libcontainer container 1656975e050ced6b0e6d8dedf73aabb86e9872c403a2d237b9898d5cb9d463fe. Sep 12 17:47:39.326837 systemd[1]: Started sshd@10-10.0.0.93:22-10.0.0.1:58654.service - OpenSSH per-connection server daemon (10.0.0.1:58654). Sep 12 17:47:39.399437 containerd[1597]: time="2025-09-12T17:47:39.399385492Z" level=info msg="StartContainer for \"1656975e050ced6b0e6d8dedf73aabb86e9872c403a2d237b9898d5cb9d463fe\" returns successfully" Sep 12 17:47:39.405200 sshd[5333]: Accepted publickey for core from 10.0.0.1 port 58654 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:47:39.408065 sshd-session[5333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:47:39.415336 systemd-logind[1585]: New session 11 of user core. Sep 12 17:47:39.422977 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:47:39.567656 sshd[5355]: Connection closed by 10.0.0.1 port 58654 Sep 12 17:47:39.569006 sshd-session[5333]: pam_unix(sshd:session): session closed for user core Sep 12 17:47:39.578711 systemd[1]: sshd@10-10.0.0.93:22-10.0.0.1:58654.service: Deactivated successfully. Sep 12 17:47:39.581248 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:47:39.582252 systemd-logind[1585]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:47:39.585296 systemd[1]: Started sshd@11-10.0.0.93:22-10.0.0.1:58666.service - OpenSSH per-connection server daemon (10.0.0.1:58666). Sep 12 17:47:39.587029 systemd-logind[1585]: Removed session 11. Sep 12 17:47:39.645408 sshd[5373]: Accepted publickey for core from 10.0.0.1 port 58666 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:47:39.647450 sshd-session[5373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:47:39.653787 systemd-logind[1585]: New session 12 of user core. Sep 12 17:47:39.660319 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 12 17:47:39.836839 sshd[5376]: Connection closed by 10.0.0.1 port 58666 Sep 12 17:47:39.838058 sshd-session[5373]: pam_unix(sshd:session): session closed for user core Sep 12 17:47:39.849191 systemd[1]: sshd@11-10.0.0.93:22-10.0.0.1:58666.service: Deactivated successfully. Sep 12 17:47:39.853922 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:47:39.858792 systemd-logind[1585]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:47:39.864012 systemd[1]: Started sshd@12-10.0.0.93:22-10.0.0.1:58672.service - OpenSSH per-connection server daemon (10.0.0.1:58672). Sep 12 17:47:39.865987 systemd-logind[1585]: Removed session 12. Sep 12 17:47:39.913522 sshd[5388]: Accepted publickey for core from 10.0.0.1 port 58672 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:47:39.915167 sshd-session[5388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:47:39.920004 systemd-logind[1585]: New session 13 of user core. Sep 12 17:47:39.929253 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:47:40.053454 sshd[5392]: Connection closed by 10.0.0.1 port 58672 Sep 12 17:47:40.053793 sshd-session[5388]: pam_unix(sshd:session): session closed for user core Sep 12 17:47:40.058376 systemd[1]: sshd@12-10.0.0.93:22-10.0.0.1:58672.service: Deactivated successfully. Sep 12 17:47:40.060523 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:47:40.061561 systemd-logind[1585]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:47:40.063362 systemd-logind[1585]: Removed session 13. 
Sep 12 17:47:41.361571 kubelet[2754]: I0912 17:47:41.361502 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59484845b9-r62pg" podStartSLOduration=35.514888659 podStartE2EDuration="48.36148504s" podCreationTimestamp="2025-09-12 17:46:53 +0000 UTC" firstStartedPulling="2025-09-12 17:47:26.407564336 +0000 UTC m=+49.585187309" lastFinishedPulling="2025-09-12 17:47:39.254160717 +0000 UTC m=+62.431783690" observedRunningTime="2025-09-12 17:47:40.122749253 +0000 UTC m=+63.300372256" watchObservedRunningTime="2025-09-12 17:47:41.36148504 +0000 UTC m=+64.539108013" Sep 12 17:47:45.043115 containerd[1597]: time="2025-09-12T17:47:45.043037227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:45.048466 containerd[1597]: time="2025-09-12T17:47:45.048431957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 12 17:47:45.058238 containerd[1597]: time="2025-09-12T17:47:45.058167483Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:45.060762 containerd[1597]: time="2025-09-12T17:47:45.060693998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:45.061779 containerd[1597]: time="2025-09-12T17:47:45.061488201Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 5.80635008s" Sep 12 17:47:45.061779 containerd[1597]: time="2025-09-12T17:47:45.061533418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 12 17:47:45.063128 containerd[1597]: time="2025-09-12T17:47:45.063102806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 17:47:45.070000 systemd[1]: Started sshd@13-10.0.0.93:22-10.0.0.1:52296.service - OpenSSH per-connection server daemon (10.0.0.1:52296). Sep 12 17:47:45.082327 containerd[1597]: time="2025-09-12T17:47:45.082276103Z" level=info msg="CreateContainer within sandbox \"79867c1623e6a0150e82f4e1473287ffa237b58b13d6552baacdfea60ad1b6c0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 17:47:45.093503 containerd[1597]: time="2025-09-12T17:47:45.092979577Z" level=info msg="Container 774f0a12621b423a1786e3dc3f22cc22b5d75560372cf7b3880cd97691d0d537: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:47:45.105959 containerd[1597]: time="2025-09-12T17:47:45.105901010Z" level=info msg="CreateContainer within sandbox \"79867c1623e6a0150e82f4e1473287ffa237b58b13d6552baacdfea60ad1b6c0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"774f0a12621b423a1786e3dc3f22cc22b5d75560372cf7b3880cd97691d0d537\"" Sep 12 17:47:45.108906 containerd[1597]: time="2025-09-12T17:47:45.108817248Z" level=info msg="StartContainer for \"774f0a12621b423a1786e3dc3f22cc22b5d75560372cf7b3880cd97691d0d537\"" Sep 12 17:47:45.110524 containerd[1597]: time="2025-09-12T17:47:45.110437434Z" level=info msg="connecting to shim 774f0a12621b423a1786e3dc3f22cc22b5d75560372cf7b3880cd97691d0d537" 
address="unix:///run/containerd/s/7238c6c19d8c1e2cc4f6e337b5b44edf07c83050426a8a61af1fc6f95f1e7fe8" protocol=ttrpc version=3 Sep 12 17:47:45.142021 systemd[1]: Started cri-containerd-774f0a12621b423a1786e3dc3f22cc22b5d75560372cf7b3880cd97691d0d537.scope - libcontainer container 774f0a12621b423a1786e3dc3f22cc22b5d75560372cf7b3880cd97691d0d537. Sep 12 17:47:45.161809 sshd[5424]: Accepted publickey for core from 10.0.0.1 port 52296 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:47:45.164388 sshd-session[5424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:47:45.171405 systemd-logind[1585]: New session 14 of user core. Sep 12 17:47:45.181039 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:47:45.291790 containerd[1597]: time="2025-09-12T17:47:45.291741311Z" level=info msg="StartContainer for \"774f0a12621b423a1786e3dc3f22cc22b5d75560372cf7b3880cd97691d0d537\" returns successfully" Sep 12 17:47:45.457294 sshd[5447]: Connection closed by 10.0.0.1 port 52296 Sep 12 17:47:45.457755 sshd-session[5424]: pam_unix(sshd:session): session closed for user core Sep 12 17:47:45.466334 systemd[1]: sshd@13-10.0.0.93:22-10.0.0.1:52296.service: Deactivated successfully. Sep 12 17:47:45.469965 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:47:45.472478 systemd-logind[1585]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:47:45.474574 systemd-logind[1585]: Removed session 14. 
Sep 12 17:47:46.143630 kubelet[2754]: I0912 17:47:46.143554 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7996b8f8fb-fs4k5" podStartSLOduration=30.526497297 podStartE2EDuration="49.14353553s" podCreationTimestamp="2025-09-12 17:46:57 +0000 UTC" firstStartedPulling="2025-09-12 17:47:26.445671605 +0000 UTC m=+49.623294578" lastFinishedPulling="2025-09-12 17:47:45.062709838 +0000 UTC m=+68.240332811" observedRunningTime="2025-09-12 17:47:46.141718817 +0000 UTC m=+69.319341790" watchObservedRunningTime="2025-09-12 17:47:46.14353553 +0000 UTC m=+69.321158503" Sep 12 17:47:46.189199 containerd[1597]: time="2025-09-12T17:47:46.189154127Z" level=info msg="TaskExit event in podsandbox handler container_id:\"774f0a12621b423a1786e3dc3f22cc22b5d75560372cf7b3880cd97691d0d537\" id:\"2627d23ae6a941e30fdfe45532784706ca031f28744e125342073579b491aef5\" pid:5496 exited_at:{seconds:1757699266 nanos:188800876}" Sep 12 17:47:47.457129 containerd[1597]: time="2025-09-12T17:47:47.457051244Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:47.483874 containerd[1597]: time="2025-09-12T17:47:47.483804558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 12 17:47:47.497704 containerd[1597]: time="2025-09-12T17:47:47.497657679Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:47.549420 containerd[1597]: time="2025-09-12T17:47:47.526154642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:47:47.549509 
containerd[1597]: time="2025-09-12T17:47:47.526915348Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.463661259s" Sep 12 17:47:47.549553 containerd[1597]: time="2025-09-12T17:47:47.549530930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 12 17:47:47.604839 containerd[1597]: time="2025-09-12T17:47:47.604802679Z" level=info msg="CreateContainer within sandbox \"82b01c65efb30653357b28270817f97a3a7d6dc6044537ae12dd013896fa37b9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 17:47:47.720006 containerd[1597]: time="2025-09-12T17:47:47.719870956Z" level=info msg="Container 326a7d495760ad3b667e221547d38d2762c4df3ff995f8d8a4829c83a14e3f1d: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:47:47.853407 containerd[1597]: time="2025-09-12T17:47:47.853347738Z" level=info msg="CreateContainer within sandbox \"82b01c65efb30653357b28270817f97a3a7d6dc6044537ae12dd013896fa37b9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"326a7d495760ad3b667e221547d38d2762c4df3ff995f8d8a4829c83a14e3f1d\"" Sep 12 17:47:47.853993 containerd[1597]: time="2025-09-12T17:47:47.853958024Z" level=info msg="StartContainer for \"326a7d495760ad3b667e221547d38d2762c4df3ff995f8d8a4829c83a14e3f1d\"" Sep 12 17:47:47.855882 containerd[1597]: time="2025-09-12T17:47:47.855840180Z" level=info msg="connecting to shim 326a7d495760ad3b667e221547d38d2762c4df3ff995f8d8a4829c83a14e3f1d" 
address="unix:///run/containerd/s/dae33eedaafe268db24187a6008dc21998e89dbfc9205dfcbc9d739fab6ab123" protocol=ttrpc version=3 Sep 12 17:47:47.881906 systemd[1]: Started cri-containerd-326a7d495760ad3b667e221547d38d2762c4df3ff995f8d8a4829c83a14e3f1d.scope - libcontainer container 326a7d495760ad3b667e221547d38d2762c4df3ff995f8d8a4829c83a14e3f1d. Sep 12 17:47:48.197276 containerd[1597]: time="2025-09-12T17:47:48.197227056Z" level=info msg="StartContainer for \"326a7d495760ad3b667e221547d38d2762c4df3ff995f8d8a4829c83a14e3f1d\" returns successfully" Sep 12 17:47:48.988584 kubelet[2754]: I0912 17:47:48.988528 2754 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 17:47:49.022101 kubelet[2754]: I0912 17:47:49.022054 2754 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 17:47:49.227510 kubelet[2754]: I0912 17:47:49.227435 2754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9vzb4" podStartSLOduration=26.12714049 podStartE2EDuration="52.227400351s" podCreationTimestamp="2025-09-12 17:46:57 +0000 UTC" firstStartedPulling="2025-09-12 17:47:21.449984904 +0000 UTC m=+44.627607877" lastFinishedPulling="2025-09-12 17:47:47.550244765 +0000 UTC m=+70.727867738" observedRunningTime="2025-09-12 17:47:49.22710849 +0000 UTC m=+72.404731453" watchObservedRunningTime="2025-09-12 17:47:49.227400351 +0000 UTC m=+72.405023324" Sep 12 17:47:50.473983 systemd[1]: Started sshd@14-10.0.0.93:22-10.0.0.1:43456.service - OpenSSH per-connection server daemon (10.0.0.1:43456). 
Sep 12 17:47:50.550833 sshd[5553]: Accepted publickey for core from 10.0.0.1 port 43456 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:47:50.553036 sshd-session[5553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:47:50.558365 systemd-logind[1585]: New session 15 of user core. Sep 12 17:47:50.567895 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:47:50.753715 sshd[5556]: Connection closed by 10.0.0.1 port 43456 Sep 12 17:47:50.753989 sshd-session[5553]: pam_unix(sshd:session): session closed for user core Sep 12 17:47:50.760331 systemd[1]: sshd@14-10.0.0.93:22-10.0.0.1:43456.service: Deactivated successfully. Sep 12 17:47:50.762479 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:47:50.763386 systemd-logind[1585]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:47:50.764546 systemd-logind[1585]: Removed session 15. Sep 12 17:47:51.151786 containerd[1597]: time="2025-09-12T17:47:51.151697181Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f97ba78818e315a216b6987a99114174d5fb63de18f091ffe353aee54405164\" id:\"cac7c0cb4154490936e4af99178c00975e5f7db2387f6fea14af09989d2ccadd\" pid:5582 exit_status:1 exited_at:{seconds:1757699271 nanos:151208232}" Sep 12 17:47:55.766351 systemd[1]: Started sshd@15-10.0.0.93:22-10.0.0.1:43470.service - OpenSSH per-connection server daemon (10.0.0.1:43470). Sep 12 17:47:55.824229 sshd[5596]: Accepted publickey for core from 10.0.0.1 port 43470 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:47:55.825777 sshd-session[5596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:47:55.830786 systemd-logind[1585]: New session 16 of user core. Sep 12 17:47:55.846908 systemd[1]: Started session-16.scope - Session 16 of User core. 
Sep 12 17:47:55.975994 sshd[5599]: Connection closed by 10.0.0.1 port 43470 Sep 12 17:47:55.976411 sshd-session[5596]: pam_unix(sshd:session): session closed for user core Sep 12 17:47:55.980342 systemd[1]: sshd@15-10.0.0.93:22-10.0.0.1:43470.service: Deactivated successfully. Sep 12 17:47:55.983375 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:47:55.985984 systemd-logind[1585]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:47:55.988349 systemd-logind[1585]: Removed session 16. Sep 12 17:47:56.907223 kubelet[2754]: E0912 17:47:56.907049 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:47:58.906689 kubelet[2754]: E0912 17:47:58.906612 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:48:00.907758 kubelet[2754]: E0912 17:48:00.906539 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:48:00.994191 systemd[1]: Started sshd@16-10.0.0.93:22-10.0.0.1:37624.service - OpenSSH per-connection server daemon (10.0.0.1:37624). Sep 12 17:48:01.087140 sshd[5620]: Accepted publickey for core from 10.0.0.1 port 37624 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:48:01.089667 sshd-session[5620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:48:01.101344 systemd-logind[1585]: New session 17 of user core. Sep 12 17:48:01.106903 systemd[1]: Started session-17.scope - Session 17 of User core. 
Sep 12 17:48:01.253495 containerd[1597]: time="2025-09-12T17:48:01.253316263Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3a8e04c34fe4b479b9612fd33fc039239fa36aceeaa732dab018e46c845d322a\" id:\"ca28858c5ef182c248fedf34cd045a099f108ab5289cd168129e372329f23566\" pid:5636 exited_at:{seconds:1757699281 nanos:252864470}" Sep 12 17:48:01.424602 sshd[5637]: Connection closed by 10.0.0.1 port 37624 Sep 12 17:48:01.425126 sshd-session[5620]: pam_unix(sshd:session): session closed for user core Sep 12 17:48:01.432444 systemd[1]: sshd@16-10.0.0.93:22-10.0.0.1:37624.service: Deactivated successfully. Sep 12 17:48:01.435212 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:48:01.436234 systemd-logind[1585]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:48:01.438311 systemd-logind[1585]: Removed session 17. Sep 12 17:48:06.445929 systemd[1]: Started sshd@17-10.0.0.93:22-10.0.0.1:37636.service - OpenSSH per-connection server daemon (10.0.0.1:37636). Sep 12 17:48:06.499675 sshd[5664]: Accepted publickey for core from 10.0.0.1 port 37636 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE Sep 12 17:48:06.501715 sshd-session[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:48:06.506440 systemd-logind[1585]: New session 18 of user core. Sep 12 17:48:06.515878 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 17:48:06.639935 sshd[5667]: Connection closed by 10.0.0.1 port 37636 Sep 12 17:48:06.640392 sshd-session[5664]: pam_unix(sshd:session): session closed for user core Sep 12 17:48:06.650920 systemd[1]: sshd@17-10.0.0.93:22-10.0.0.1:37636.service: Deactivated successfully. Sep 12 17:48:06.653130 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:48:06.653922 systemd-logind[1585]: Session 18 logged out. Waiting for processes to exit. 
Sep 12 17:48:06.657819 systemd[1]: Started sshd@18-10.0.0.93:22-10.0.0.1:37652.service - OpenSSH per-connection server daemon (10.0.0.1:37652).
Sep 12 17:48:06.659328 systemd-logind[1585]: Removed session 18.
Sep 12 17:48:06.718131 sshd[5680]: Accepted publickey for core from 10.0.0.1 port 37652 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE
Sep 12 17:48:06.719963 sshd-session[5680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:48:06.725322 systemd-logind[1585]: New session 19 of user core.
Sep 12 17:48:06.736062 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 17:48:07.623073 sshd[5683]: Connection closed by 10.0.0.1 port 37652
Sep 12 17:48:07.623583 sshd-session[5680]: pam_unix(sshd:session): session closed for user core
Sep 12 17:48:07.634761 systemd[1]: sshd@18-10.0.0.93:22-10.0.0.1:37652.service: Deactivated successfully.
Sep 12 17:48:07.637245 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 17:48:07.638200 systemd-logind[1585]: Session 19 logged out. Waiting for processes to exit.
Sep 12 17:48:07.641698 systemd[1]: Started sshd@19-10.0.0.93:22-10.0.0.1:37664.service - OpenSSH per-connection server daemon (10.0.0.1:37664).
Sep 12 17:48:07.642969 systemd-logind[1585]: Removed session 19.
Sep 12 17:48:07.710498 sshd[5696]: Accepted publickey for core from 10.0.0.1 port 37664 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE
Sep 12 17:48:07.712142 sshd-session[5696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:48:07.717047 systemd-logind[1585]: New session 20 of user core.
Sep 12 17:48:07.726884 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 17:48:08.570925 sshd[5699]: Connection closed by 10.0.0.1 port 37664
Sep 12 17:48:08.571974 sshd-session[5696]: pam_unix(sshd:session): session closed for user core
Sep 12 17:48:08.583275 systemd[1]: sshd@19-10.0.0.93:22-10.0.0.1:37664.service: Deactivated successfully.
Sep 12 17:48:08.585433 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 17:48:08.586370 systemd-logind[1585]: Session 20 logged out. Waiting for processes to exit.
Sep 12 17:48:08.591575 systemd[1]: Started sshd@20-10.0.0.93:22-10.0.0.1:37674.service - OpenSSH per-connection server daemon (10.0.0.1:37674).
Sep 12 17:48:08.593630 systemd-logind[1585]: Removed session 20.
Sep 12 17:48:08.658787 sshd[5719]: Accepted publickey for core from 10.0.0.1 port 37674 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE
Sep 12 17:48:08.660624 sshd-session[5719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:48:08.665850 systemd-logind[1585]: New session 21 of user core.
Sep 12 17:48:08.673880 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 17:48:08.964425 sshd[5722]: Connection closed by 10.0.0.1 port 37674
Sep 12 17:48:08.964926 sshd-session[5719]: pam_unix(sshd:session): session closed for user core
Sep 12 17:48:08.977864 systemd[1]: sshd@20-10.0.0.93:22-10.0.0.1:37674.service: Deactivated successfully.
Sep 12 17:48:08.981459 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 17:48:08.983957 systemd-logind[1585]: Session 21 logged out. Waiting for processes to exit.
Sep 12 17:48:08.986697 systemd-logind[1585]: Removed session 21.
Sep 12 17:48:08.988556 systemd[1]: Started sshd@21-10.0.0.93:22-10.0.0.1:37678.service - OpenSSH per-connection server daemon (10.0.0.1:37678).
Sep 12 17:48:09.053347 sshd[5734]: Accepted publickey for core from 10.0.0.1 port 37678 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE
Sep 12 17:48:09.055357 sshd-session[5734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:48:09.061365 systemd-logind[1585]: New session 22 of user core.
Sep 12 17:48:09.072911 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 17:48:09.180441 sshd[5737]: Connection closed by 10.0.0.1 port 37678
Sep 12 17:48:09.180866 sshd-session[5734]: pam_unix(sshd:session): session closed for user core
Sep 12 17:48:09.185138 systemd[1]: sshd@21-10.0.0.93:22-10.0.0.1:37678.service: Deactivated successfully.
Sep 12 17:48:09.187399 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 17:48:09.188411 systemd-logind[1585]: Session 22 logged out. Waiting for processes to exit.
Sep 12 17:48:09.189600 systemd-logind[1585]: Removed session 22.
Sep 12 17:48:09.907552 kubelet[2754]: E0912 17:48:09.907137 2754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:48:14.198874 systemd[1]: Started sshd@22-10.0.0.93:22-10.0.0.1:52898.service - OpenSSH per-connection server daemon (10.0.0.1:52898).
Sep 12 17:48:14.278629 sshd[5750]: Accepted publickey for core from 10.0.0.1 port 52898 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE
Sep 12 17:48:14.280453 sshd-session[5750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:48:14.285427 systemd-logind[1585]: New session 23 of user core.
Sep 12 17:48:14.290894 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 17:48:14.455459 sshd[5753]: Connection closed by 10.0.0.1 port 52898
Sep 12 17:48:14.455782 sshd-session[5750]: pam_unix(sshd:session): session closed for user core
Sep 12 17:48:14.460370 systemd[1]: sshd@22-10.0.0.93:22-10.0.0.1:52898.service: Deactivated successfully.
Sep 12 17:48:14.462575 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 17:48:14.463420 systemd-logind[1585]: Session 23 logged out. Waiting for processes to exit.
Sep 12 17:48:14.464654 systemd-logind[1585]: Removed session 23.
Sep 12 17:48:16.250098 containerd[1597]: time="2025-09-12T17:48:16.249981260Z" level=info msg="TaskExit event in podsandbox handler container_id:\"774f0a12621b423a1786e3dc3f22cc22b5d75560372cf7b3880cd97691d0d537\" id:\"6830b0343219832f7c8d552dc72d56085f0735dc16d553baa86ef5d7806d7033\" pid:5780 exited_at:{seconds:1757699296 nanos:249721938}"
Sep 12 17:48:19.469137 systemd[1]: Started sshd@23-10.0.0.93:22-10.0.0.1:52910.service - OpenSSH per-connection server daemon (10.0.0.1:52910).
Sep 12 17:48:19.550462 sshd[5793]: Accepted publickey for core from 10.0.0.1 port 52910 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE
Sep 12 17:48:19.552136 sshd-session[5793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:48:19.556936 systemd-logind[1585]: New session 24 of user core.
Sep 12 17:48:19.565871 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 17:48:19.681363 sshd[5796]: Connection closed by 10.0.0.1 port 52910
Sep 12 17:48:19.681783 sshd-session[5793]: pam_unix(sshd:session): session closed for user core
Sep 12 17:48:19.685227 systemd[1]: sshd@23-10.0.0.93:22-10.0.0.1:52910.service: Deactivated successfully.
Sep 12 17:48:19.687327 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 17:48:19.689081 systemd-logind[1585]: Session 24 logged out. Waiting for processes to exit.
Sep 12 17:48:19.690531 systemd-logind[1585]: Removed session 24.
Sep 12 17:48:20.086326 containerd[1597]: time="2025-09-12T17:48:20.086256049Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3a8e04c34fe4b479b9612fd33fc039239fa36aceeaa732dab018e46c845d322a\" id:\"23bfeddcb0650a4b1e45c44388f30f85846807cecea5bab83ccb84a625ef452b\" pid:5820 exited_at:{seconds:1757699300 nanos:85859387}"
Sep 12 17:48:21.123344 containerd[1597]: time="2025-09-12T17:48:21.123280357Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f97ba78818e315a216b6987a99114174d5fb63de18f091ffe353aee54405164\" id:\"895a2ec2e2891c786194aac8dc5ae528e7d3536071d4b7d4cf5bee20711d5203\" pid:5843 exited_at:{seconds:1757699301 nanos:122859349}"
Sep 12 17:48:24.695319 systemd[1]: Started sshd@24-10.0.0.93:22-10.0.0.1:57402.service - OpenSSH per-connection server daemon (10.0.0.1:57402).
Sep 12 17:48:24.762825 sshd[5857]: Accepted publickey for core from 10.0.0.1 port 57402 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE
Sep 12 17:48:24.765001 sshd-session[5857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:48:24.770244 systemd-logind[1585]: New session 25 of user core.
Sep 12 17:48:24.776005 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 17:48:24.950583 sshd[5860]: Connection closed by 10.0.0.1 port 57402
Sep 12 17:48:24.951036 sshd-session[5857]: pam_unix(sshd:session): session closed for user core
Sep 12 17:48:24.956459 systemd[1]: sshd@24-10.0.0.93:22-10.0.0.1:57402.service: Deactivated successfully.
Sep 12 17:48:24.958792 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 17:48:24.959566 systemd-logind[1585]: Session 25 logged out. Waiting for processes to exit.
Sep 12 17:48:24.960950 systemd-logind[1585]: Removed session 25.
Sep 12 17:48:29.964517 systemd[1]: Started sshd@25-10.0.0.93:22-10.0.0.1:35106.service - OpenSSH per-connection server daemon (10.0.0.1:35106).
Sep 12 17:48:30.085418 sshd[5875]: Accepted publickey for core from 10.0.0.1 port 35106 ssh2: RSA SHA256:fiC/i3IODFTUvy597QlN9UclswHBzEHPUbvMhtWvcQE
Sep 12 17:48:30.087152 sshd-session[5875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:48:30.091820 systemd-logind[1585]: New session 26 of user core.
Sep 12 17:48:30.103913 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 12 17:48:30.476756 sshd[5878]: Connection closed by 10.0.0.1 port 35106
Sep 12 17:48:30.477398 sshd-session[5875]: pam_unix(sshd:session): session closed for user core
Sep 12 17:48:30.485361 systemd[1]: sshd@25-10.0.0.93:22-10.0.0.1:35106.service: Deactivated successfully.
Sep 12 17:48:30.491236 systemd[1]: session-26.scope: Deactivated successfully.
Sep 12 17:48:30.494665 systemd-logind[1585]: Session 26 logged out. Waiting for processes to exit.
Sep 12 17:48:30.498140 systemd-logind[1585]: Removed session 26.
Sep 12 17:48:31.178670 containerd[1597]: time="2025-09-12T17:48:31.178608839Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3a8e04c34fe4b479b9612fd33fc039239fa36aceeaa732dab018e46c845d322a\" id:\"92f551ad9f2f2ca516764fa6ba1a642b5f0c6df2bf9cb27fd834fc5f11cc7493\" pid:5902 exited_at:{seconds:1757699311 nanos:177935474}"