Jul 10 00:20:54.947195 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Jul 9 22:15:30 -00 2025 Jul 10 00:20:54.947304 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea Jul 10 00:20:54.947332 kernel: BIOS-provided physical RAM map: Jul 10 00:20:54.947343 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable Jul 10 00:20:54.947356 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved Jul 10 00:20:54.947367 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable Jul 10 00:20:54.947380 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved Jul 10 00:20:54.947391 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable Jul 10 00:20:54.947405 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved Jul 10 00:20:54.947416 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Jul 10 00:20:54.947427 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Jul 10 00:20:54.947441 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Jul 10 00:20:54.947450 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Jul 10 00:20:54.947459 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Jul 10 00:20:54.947469 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable Jul 10 00:20:54.947479 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved Jul 10 00:20:54.947497 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jul 10 00:20:54.947506 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 10 00:20:54.947515 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 10 00:20:54.947525 kernel: NX (Execute Disable) protection: active Jul 10 00:20:54.947534 kernel: APIC: Static calls initialized Jul 10 00:20:54.947543 kernel: e820: update [mem 0x9a13e018-0x9a147c57] usable ==> usable Jul 10 00:20:54.947553 kernel: e820: update [mem 0x9a101018-0x9a13de57] usable ==> usable Jul 10 00:20:54.947562 kernel: extended physical RAM map: Jul 10 00:20:54.947571 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable Jul 10 00:20:54.947580 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved Jul 10 00:20:54.947590 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable Jul 10 00:20:54.947607 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved Jul 10 00:20:54.947616 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a101017] usable Jul 10 00:20:54.947625 kernel: reserve setup_data: [mem 0x000000009a101018-0x000000009a13de57] usable Jul 10 00:20:54.947634 kernel: reserve setup_data: [mem 0x000000009a13de58-0x000000009a13e017] usable Jul 10 00:20:54.947643 kernel: reserve setup_data: [mem 0x000000009a13e018-0x000000009a147c57] usable Jul 10 00:20:54.947652 kernel: reserve setup_data: [mem 0x000000009a147c58-0x000000009b8ecfff] usable Jul 10 00:20:54.947662 kernel: reserve setup_data: [mem 
0x000000009b8ed000-0x000000009bb6cfff] reserved Jul 10 00:20:54.947671 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Jul 10 00:20:54.947680 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Jul 10 00:20:54.947692 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Jul 10 00:20:54.947701 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Jul 10 00:20:54.947713 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Jul 10 00:20:54.947725 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable Jul 10 00:20:54.947739 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved Jul 10 00:20:54.947749 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jul 10 00:20:54.947758 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 10 00:20:54.947768 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 10 00:20:54.947783 kernel: efi: EFI v2.7 by EDK II Jul 10 00:20:54.947796 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018 Jul 10 00:20:54.947806 kernel: random: crng init done Jul 10 00:20:54.947815 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Jul 10 00:20:54.947825 kernel: secureboot: Secure boot enabled Jul 10 00:20:54.947835 kernel: SMBIOS 2.8 present. Jul 10 00:20:54.947847 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jul 10 00:20:54.947857 kernel: DMI: Memory slots populated: 1/1 Jul 10 00:20:54.947866 kernel: Hypervisor detected: KVM Jul 10 00:20:54.947876 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 10 00:20:54.947886 kernel: kvm-clock: using sched offset of 8040375192 cycles Jul 10 00:20:54.947901 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 10 00:20:54.947912 kernel: tsc: Detected 2794.748 MHz processor Jul 10 00:20:54.947922 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 10 00:20:54.947932 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 10 00:20:54.947943 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Jul 10 00:20:54.947953 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jul 10 00:20:54.947978 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 10 00:20:54.947988 kernel: Using GB pages for direct mapping Jul 10 00:20:54.948003 kernel: ACPI: Early table checksum verification disabled Jul 10 00:20:54.948019 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS ) Jul 10 00:20:54.948029 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jul 10 00:20:54.948039 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:20:54.948050 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:20:54.948062 kernel: ACPI: FACS 0x000000009BBDD000 000040 Jul 10 00:20:54.948072 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:20:54.948082 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:20:54.948098 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:20:54.948108 kernel: ACPI: WAET 0x000000009BB75000 
000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:20:54.948128 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jul 10 00:20:54.948155 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3] Jul 10 00:20:54.948179 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236] Jul 10 00:20:54.948189 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f] Jul 10 00:20:54.948199 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f] Jul 10 00:20:54.948209 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037] Jul 10 00:20:54.948219 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b] Jul 10 00:20:54.948229 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027] Jul 10 00:20:54.948240 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037] Jul 10 00:20:54.948259 kernel: No NUMA configuration found Jul 10 00:20:54.948272 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff] Jul 10 00:20:54.948297 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff] Jul 10 00:20:54.948307 kernel: Zone ranges: Jul 10 00:20:54.948317 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 10 00:20:54.948327 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff] Jul 10 00:20:54.948337 kernel: Normal empty Jul 10 00:20:54.948346 kernel: Device empty Jul 10 00:20:54.948356 kernel: Movable zone start for each node Jul 10 00:20:54.948372 kernel: Early memory node ranges Jul 10 00:20:54.948382 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff] Jul 10 00:20:54.948392 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff] Jul 10 00:20:54.948402 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff] Jul 10 00:20:54.948412 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff] Jul 10 00:20:54.948422 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff] Jul 10 00:20:54.948432 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff] Jul 10 00:20:54.948442 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 10 00:20:54.948452 kernel: On node 0, zone DMA: 32 pages in unavailable ranges Jul 10 00:20:54.948465 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 10 00:20:54.948475 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jul 10 00:20:54.948485 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jul 10 00:20:54.948495 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges Jul 10 00:20:54.948505 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 10 00:20:54.948515 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 10 00:20:54.948536 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 10 00:20:54.948557 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 10 00:20:54.948568 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 10 00:20:54.948587 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 10 00:20:54.948597 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 10 00:20:54.948607 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 10 00:20:54.948617 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 10 00:20:54.948627 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 10 00:20:54.948637 kernel: TSC deadline timer available Jul 10 00:20:54.948647 kernel: CPU topo: Max. 
logical packages: 1 Jul 10 00:20:54.948657 kernel: CPU topo: Max. logical dies: 1 Jul 10 00:20:54.948667 kernel: CPU topo: Max. dies per package: 1 Jul 10 00:20:54.948690 kernel: CPU topo: Max. threads per core: 1 Jul 10 00:20:54.948701 kernel: CPU topo: Num. cores per package: 4 Jul 10 00:20:54.948711 kernel: CPU topo: Num. threads per package: 4 Jul 10 00:20:54.948721 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jul 10 00:20:54.948737 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 10 00:20:54.948748 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 10 00:20:54.948758 kernel: kvm-guest: setup PV sched yield Jul 10 00:20:54.948769 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Jul 10 00:20:54.948784 kernel: Booting paravirtualized kernel on KVM Jul 10 00:20:54.948795 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 10 00:20:54.948806 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jul 10 00:20:54.948816 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jul 10 00:20:54.948827 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jul 10 00:20:54.948837 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 10 00:20:54.948847 kernel: kvm-guest: PV spinlocks enabled Jul 10 00:20:54.948858 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 10 00:20:54.948869 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea Jul 10 00:20:54.948883 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 10 00:20:54.948894 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 10 00:20:54.948904 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 10 00:20:54.948914 kernel: Fallback order for Node 0: 0 Jul 10 00:20:54.948925 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054 Jul 10 00:20:54.948935 kernel: Policy zone: DMA32 Jul 10 00:20:54.948946 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 10 00:20:54.948957 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 10 00:20:54.948982 kernel: ftrace: allocating 40095 entries in 157 pages Jul 10 00:20:54.948993 kernel: ftrace: allocated 157 pages with 5 groups Jul 10 00:20:54.949003 kernel: Dynamic Preempt: voluntary Jul 10 00:20:54.949014 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 10 00:20:54.949025 kernel: rcu: RCU event tracing is enabled. Jul 10 00:20:54.949036 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 10 00:20:54.949046 kernel: Trampoline variant of Tasks RCU enabled. Jul 10 00:20:54.949057 kernel: Rude variant of Tasks RCU enabled. Jul 10 00:20:54.949068 kernel: Tracing variant of Tasks RCU enabled. Jul 10 00:20:54.949078 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 10 00:20:54.949094 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 10 00:20:54.949105 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Jul 10 00:20:54.949115 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 10 00:20:54.949130 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 10 00:20:54.949143 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 10 00:20:54.949156 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 10 00:20:54.949169 kernel: Console: colour dummy device 80x25 Jul 10 00:20:54.949182 kernel: printk: legacy console [ttyS0] enabled Jul 10 00:20:54.949195 kernel: ACPI: Core revision 20240827 Jul 10 00:20:54.949211 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 10 00:20:54.949224 kernel: APIC: Switch to symmetric I/O mode setup Jul 10 00:20:54.949237 kernel: x2apic enabled Jul 10 00:20:54.949250 kernel: APIC: Switched APIC routing to: physical x2apic Jul 10 00:20:54.949262 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jul 10 00:20:54.949295 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jul 10 00:20:54.949306 kernel: kvm-guest: setup PV IPIs Jul 10 00:20:54.949316 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 10 00:20:54.949327 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Jul 10 00:20:54.949341 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Jul 10 00:20:54.949351 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 10 00:20:54.949361 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 10 00:20:54.949372 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 10 00:20:54.949385 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 10 00:20:54.949396 kernel: Spectre V2 : Mitigation: Retpolines Jul 10 00:20:54.949406 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 10 00:20:54.949417 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 10 00:20:54.949430 kernel: RETBleed: Mitigation: untrained return thunk Jul 10 00:20:54.949440 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 10 00:20:54.949451 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 10 00:20:54.949461 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jul 10 00:20:54.949472 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jul 10 00:20:54.949483 kernel: x86/bugs: return thunk changed Jul 10 00:20:54.949493 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jul 10 00:20:54.949503 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 10 00:20:54.949514 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 10 00:20:54.949529 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 10 00:20:54.949540 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 10 00:20:54.949550 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Jul 10 00:20:54.949560 kernel: Freeing SMP alternatives memory: 32K Jul 10 00:20:54.949571 kernel: pid_max: default: 32768 minimum: 301 Jul 10 00:20:54.949581 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 10 00:20:54.949591 kernel: landlock: Up and running. Jul 10 00:20:54.949601 kernel: SELinux: Initializing. Jul 10 00:20:54.949612 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 10 00:20:54.949628 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 10 00:20:54.949638 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 10 00:20:54.949649 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 10 00:20:54.949659 kernel: ... version: 0 Jul 10 00:20:54.949669 kernel: ... bit width: 48 Jul 10 00:20:54.949682 kernel: ... generic registers: 6 Jul 10 00:20:54.949693 kernel: ... value mask: 0000ffffffffffff Jul 10 00:20:54.949703 kernel: ... max period: 00007fffffffffff Jul 10 00:20:54.949713 kernel: ... fixed-purpose events: 0 Jul 10 00:20:54.949726 kernel: ... event mask: 000000000000003f Jul 10 00:20:54.949736 kernel: signal: max sigframe size: 1776 Jul 10 00:20:54.949747 kernel: rcu: Hierarchical SRCU implementation. Jul 10 00:20:54.949757 kernel: rcu: Max phase no-delay instances is 400. Jul 10 00:20:54.949768 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 10 00:20:54.949778 kernel: smp: Bringing up secondary CPUs ... Jul 10 00:20:54.949788 kernel: smpboot: x86: Booting SMP configuration: Jul 10 00:20:54.949798 kernel: .... node #0, CPUs: #1 #2 #3 Jul 10 00:20:54.949809 kernel: smp: Brought up 1 node, 4 CPUs Jul 10 00:20:54.949822 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jul 10 00:20:54.949833 kernel: Memory: 2409212K/2552216K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54420K init, 2548K bss, 137068K reserved, 0K cma-reserved) Jul 10 00:20:54.949843 kernel: devtmpfs: initialized Jul 10 00:20:54.949854 kernel: x86/mm: Memory block size: 128MB Jul 10 00:20:54.949864 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes) Jul 10 00:20:54.949875 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes) Jul 10 00:20:54.949885 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 10 00:20:54.949896 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 10 00:20:54.949906 kernel: pinctrl core: initialized pinctrl subsystem Jul 10 00:20:54.949919 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 10 00:20:54.949929 kernel: audit: initializing netlink subsys (disabled) Jul 10 00:20:54.949940 kernel: audit: type=2000 audit(1752106850.988:1): state=initialized audit_enabled=0 res=1 Jul 10 00:20:54.949972 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 10 00:20:54.949984 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 10 00:20:54.950006 kernel: cpuidle: using governor menu Jul 10 00:20:54.950027 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 10 00:20:54.950038 kernel: dca service started, version 1.12.1 Jul 10 00:20:54.950053 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jul 10 00:20:54.950063 kernel: PCI: Using configuration type 1 for base access Jul 10 00:20:54.950074 kernel: kprobes: kprobe jump-optimization 
is enabled. All kprobes are optimized if possible. Jul 10 00:20:54.950084 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 10 00:20:54.950101 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 10 00:20:54.950111 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 10 00:20:54.950122 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 10 00:20:54.950133 kernel: ACPI: Added _OSI(Module Device) Jul 10 00:20:54.950143 kernel: ACPI: Added _OSI(Processor Device) Jul 10 00:20:54.950157 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 10 00:20:54.950167 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 10 00:20:54.950177 kernel: ACPI: Interpreter enabled Jul 10 00:20:54.950188 kernel: ACPI: PM: (supports S0 S5) Jul 10 00:20:54.950198 kernel: ACPI: Using IOAPIC for interrupt routing Jul 10 00:20:54.950209 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 10 00:20:54.950220 kernel: PCI: Using E820 reservations for host bridge windows Jul 10 00:20:54.950233 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 10 00:20:54.950246 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 10 00:20:54.950664 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 10 00:20:54.950814 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jul 10 00:20:54.950956 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jul 10 00:20:54.950981 kernel: PCI host bridge to bus 0000:00 Jul 10 00:20:54.951183 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 10 00:20:54.951345 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 10 00:20:54.951508 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 10 00:20:54.951641 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Jul 10 00:20:54.951766 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jul 10 00:20:54.951892 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Jul 10 00:20:54.952033 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 10 00:20:54.952256 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jul 10 00:20:54.952459 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jul 10 00:20:54.952607 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Jul 10 00:20:54.952748 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Jul 10 00:20:54.952885 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Jul 10 00:20:54.953040 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 10 00:20:54.953210 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jul 10 00:20:54.953382 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Jul 10 00:20:54.953536 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Jul 10 00:20:54.953726 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Jul 10 00:20:54.953889 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jul 10 00:20:54.954051 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Jul 10 00:20:54.954199 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Jul 10 00:20:54.954381 
kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Jul 10 00:20:54.954549 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jul 10 00:20:54.954716 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Jul 10 00:20:54.954859 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Jul 10 00:20:54.955012 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Jul 10 00:20:54.955154 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Jul 10 00:20:54.955346 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jul 10 00:20:54.955492 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 10 00:20:54.955653 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jul 10 00:20:54.955801 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Jul 10 00:20:54.955940 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Jul 10 00:20:54.956140 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jul 10 00:20:54.956328 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Jul 10 00:20:54.956347 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 10 00:20:54.956360 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 10 00:20:54.956373 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 10 00:20:54.956387 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 10 00:20:54.956410 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 10 00:20:54.956423 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 10 00:20:54.956436 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 10 00:20:54.956448 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 10 00:20:54.956458 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jul 10 00:20:54.956469 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jul 10 00:20:54.956479 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jul 10 00:20:54.956490 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jul 10 00:20:54.956500 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 10 00:20:54.956516 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 10 00:20:54.956527 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 10 00:20:54.956537 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 10 00:20:54.956548 kernel: iommu: Default domain type: Translated Jul 10 00:20:54.956558 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 10 00:20:54.956569 kernel: efivars: Registered efivars operations Jul 10 00:20:54.956579 kernel: PCI: Using ACPI for IRQ routing Jul 10 00:20:54.956589 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 10 00:20:54.956600 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff] Jul 10 00:20:54.956616 kernel: e820: reserve RAM buffer [mem 0x9a101018-0x9bffffff] Jul 10 00:20:54.956626 kernel: e820: reserve RAM buffer [mem 0x9a13e018-0x9bffffff] Jul 10 00:20:54.956636 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff] Jul 10 00:20:54.956647 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff] Jul 10 00:20:54.956790 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 10 00:20:54.956930 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 10 00:20:54.957086 kernel: pci 0000:00:01.0: vgaarb: VGA 
device added: decodes=io+mem,owns=io+mem,locks=none Jul 10 00:20:54.957101 kernel: vgaarb: loaded Jul 10 00:20:54.957117 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 10 00:20:54.957127 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 10 00:20:54.957138 kernel: clocksource: Switched to clocksource kvm-clock Jul 10 00:20:54.957148 kernel: VFS: Disk quotas dquot_6.6.0 Jul 10 00:20:54.957159 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 10 00:20:54.957169 kernel: pnp: PnP ACPI init Jul 10 00:20:54.957458 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Jul 10 00:20:54.957475 kernel: pnp: PnP ACPI: found 6 devices Jul 10 00:20:54.957486 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 10 00:20:54.957501 kernel: NET: Registered PF_INET protocol family Jul 10 00:20:54.957512 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 10 00:20:54.957523 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 10 00:20:54.957534 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 10 00:20:54.957544 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 10 00:20:54.957555 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 10 00:20:54.957565 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 10 00:20:54.957575 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 10 00:20:54.957588 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 10 00:20:54.957599 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 10 00:20:54.957610 kernel: NET: Registered PF_XDP protocol family Jul 10 00:20:54.957754 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Jul 10 00:20:54.957898 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Jul 10 00:20:54.958044 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 10 00:20:54.958182 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 10 00:20:54.958390 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 10 00:20:54.958527 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Jul 10 00:20:54.958653 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Jul 10 00:20:54.958778 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Jul 10 00:20:54.958792 kernel: PCI: CLS 0 bytes, default 64 Jul 10 00:20:54.958803 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Jul 10 00:20:54.958814 kernel: Initialise system trusted keyrings Jul 10 00:20:54.958824 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 10 00:20:54.958835 kernel: Key type asymmetric registered Jul 10 00:20:54.958845 kernel: Asymmetric key parser 'x509' registered Jul 10 00:20:54.958860 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 10 00:20:54.958893 kernel: io scheduler mq-deadline registered Jul 10 00:20:54.958907 kernel: io scheduler kyber registered Jul 10 00:20:54.958918 kernel: io scheduler bfq registered Jul 10 00:20:54.958929 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 10 00:20:54.958941 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 10 00:20:54.958953 
kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 10 00:20:54.958974 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 10 00:20:54.958985 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 10 00:20:54.959000 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 10 00:20:54.959011 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 10 00:20:54.959021 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 10 00:20:54.959032 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 10 00:20:54.959043 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 10 00:20:54.959210 kernel: rtc_cmos 00:04: RTC can wake from S4 Jul 10 00:20:54.959469 kernel: rtc_cmos 00:04: registered as rtc0 Jul 10 00:20:54.959604 kernel: rtc_cmos 00:04: setting system clock to 2025-07-10T00:20:54 UTC (1752106854) Jul 10 00:20:54.959741 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jul 10 00:20:54.959755 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jul 10 00:20:54.959767 kernel: efifb: probing for efifb Jul 10 00:20:54.959778 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Jul 10 00:20:54.959789 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jul 10 00:20:54.959799 kernel: efifb: scrolling: redraw Jul 10 00:20:54.959811 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 10 00:20:54.959822 kernel: Console: switching to colour frame buffer device 160x50 Jul 10 00:20:54.959833 kernel: fb0: EFI VGA frame buffer device Jul 10 00:20:54.959847 kernel: pstore: Using crash dump compression: deflate Jul 10 00:20:54.959858 kernel: pstore: Registered efi_pstore as persistent store backend Jul 10 00:20:54.959872 kernel: NET: Registered PF_INET6 protocol family Jul 10 00:20:54.959883 kernel: Segment Routing with IPv6 Jul 10 00:20:54.959894 kernel: In-situ OAM (IOAM) with IPv6 Jul 10 00:20:54.959905 kernel: NET: Registered PF_PACKET protocol family Jul 10 00:20:54.959918 kernel: Key type dns_resolver registered Jul 10 00:20:54.959929 kernel: IPI shorthand broadcast: enabled Jul 10 00:20:54.959940 kernel: sched_clock: Marking stable (4434004556, 186534320)->(4672958265, -52419389) Jul 10 00:20:54.959952 kernel: registered taskstats version 1 Jul 10 00:20:54.959973 kernel: Loading compiled-in X.509 certificates Jul 10 00:20:54.959984 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: f515550de55d4e43b2ea11ae212aa0cb3a4e55cf' Jul 10 00:20:54.959995 kernel: Demotion targets for Node 0: null Jul 10 00:20:54.960009 kernel: Key type .fscrypt registered Jul 10 00:20:54.960019 kernel: Key type fscrypt-provisioning registered Jul 10 00:20:54.960033 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 10 00:20:54.960044 kernel: ima: Allocated hash algorithm: sha1 Jul 10 00:20:54.960055 kernel: ima: No architecture policies found Jul 10 00:20:54.960066 kernel: clk: Disabling unused clocks Jul 10 00:20:54.960077 kernel: Warning: unable to open an initial console. 
Jul 10 00:20:54.960088 kernel: Freeing unused kernel image (initmem) memory: 54420K Jul 10 00:20:54.960099 kernel: Write protecting the kernel read-only data: 24576k Jul 10 00:20:54.960110 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 10 00:20:54.960124 kernel: Run /init as init process Jul 10 00:20:54.960135 kernel: with arguments: Jul 10 00:20:54.960145 kernel: /init Jul 10 00:20:54.960156 kernel: with environment: Jul 10 00:20:54.960167 kernel: HOME=/ Jul 10 00:20:54.960178 kernel: TERM=linux Jul 10 00:20:54.960189 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 10 00:20:54.960200 systemd[1]: Successfully made /usr/ read-only. Jul 10 00:20:54.960218 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 10 00:20:54.960230 systemd[1]: Detected virtualization kvm. Jul 10 00:20:54.960243 systemd[1]: Detected architecture x86-64. Jul 10 00:20:54.960256 systemd[1]: Running in initrd. Jul 10 00:20:54.960267 systemd[1]: No hostname configured, using default hostname. Jul 10 00:20:54.960300 systemd[1]: Hostname set to . Jul 10 00:20:54.960311 systemd[1]: Initializing machine ID from VM UUID. Jul 10 00:20:54.960323 systemd[1]: Queued start job for default target initrd.target. Jul 10 00:20:54.960338 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 00:20:54.960350 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 00:20:54.960362 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 10 00:20:54.960374 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 10 00:20:54.960385 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 10 00:20:54.960398 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 10 00:20:54.960411 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 10 00:20:54.960426 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 10 00:20:54.960437 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 00:20:54.960449 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 00:20:54.960460 systemd[1]: Reached target paths.target - Path Units. Jul 10 00:20:54.960471 systemd[1]: Reached target slices.target - Slice Units. Jul 10 00:20:54.960483 systemd[1]: Reached target swap.target - Swaps. Jul 10 00:20:54.960494 systemd[1]: Reached target timers.target - Timer Units. Jul 10 00:20:54.960506 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 00:20:54.960521 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 00:20:54.960532 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 10 00:20:54.960543 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 10 00:20:54.960555 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jul 10 00:20:54.960567 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 10 00:20:54.960578 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 00:20:54.960589 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 00:20:54.960601 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 10 00:20:54.960612 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 00:20:54.960627 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 10 00:20:54.960639 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 10 00:20:54.960651 systemd[1]: Starting systemd-fsck-usr.service... Jul 10 00:20:54.960662 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 00:20:54.960674 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 00:20:54.960686 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:20:54.960697 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 10 00:20:54.960712 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 00:20:54.960724 systemd[1]: Finished systemd-fsck-usr.service. Jul 10 00:20:54.960783 systemd-journald[220]: Collecting audit messages is disabled. Jul 10 00:20:54.960815 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 10 00:20:54.960827 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:20:54.960839 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 10 00:20:54.960851 systemd-journald[220]: Journal started Jul 10 00:20:54.960879 systemd-journald[220]: Runtime Journal (/run/log/journal/2226b80a086a436782cc4c50382c3cfb) is 6M, max 48.2M, 42.2M free. Jul 10 00:20:54.948657 systemd-modules-load[223]: Inserted module 'overlay' Jul 10 00:20:54.965562 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 00:20:54.967972 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 00:20:54.979318 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 10 00:20:54.981303 kernel: Bridge firewalling registered Jul 10 00:20:54.981301 systemd-modules-load[223]: Inserted module 'br_netfilter' Jul 10 00:20:54.985231 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 00:20:54.986596 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 00:20:54.987225 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 00:20:54.989198 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:20:55.013744 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 00:20:55.017081 systemd-tmpfiles[240]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 10 00:20:55.018338 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 10 00:20:55.022388 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 10 00:20:55.034470 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 00:20:55.035265 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:20:55.041135 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 00:20:55.050687 dracut-cmdline[256]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea Jul 10 00:20:55.118345 systemd-resolved[265]: Positive Trust Anchors: Jul 10 00:20:55.118380 systemd-resolved[265]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:20:55.118422 systemd-resolved[265]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 00:20:55.122122 systemd-resolved[265]: Defaulting to hostname 'linux'. Jul 10 00:20:55.123843 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 00:20:55.127409 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 00:20:55.202345 kernel: SCSI subsystem initialized Jul 10 00:20:55.212322 kernel: Loading iSCSI transport class v2.0-870. Jul 10 00:20:55.224319 kernel: iscsi: registered transport (tcp) Jul 10 00:20:55.254308 kernel: iscsi: registered transport (qla4xxx) Jul 10 00:20:55.254347 kernel: QLogic iSCSI HBA Driver Jul 10 00:20:55.279807 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 10 00:20:55.297100 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 00:20:55.299457 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 00:20:55.365553 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 10 00:20:55.367740 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 10 00:20:55.432336 kernel: raid6: avx2x4 gen() 27531 MB/s Jul 10 00:20:55.449322 kernel: raid6: avx2x2 gen() 28863 MB/s Jul 10 00:20:55.466411 kernel: raid6: avx2x1 gen() 23022 MB/s Jul 10 00:20:55.466485 kernel: raid6: using algorithm avx2x2 gen() 28863 MB/s Jul 10 00:20:55.484420 kernel: raid6: .... xor() 18905 MB/s, rmw enabled Jul 10 00:20:55.484459 kernel: raid6: using avx2x2 recovery algorithm Jul 10 00:20:55.510488 kernel: xor: automatically using best checksumming function avx Jul 10 00:20:55.684372 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 10 00:20:55.696820 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 10 00:20:55.699690 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:20:55.739860 systemd-udevd[472]: Using default interface naming scheme 'v255'. 
Jul 10 00:20:55.747000 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:20:55.752102 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 10 00:20:55.790751 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation Jul 10 00:20:55.826251 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 00:20:55.828453 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 00:20:55.913366 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 00:20:55.915456 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 10 00:20:55.956353 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 10 00:20:55.959486 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 10 00:20:55.971026 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 10 00:20:55.971053 kernel: GPT:9289727 != 19775487 Jul 10 00:20:55.971064 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 10 00:20:55.971074 kernel: GPT:9289727 != 19775487 Jul 10 00:20:55.971084 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 10 00:20:55.971094 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:20:55.974311 kernel: cryptd: max_cpu_qlen set to 1000 Jul 10 00:20:55.983356 kernel: libata version 3.00 loaded. Jul 10 00:20:56.002334 kernel: ahci 0000:00:1f.2: version 3.0 Jul 10 00:20:56.002653 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 10 00:20:55.998503 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:20:55.998684 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:20:56.037576 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:20:56.056150 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jul 10 00:20:56.056373 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jul 10 00:20:56.060521 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 10 00:20:56.065638 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jul 10 00:20:56.065652 kernel: AES CTR mode by8 optimization enabled Jul 10 00:20:56.058011 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:20:56.062895 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 10 00:20:56.080320 kernel: scsi host0: ahci Jul 10 00:20:56.084330 kernel: scsi host1: ahci Jul 10 00:20:56.087754 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Jul 10 00:20:56.091208 kernel: scsi host2: ahci Jul 10 00:20:56.091571 kernel: scsi host3: ahci Jul 10 00:20:56.091727 kernel: scsi host4: ahci Jul 10 00:20:56.091874 kernel: scsi host5: ahci Jul 10 00:20:56.092042 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 Jul 10 00:20:56.097161 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 Jul 10 00:20:56.097190 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 Jul 10 00:20:56.097204 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 Jul 10 00:20:56.097217 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 Jul 10 00:20:56.099999 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 Jul 10 00:20:56.111310 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 10 00:20:56.136033 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 10 00:20:56.137508 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 10 00:20:56.150379 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 10 00:20:56.153670 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 10 00:20:56.156060 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:20:56.156203 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:20:56.159573 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:20:56.163405 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:20:56.164963 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 10 00:20:56.207671 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:20:56.375447 disk-uuid[627]: Primary Header is updated. Jul 10 00:20:56.375447 disk-uuid[627]: Secondary Entries is updated. Jul 10 00:20:56.375447 disk-uuid[627]: Secondary Header is updated. 
Jul 10 00:20:56.381318 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:20:56.387325 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:20:56.408778 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 10 00:20:56.408856 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 10 00:20:56.414087 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 10 00:20:56.414154 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 10 00:20:56.414176 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 10 00:20:56.414190 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 10 00:20:56.415303 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 10 00:20:56.416844 kernel: ata3.00: applying bridge limits Jul 10 00:20:56.417685 kernel: ata3.00: configured for UDMA/100 Jul 10 00:20:56.420408 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 10 00:20:56.470796 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 10 00:20:56.471091 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 10 00:20:56.481314 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 10 00:20:56.861315 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 10 00:20:57.011221 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 00:20:57.013034 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 00:20:57.013636 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 00:20:57.015191 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 10 00:20:57.054974 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 10 00:20:57.387340 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:20:57.388183 disk-uuid[632]: The operation has completed successfully. Jul 10 00:20:57.425334 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 10 00:20:57.425484 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 10 00:20:57.460346 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 10 00:20:57.489371 sh[668]: Success Jul 10 00:20:57.511242 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 10 00:20:57.511312 kernel: device-mapper: uevent: version 1.0.3 Jul 10 00:20:57.511341 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 10 00:20:57.521300 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 10 00:20:57.560695 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 10 00:20:57.564360 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 10 00:20:57.589192 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 10 00:20:57.596513 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 10 00:20:57.596563 kernel: BTRFS: device fsid c4cb30b0-bb74-4f98-aab6-7a1c6f47edee devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (680) Jul 10 00:20:57.598009 kernel: BTRFS info (device dm-0): first mount of filesystem c4cb30b0-bb74-4f98-aab6-7a1c6f47edee Jul 10 00:20:57.598037 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:20:57.599503 kernel: BTRFS info (device dm-0): using free-space-tree Jul 10 00:20:57.605031 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 10 00:20:57.606135 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 10 00:20:57.607295 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 10 00:20:57.608366 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 10 00:20:57.610477 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 10 00:20:57.649048 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (713) Jul 10 00:20:57.649103 kernel: BTRFS info (device vda6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:20:57.649121 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:20:57.650832 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 00:20:57.660315 kernel: BTRFS info (device vda6): last unmount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:20:57.660742 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 10 00:20:57.664813 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 10 00:20:57.797440 ignition[759]: Ignition 2.21.0 Jul 10 00:20:57.797454 ignition[759]: Stage: fetch-offline Jul 10 00:20:57.797505 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 00:20:57.797493 ignition[759]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:20:57.797504 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:20:57.797604 ignition[759]: parsed url from cmdline: "" Jul 10 00:20:57.797610 ignition[759]: no config URL provided Jul 10 00:20:57.797617 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 00:20:57.797628 ignition[759]: no config at "/usr/lib/ignition/user.ign" Jul 10 00:20:57.797656 ignition[759]: op(1): [started] loading QEMU firmware config module Jul 10 00:20:57.797663 ignition[759]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 10 00:20:57.808268 ignition[759]: op(1): [finished] loading QEMU firmware config module Jul 10 00:20:57.812102 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 10 00:20:57.850537 ignition[759]: parsing config with SHA512: 3600d3acf4dee5d96b50c98b47ae59f6d20b0e30a00a08226bb99045abf54f6adf7d7a9b9c08e7585935d7d44c95894bda73d5ac690c6b3dcc86d19cb5d354ba Jul 10 00:20:57.857398 unknown[759]: fetched base config from "system" Jul 10 00:20:57.858523 unknown[759]: fetched user config from "qemu" Jul 10 00:20:57.859684 ignition[759]: fetch-offline: fetch-offline passed Jul 10 00:20:57.860666 ignition[759]: Ignition finished successfully Jul 10 00:20:57.864561 systemd-networkd[859]: lo: Link UP Jul 10 00:20:57.864569 systemd-networkd[859]: lo: Gained carrier Jul 10 00:20:57.864993 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 00:20:57.867072 systemd-networkd[859]: Enumeration completed Jul 10 00:20:57.867885 systemd-networkd[859]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:20:57.867906 systemd-networkd[859]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:20:57.869519 systemd-networkd[859]: eth0: Link UP Jul 10 00:20:57.869526 systemd-networkd[859]: eth0: Gained carrier Jul 10 00:20:57.869550 systemd-networkd[859]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:20:57.869608 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 00:20:57.872238 systemd[1]: Reached target network.target - Network. Jul 10 00:20:57.875941 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 10 00:20:57.877246 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 10 00:20:57.878348 systemd-networkd[859]: eth0: DHCPv4 address 10.0.0.84/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 00:20:57.927659 ignition[863]: Ignition 2.21.0 Jul 10 00:20:57.927676 ignition[863]: Stage: kargs Jul 10 00:20:57.927883 ignition[863]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:20:57.927905 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:20:57.930321 ignition[863]: kargs: kargs passed Jul 10 00:20:57.930446 ignition[863]: Ignition finished successfully Jul 10 00:20:57.936911 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 10 00:20:57.940596 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 10 00:20:57.992714 ignition[872]: Ignition 2.21.0 Jul 10 00:20:57.992729 ignition[872]: Stage: disks Jul 10 00:20:57.992919 ignition[872]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:20:57.992931 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:20:57.996924 ignition[872]: disks: disks passed Jul 10 00:20:57.996988 ignition[872]: Ignition finished successfully Jul 10 00:20:58.000855 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 10 00:20:58.003227 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 10 00:20:58.005398 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 10 00:20:58.005654 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 00:20:58.005992 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 00:20:58.006483 systemd[1]: Reached target basic.target - Basic System. 
Jul 10 00:20:58.013152 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 10 00:20:58.049805 systemd-resolved[265]: Detected conflict on linux IN A 10.0.0.84 Jul 10 00:20:58.049825 systemd-resolved[265]: Hostname conflict, changing published hostname from 'linux' to 'linux8'. Jul 10 00:20:58.051672 systemd-fsck[882]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 10 00:20:58.400108 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 10 00:20:58.402838 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 10 00:20:58.517306 kernel: EXT4-fs (vda9): mounted filesystem a310c019-7915-47f5-9fce-db4a09ac26c2 r/w with ordered data mode. Quota mode: none. Jul 10 00:20:58.517763 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 10 00:20:58.519469 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 10 00:20:58.522447 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 00:20:58.524229 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 10 00:20:58.525045 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 10 00:20:58.525107 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 10 00:20:58.525142 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 00:20:58.544215 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 10 00:20:58.547993 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 10 00:20:58.551628 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (890) Jul 10 00:20:58.556601 kernel: BTRFS info (device vda6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:20:58.556643 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:20:58.556659 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 00:20:58.563110 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 10 00:20:58.592948 initrd-setup-root[914]: cut: /sysroot/etc/passwd: No such file or directory Jul 10 00:20:58.599199 initrd-setup-root[921]: cut: /sysroot/etc/group: No such file or directory Jul 10 00:20:58.606111 initrd-setup-root[928]: cut: /sysroot/etc/shadow: No such file or directory Jul 10 00:20:58.610623 initrd-setup-root[935]: cut: /sysroot/etc/gshadow: No such file or directory Jul 10 00:20:58.718593 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 10 00:20:58.721754 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 10 00:20:58.723716 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 10 00:20:58.749225 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 10 00:20:58.750520 kernel: BTRFS info (device vda6): last unmount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:20:58.765074 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
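The systemd-fsck line above ("ROOT: clean, 15/553520 files, 52789/553472 blocks") follows e2fsck's usual summary format. The helper below is a small illustrative parser for such lines, not anything systemd ships; it simply extracts the inode and block counters and derives a usage percentage.

    # Sketch: parse an fsck "clean" summary line like the one logged above.
    import re

    FSCK_RE = re.compile(
        r"(?P<label>\S+): clean, (?P<inodes_used>\d+)/(?P<inodes_total>\d+) files, "
        r"(?P<blocks_used>\d+)/(?P<blocks_total>\d+) blocks"
    )

    def parse_fsck_summary(line: str) -> dict:
        m = FSCK_RE.search(line)
        if not m:
            raise ValueError("not an fsck 'clean' summary line")
        info = {k: (v if k == "label" else int(v)) for k, v in m.groupdict().items()}
        info["block_usage_pct"] = round(100 * info["blocks_used"] / info["blocks_total"], 2)
        return info

    print(parse_fsck_summary("ROOT: clean, 15/553520 files, 52789/553472 blocks"))
    # -> label ROOT, 15 of 553520 inodes used, roughly 9.54% of blocks used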
Jul 10 00:20:58.814659 ignition[1004]: INFO : Ignition 2.21.0 Jul 10 00:20:58.814659 ignition[1004]: INFO : Stage: mount Jul 10 00:20:58.817309 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:20:58.817309 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:20:58.817309 ignition[1004]: INFO : mount: mount passed Jul 10 00:20:58.817309 ignition[1004]: INFO : Ignition finished successfully Jul 10 00:20:58.819476 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 10 00:20:58.822432 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 10 00:20:58.851370 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 00:20:58.880419 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1018) Jul 10 00:20:58.880505 kernel: BTRFS info (device vda6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:20:58.880517 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:20:58.881490 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 00:20:58.886181 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 10 00:20:58.923215 ignition[1035]: INFO : Ignition 2.21.0 Jul 10 00:20:58.923215 ignition[1035]: INFO : Stage: files Jul 10 00:20:58.924979 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:20:58.924979 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:20:58.927291 ignition[1035]: DEBUG : files: compiled without relabeling support, skipping Jul 10 00:20:58.928403 ignition[1035]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 10 00:20:58.928403 ignition[1035]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 10 00:20:58.931217 ignition[1035]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 10 00:20:58.931217 ignition[1035]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 10 00:20:58.934181 ignition[1035]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 10 00:20:58.934181 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 10 00:20:58.934181 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 10 00:20:58.931219 unknown[1035]: wrote ssh authorized keys file for user: core Jul 10 00:20:59.027956 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 10 00:20:59.228331 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 10 00:20:59.228331 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 10 00:20:59.232312 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 10 00:20:59.232312 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 10 00:20:59.232312 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 10 00:20:59.232312 ignition[1035]: INFO 
: files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 00:20:59.238879 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 00:20:59.238879 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 00:20:59.242246 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 00:20:59.248516 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 00:20:59.250586 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 00:20:59.250586 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 10 00:20:59.255659 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 10 00:20:59.255659 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 10 00:20:59.260896 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 10 00:20:59.865608 systemd-networkd[859]: eth0: Gained IPv6LL Jul 10 00:21:00.177611 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 10 00:21:00.744496 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 10 00:21:00.744496 ignition[1035]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 10 00:21:00.748444 ignition[1035]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 00:21:00.754868 ignition[1035]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 00:21:00.754868 ignition[1035]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 10 00:21:00.754868 ignition[1035]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 10 00:21:00.754868 ignition[1035]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 10 00:21:00.761546 ignition[1035]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 10 00:21:00.761546 ignition[1035]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 10 00:21:00.761546 ignition[1035]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 10 00:21:00.783174 ignition[1035]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 10 00:21:00.788295 ignition[1035]: INFO : files: op(f): op(10): [finished] removing 
enablement symlink(s) for "coreos-metadata.service" Jul 10 00:21:00.789936 ignition[1035]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 10 00:21:00.789936 ignition[1035]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 10 00:21:00.789936 ignition[1035]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 10 00:21:00.789936 ignition[1035]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 10 00:21:00.789936 ignition[1035]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 10 00:21:00.789936 ignition[1035]: INFO : files: files passed Jul 10 00:21:00.789936 ignition[1035]: INFO : Ignition finished successfully Jul 10 00:21:00.796716 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 10 00:21:00.801035 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 10 00:21:00.803907 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 10 00:21:00.820478 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 10 00:21:00.820597 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 10 00:21:00.825591 initrd-setup-root-after-ignition[1063]: grep: /sysroot/oem/oem-release: No such file or directory Jul 10 00:21:00.830627 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 00:21:00.830627 initrd-setup-root-after-ignition[1066]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 10 00:21:00.834490 initrd-setup-root-after-ignition[1070]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 00:21:00.837572 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 10 00:21:00.841211 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 10 00:21:00.844953 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 10 00:21:00.905840 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 10 00:21:00.905972 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 10 00:21:00.908537 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 10 00:21:00.908976 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 10 00:21:00.909589 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 10 00:21:00.915897 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 10 00:21:00.935066 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 10 00:21:00.941778 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 10 00:21:00.976492 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 10 00:21:00.979126 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 00:21:00.979764 systemd[1]: Stopped target timers.target - Timer Units. Jul 10 00:21:00.980129 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 10 00:21:00.980329 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
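Among the files-stage operations above is op(9), which writes the symlink /sysroot/etc/extensions/kubernetes.raw pointing at /opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw. The snippet below is a rough, stand-alone illustration of that kind of link creation under a mounted sysroot; Ignition performs this itself, and the helper name here is invented.

    # Sketch: create a symlink inside a mounted sysroot, as the files stage
    # logs above ("writing link ..."). Illustrative only.
    import os

    def write_link_in_sysroot(sysroot: str, link_path: str, target: str) -> None:
        """Create link_path (relative to sysroot) pointing at target, replacing any existing entry."""
        full = os.path.join(sysroot, link_path.lstrip("/"))
        os.makedirs(os.path.dirname(full), exist_ok=True)
        if os.path.islink(full) or os.path.exists(full):
            os.remove(full)
        os.symlink(target, full)

    # Mirrors the logged operation:
    # write_link_in_sysroot("/sysroot", "/etc/extensions/kubernetes.raw",
    #                       "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw")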
Jul 10 00:21:00.985691 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 10 00:21:00.986053 systemd[1]: Stopped target basic.target - Basic System. Jul 10 00:21:00.986399 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 10 00:21:00.986890 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 00:21:00.987249 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 10 00:21:00.987793 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 10 00:21:00.988164 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 10 00:21:00.988808 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 00:21:00.989163 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 10 00:21:00.989642 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 10 00:21:00.989968 systemd[1]: Stopped target swap.target - Swaps. Jul 10 00:21:00.990257 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 10 00:21:00.990425 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 10 00:21:00.991141 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 10 00:21:00.991707 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 00:21:00.991974 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 10 00:21:00.992132 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 00:21:01.014757 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 10 00:21:01.015076 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 10 00:21:01.019130 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 10 00:21:01.019427 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 00:21:01.020082 systemd[1]: Stopped target paths.target - Path Units. Jul 10 00:21:01.022941 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 10 00:21:01.026622 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 00:21:01.030463 systemd[1]: Stopped target slices.target - Slice Units. Jul 10 00:21:01.031119 systemd[1]: Stopped target sockets.target - Socket Units. Jul 10 00:21:01.031689 systemd[1]: iscsid.socket: Deactivated successfully. Jul 10 00:21:01.031917 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 00:21:01.034984 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 10 00:21:01.035135 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 00:21:01.037577 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 10 00:21:01.037838 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 10 00:21:01.040535 systemd[1]: ignition-files.service: Deactivated successfully. Jul 10 00:21:01.040733 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 10 00:21:01.045058 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 10 00:21:01.047817 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 10 00:21:01.048798 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 10 00:21:01.049028 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Jul 10 00:21:01.051336 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 10 00:21:01.051615 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 00:21:01.059161 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 10 00:21:01.069684 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 10 00:21:01.100430 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 10 00:21:01.101746 ignition[1090]: INFO : Ignition 2.21.0 Jul 10 00:21:01.101746 ignition[1090]: INFO : Stage: umount Jul 10 00:21:01.101746 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:21:01.101746 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:21:01.106576 ignition[1090]: INFO : umount: umount passed Jul 10 00:21:01.106576 ignition[1090]: INFO : Ignition finished successfully Jul 10 00:21:01.108435 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 10 00:21:01.108609 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 10 00:21:01.110801 systemd[1]: Stopped target network.target - Network. Jul 10 00:21:01.115542 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 10 00:21:01.115693 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 10 00:21:01.118635 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 10 00:21:01.118830 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 10 00:21:01.119980 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 10 00:21:01.120097 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 10 00:21:01.121207 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 10 00:21:01.121287 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 10 00:21:01.122123 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 10 00:21:01.129309 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 10 00:21:01.141351 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 10 00:21:01.141687 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 10 00:21:01.147855 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 10 00:21:01.149102 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 10 00:21:01.149266 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 10 00:21:01.155248 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 10 00:21:01.156433 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 10 00:21:01.157616 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 10 00:21:01.157668 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 10 00:21:01.159079 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 10 00:21:01.162098 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 10 00:21:01.162178 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 00:21:01.162741 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:21:01.162799 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:21:01.168553 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Jul 10 00:21:01.168642 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 10 00:21:01.169586 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 10 00:21:01.169640 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 00:21:01.174618 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:21:01.176606 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 10 00:21:01.176714 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 10 00:21:01.222516 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 10 00:21:01.222681 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 10 00:21:01.225313 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 10 00:21:01.225493 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:21:01.227345 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 10 00:21:01.227435 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 10 00:21:01.228908 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 10 00:21:01.228949 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 00:21:01.229556 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 10 00:21:01.229623 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 10 00:21:01.236067 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 10 00:21:01.236141 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 10 00:21:01.239455 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 10 00:21:01.239534 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 00:21:01.244180 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 10 00:21:01.244733 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 10 00:21:01.244792 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 00:21:01.249402 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 10 00:21:01.249461 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 00:21:01.253158 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 10 00:21:01.253208 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 00:21:01.256861 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 10 00:21:01.256943 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 00:21:01.257773 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:21:01.257830 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:21:01.264129 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 10 00:21:01.264304 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jul 10 00:21:01.264393 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. 
Jul 10 00:21:01.264455 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 10 00:21:01.279309 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 10 00:21:01.279465 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 10 00:21:01.701047 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 10 00:21:01.701215 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 10 00:21:01.702238 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 10 00:21:01.704532 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 10 00:21:01.704608 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 10 00:21:01.730516 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 10 00:21:01.759977 systemd[1]: Switching root. Jul 10 00:21:02.539667 systemd-journald[220]: Journal stopped Jul 10 00:21:04.483509 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). Jul 10 00:21:04.483597 kernel: SELinux: policy capability network_peer_controls=1 Jul 10 00:21:04.483611 kernel: SELinux: policy capability open_perms=1 Jul 10 00:21:04.483623 kernel: SELinux: policy capability extended_socket_class=1 Jul 10 00:21:04.483634 kernel: SELinux: policy capability always_check_network=0 Jul 10 00:21:04.483650 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 10 00:21:04.483662 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 10 00:21:04.483674 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 10 00:21:04.483685 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 10 00:21:04.483697 kernel: SELinux: policy capability userspace_initial_context=0 Jul 10 00:21:04.483708 kernel: audit: type=1403 audit(1752106863.526:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 10 00:21:04.483738 systemd[1]: Successfully loaded SELinux policy in 57ms. Jul 10 00:21:04.483769 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.086ms. Jul 10 00:21:04.483783 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 10 00:21:04.483799 systemd[1]: Detected virtualization kvm. Jul 10 00:21:04.483811 systemd[1]: Detected architecture x86-64. Jul 10 00:21:04.483824 systemd[1]: Detected first boot. Jul 10 00:21:04.483836 systemd[1]: Initializing machine ID from VM UUID. Jul 10 00:21:04.483848 zram_generator::config[1137]: No configuration found. Jul 10 00:21:04.483865 kernel: Guest personality initialized and is inactive Jul 10 00:21:04.483877 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 10 00:21:04.483888 kernel: Initialized host personality Jul 10 00:21:04.483899 kernel: NET: Registered PF_VSOCK protocol family Jul 10 00:21:04.483914 systemd[1]: Populated /etc with preset unit settings. Jul 10 00:21:04.483927 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 10 00:21:04.485200 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 10 00:21:04.485228 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 10 00:21:04.485242 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
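The journal above records systemd detecting KVM virtualization and the x86-64 architecture right after the switch root. As a loose approximation of such a check (systemd's real detection also consults CPUID and other sources), one can peek at the DMI vendor string the kernel exposes:

    # Sketch: very rough virtualization guess from DMI data; not systemd's
    # actual detection logic.
    def guess_virtualization() -> str:
        try:
            with open("/sys/class/dmi/id/sys_vendor") as fh:
                vendor = fh.read().strip().lower()
        except OSError:
            return "unknown"
        if "qemu" in vendor or "kvm" in vendor:
            return "kvm"
        if "vmware" in vendor:
            return "vmware"
        if "microsoft" in vendor:
            return "microsoft"
        return "none-or-unknown"

    print(guess_virtualization())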
Jul 10 00:21:04.485262 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 10 00:21:04.485333 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 10 00:21:04.485348 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 10 00:21:04.485366 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 10 00:21:04.485378 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 10 00:21:04.485396 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 10 00:21:04.485409 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 10 00:21:04.485421 systemd[1]: Created slice user.slice - User and Session Slice. Jul 10 00:21:04.485434 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 00:21:04.485446 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 00:21:04.485459 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 10 00:21:04.485472 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 10 00:21:04.485487 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 10 00:21:04.485500 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 10 00:21:04.485514 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 10 00:21:04.485526 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 00:21:04.485543 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 00:21:04.485556 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 10 00:21:04.485568 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 10 00:21:04.485581 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 10 00:21:04.485596 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 10 00:21:04.485608 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 00:21:04.485621 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 00:21:04.485633 systemd[1]: Reached target slices.target - Slice Units. Jul 10 00:21:04.485645 systemd[1]: Reached target swap.target - Swaps. Jul 10 00:21:04.485658 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 10 00:21:04.485670 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 10 00:21:04.485682 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 10 00:21:04.485694 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 10 00:21:04.485710 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 10 00:21:04.485735 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 00:21:04.485748 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 10 00:21:04.485760 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 10 00:21:04.485773 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Jul 10 00:21:04.485785 systemd[1]: Mounting media.mount - External Media Directory... Jul 10 00:21:04.485799 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:21:04.485812 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 10 00:21:04.485825 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 10 00:21:04.485840 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 10 00:21:04.485853 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 10 00:21:04.485865 systemd[1]: Reached target machines.target - Containers. Jul 10 00:21:04.485878 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 10 00:21:04.485891 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:21:04.485903 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 00:21:04.485916 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 10 00:21:04.485933 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 00:21:04.485953 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 00:21:04.485966 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:21:04.485979 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 10 00:21:04.485991 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:21:04.486031 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 10 00:21:04.486044 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 10 00:21:04.486056 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 10 00:21:04.486068 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 10 00:21:04.486082 systemd[1]: Stopped systemd-fsck-usr.service. Jul 10 00:21:04.486095 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 00:21:04.486108 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 00:21:04.486120 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 00:21:04.486133 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 10 00:21:04.486148 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 10 00:21:04.486163 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 10 00:21:04.486176 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 00:21:04.486188 kernel: loop: module loaded Jul 10 00:21:04.486201 kernel: ACPI: bus type drm_connector registered Jul 10 00:21:04.486213 systemd[1]: verity-setup.service: Deactivated successfully. Jul 10 00:21:04.486230 systemd[1]: Stopped verity-setup.service. 
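The run of modprobe@<module>.service units being started above (configfs, dm_mod, drm, efi_pstore, fuse, loop) each just load one kernel module. A user-space equivalent, assuming the modprobe binary is on PATH and the caller has the needed privileges, is simply:

    # Sketch: load a kernel module the way a modprobe@.service instance would.
    import subprocess

    def load_module(name: str) -> None:
        subprocess.run(["modprobe", name], check=True)

    # for mod in ("configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"):
    #     load_module(mod)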
Jul 10 00:21:04.486244 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:21:04.486256 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 10 00:21:04.486268 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 10 00:21:04.486297 systemd[1]: Mounted media.mount - External Media Directory. Jul 10 00:21:04.486309 kernel: fuse: init (API version 7.41) Jul 10 00:21:04.486328 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 10 00:21:04.486341 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 10 00:21:04.486357 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 10 00:21:04.486369 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 10 00:21:04.486426 systemd-journald[1212]: Collecting audit messages is disabled. Jul 10 00:21:04.486451 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 00:21:04.486463 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 10 00:21:04.486477 systemd-journald[1212]: Journal started Jul 10 00:21:04.486506 systemd-journald[1212]: Runtime Journal (/run/log/journal/2226b80a086a436782cc4c50382c3cfb) is 6M, max 48.2M, 42.2M free. Jul 10 00:21:04.192250 systemd[1]: Queued start job for default target multi-user.target. Jul 10 00:21:04.214142 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 10 00:21:04.214804 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 10 00:21:04.215406 systemd[1]: systemd-journald.service: Consumed 1.586s CPU time. Jul 10 00:21:04.487507 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 10 00:21:04.490397 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 00:21:04.492356 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:21:04.492636 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 00:21:04.494336 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:21:04.494565 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 00:21:04.496006 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:21:04.496242 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:21:04.497908 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 10 00:21:04.498147 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 10 00:21:04.499577 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:21:04.499840 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 00:21:04.501302 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 00:21:04.502949 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 00:21:04.504617 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 10 00:21:04.506384 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 10 00:21:04.524548 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 00:21:04.527889 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Jul 10 00:21:04.530454 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 10 00:21:04.531714 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 10 00:21:04.531771 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 00:21:04.534140 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 10 00:21:04.540056 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 10 00:21:04.542549 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:21:04.546860 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 10 00:21:04.550611 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 10 00:21:04.552772 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:21:04.554442 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 10 00:21:04.557402 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 00:21:04.562651 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:21:04.565055 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 10 00:21:04.568208 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 10 00:21:04.572567 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 00:21:04.574312 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 10 00:21:04.575647 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 10 00:21:04.588729 systemd-journald[1212]: Time spent on flushing to /var/log/journal/2226b80a086a436782cc4c50382c3cfb is 73.468ms for 1048 entries. Jul 10 00:21:04.588729 systemd-journald[1212]: System Journal (/var/log/journal/2226b80a086a436782cc4c50382c3cfb) is 8M, max 195.6M, 187.6M free. Jul 10 00:21:04.800814 kernel: loop0: detected capacity change from 0 to 146240 Jul 10 00:21:04.800877 systemd-journald[1212]: Received client request to flush runtime journal. Jul 10 00:21:04.800921 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 10 00:21:04.595880 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 10 00:21:04.599678 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 10 00:21:04.604413 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 10 00:21:04.799426 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:21:04.804757 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 10 00:21:04.807153 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Jul 10 00:21:04.809095 kernel: loop1: detected capacity change from 0 to 221472 Jul 10 00:21:04.807174 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Jul 10 00:21:04.810519 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
Jul 10 00:21:04.814872 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 00:21:04.823311 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 10 00:21:04.844569 kernel: loop2: detected capacity change from 0 to 113872 Jul 10 00:21:04.875297 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 10 00:21:04.880198 kernel: loop3: detected capacity change from 0 to 146240 Jul 10 00:21:04.882580 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 00:21:04.905414 kernel: loop4: detected capacity change from 0 to 221472 Jul 10 00:21:04.919314 kernel: loop5: detected capacity change from 0 to 113872 Jul 10 00:21:04.918343 systemd-tmpfiles[1279]: ACLs are not supported, ignoring. Jul 10 00:21:04.918371 systemd-tmpfiles[1279]: ACLs are not supported, ignoring. Jul 10 00:21:04.925435 (sd-merge)[1278]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 10 00:21:04.926376 (sd-merge)[1278]: Merged extensions into '/usr'. Jul 10 00:21:04.926672 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 00:21:05.116508 systemd[1]: Reload requested from client PID 1256 ('systemd-sysext') (unit systemd-sysext.service)... Jul 10 00:21:05.116532 systemd[1]: Reloading... Jul 10 00:21:05.219337 zram_generator::config[1309]: No configuration found. Jul 10 00:21:05.357797 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:21:05.447100 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 10 00:21:05.447826 systemd[1]: Reloading finished in 330 ms. Jul 10 00:21:05.472497 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 10 00:21:05.480837 ldconfig[1251]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 10 00:21:05.484119 systemd[1]: Starting ensure-sysext.service... Jul 10 00:21:05.486631 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 00:21:05.702114 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 10 00:21:05.704989 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 10 00:21:05.705496 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 10 00:21:05.705850 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 10 00:21:05.706111 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 10 00:21:05.707321 systemd[1]: Reload requested from client PID 1343 ('systemctl') (unit ensure-sysext.service)... Jul 10 00:21:05.707471 systemd[1]: Reloading... Jul 10 00:21:05.707517 systemd-tmpfiles[1344]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 00:21:05.707804 systemd-tmpfiles[1344]: ACLs are not supported, ignoring. Jul 10 00:21:05.707873 systemd-tmpfiles[1344]: ACLs are not supported, ignoring. Jul 10 00:21:05.712914 systemd-tmpfiles[1344]: Detected autofs mount point /boot during canonicalization of boot. 
Jul 10 00:21:05.713060 systemd-tmpfiles[1344]: Skipping /boot Jul 10 00:21:05.728354 systemd-tmpfiles[1344]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 00:21:05.728532 systemd-tmpfiles[1344]: Skipping /boot Jul 10 00:21:05.775326 zram_generator::config[1372]: No configuration found. Jul 10 00:21:05.911032 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:21:06.003748 systemd[1]: Reloading finished in 295 ms. Jul 10 00:21:06.030926 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 10 00:21:06.053265 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 00:21:06.062552 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 00:21:06.065460 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 10 00:21:06.067833 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 10 00:21:06.074500 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 00:21:06.077964 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:21:06.080651 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 10 00:21:06.084116 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:21:06.085408 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:21:06.090343 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 00:21:06.094523 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:21:06.099357 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:21:06.100530 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:21:06.100651 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 00:21:06.100761 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:21:06.101995 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:21:06.102240 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 00:21:06.108056 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:21:06.108305 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:21:06.111930 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:21:06.112246 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 00:21:06.116291 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 10 00:21:06.122474 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
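The systemd-tmpfiles warnings above ("Duplicate line for path ..., ignoring") come from the same path appearing in more than one tmpfiles.d fragment. The scan below is a simplified illustration of how such duplicates could be spotted; it treats the second whitespace-separated field as the path and ignores tmpfiles' real override and precedence rules.

    # Sketch: report paths that occur in more than one tmpfiles.d line.
    import glob
    import os
    from collections import defaultdict

    def find_duplicate_paths(dirs=("/usr/lib/tmpfiles.d", "/etc/tmpfiles.d")):
        seen = defaultdict(list)
        for d in dirs:
            for conf in sorted(glob.glob(os.path.join(d, "*.conf"))):
                with open(conf, encoding="utf-8", errors="replace") as fh:
                    for lineno, line in enumerate(fh, 1):
                        line = line.strip()
                        if not line or line.startswith("#"):
                            continue
                        fields = line.split()
                        if len(fields) >= 2:
                            seen[fields[1]].append(f"{conf}:{lineno}")
        return {path: locs for path, locs in seen.items() if len(locs) > 1}

    # for path, locations in find_duplicate_paths().items():
    #     print(path, "->", ", ".join(locations))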
Jul 10 00:21:06.127905 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:21:06.128165 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:21:06.129867 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 00:21:06.133150 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:21:06.136958 systemd-udevd[1415]: Using default interface naming scheme 'v255'. Jul 10 00:21:06.143786 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:21:06.145030 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:21:06.145216 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 00:21:06.146758 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 10 00:21:06.151480 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 10 00:21:06.152600 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:21:06.154359 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 10 00:21:06.156093 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:21:06.156350 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 00:21:06.158028 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:21:06.158253 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:21:06.160037 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:21:06.160342 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 00:21:06.168268 augenrules[1451]: No rules Jul 10 00:21:06.168249 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 10 00:21:06.169977 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 00:21:06.170243 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 00:21:06.174135 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:21:06.180061 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:21:06.185154 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 00:21:06.186580 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:21:06.191699 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 00:21:06.196695 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 00:21:06.203483 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:21:06.210370 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:21:06.211658 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 10 00:21:06.211818 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 00:21:06.218257 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 00:21:06.219572 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:21:06.219691 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:21:06.222234 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:21:06.223746 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 00:21:06.227142 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:21:06.231548 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:21:06.233506 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:21:06.233744 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 00:21:06.246928 systemd[1]: Finished ensure-sysext.service. Jul 10 00:21:06.251445 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 10 00:21:06.253461 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:21:06.253597 augenrules[1478]: /sbin/augenrules: No change Jul 10 00:21:06.253718 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 00:21:06.260966 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:21:06.261040 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 00:21:06.264528 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 10 00:21:06.272293 augenrules[1520]: No rules Jul 10 00:21:06.272397 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 00:21:06.272752 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 00:21:06.459103 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 10 00:21:06.468017 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 10 00:21:06.469650 systemd[1]: Reached target time-set.target - System Time Set. Jul 10 00:21:06.499911 systemd-resolved[1414]: Positive Trust Anchors: Jul 10 00:21:06.500395 systemd-resolved[1414]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:21:06.500515 systemd-resolved[1414]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 00:21:06.501921 systemd-networkd[1493]: lo: Link UP Jul 10 00:21:06.501936 systemd-networkd[1493]: lo: Gained carrier Jul 10 00:21:06.504312 systemd-networkd[1493]: Enumeration completed Jul 10 00:21:06.504414 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 00:21:06.505888 systemd-resolved[1414]: Defaulting to hostname 'linux'. Jul 10 00:21:06.513925 systemd-networkd[1493]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:21:06.514519 systemd-networkd[1493]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:21:06.521416 systemd-networkd[1493]: eth0: Link UP Jul 10 00:21:06.521690 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 00:21:06.522805 systemd-networkd[1493]: eth0: Gained carrier Jul 10 00:21:06.522885 systemd-networkd[1493]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:21:06.524202 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 10 00:21:06.524500 systemd[1]: Reached target network.target - Network. Jul 10 00:21:06.525455 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 00:21:06.526932 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 00:21:06.528268 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 10 00:21:06.529565 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 10 00:21:06.530827 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 10 00:21:06.531332 systemd-networkd[1493]: eth0: DHCPv4 address 10.0.0.84/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 00:21:06.532242 systemd-timesyncd[1519]: Network configuration changed, trying to establish connection. Jul 10 00:21:06.532385 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 10 00:21:07.235976 systemd-resolved[1414]: Clock change detected. Flushing caches. Jul 10 00:21:07.236047 systemd-timesyncd[1519]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 10 00:21:07.236097 systemd-timesyncd[1519]: Initial clock synchronization to Thu 2025-07-10 00:21:07.235934 UTC. Jul 10 00:21:07.236493 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 10 00:21:07.238056 kernel: ACPI: button: Power Button [PWRF] Jul 10 00:21:07.238355 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 10 00:21:07.239654 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
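The positive trust anchor logged earlier in this block is the standard root-zone DS record (". IN DS 20326 8 2 <digest>"). The helper below only splits that textual form into its fields for readability; it is not how systemd-resolved represents the anchor internally.

    # Sketch: split a DS record in presentation format into its fields.
    def parse_ds(record: str) -> dict:
        owner, _cls, rtype, key_tag, algorithm, digest_type, digest = record.split()
        assert rtype == "DS"
        return {
            "owner": owner,
            "key_tag": int(key_tag),          # 20326 is the current root KSK tag
            "algorithm": int(algorithm),      # 8 = RSA/SHA-256
            "digest_type": int(digest_type),  # 2 = SHA-256
            "digest": digest.lower(),
        }

    print(parse_ds(". IN DS 20326 8 2 "
                   "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"))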
Jul 10 00:21:07.239679 systemd[1]: Reached target paths.target - Path Units. Jul 10 00:21:07.240624 systemd[1]: Reached target timers.target - Timer Units. Jul 10 00:21:07.243031 kernel: mousedev: PS/2 mouse device common for all mice Jul 10 00:21:07.243383 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 10 00:21:07.246445 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 10 00:21:07.253300 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 10 00:21:07.256238 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 10 00:21:07.257627 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 10 00:21:07.270158 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 10 00:21:07.284056 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jul 10 00:21:07.285707 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 10 00:21:07.288493 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 10 00:21:07.288784 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 10 00:21:07.292218 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 10 00:21:07.333297 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 10 00:21:07.335759 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 10 00:21:07.340438 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 10 00:21:07.346940 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 00:21:07.349114 systemd[1]: Reached target basic.target - Basic System. Jul 10 00:21:07.350210 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 10 00:21:07.350269 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 10 00:21:07.352308 systemd[1]: Starting containerd.service - containerd container runtime... Jul 10 00:21:07.354787 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 10 00:21:07.357725 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 10 00:21:07.362268 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 10 00:21:07.373182 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 10 00:21:07.374394 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 10 00:21:07.401318 jq[1557]: false Jul 10 00:21:07.412819 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 10 00:21:07.415730 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 10 00:21:07.419484 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 10 00:21:07.424812 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 10 00:21:07.426341 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Refreshing passwd entry cache Jul 10 00:21:07.426364 oslogin_cache_refresh[1562]: Refreshing passwd entry cache Jul 10 00:21:07.430306 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jul 10 00:21:07.431782 extend-filesystems[1560]: Found /dev/vda6 Jul 10 00:21:07.435483 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Failure getting users, quitting Jul 10 00:21:07.435476 oslogin_cache_refresh[1562]: Failure getting users, quitting Jul 10 00:21:07.435554 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 10 00:21:07.435511 oslogin_cache_refresh[1562]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 10 00:21:07.435627 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Refreshing group entry cache Jul 10 00:21:07.435590 oslogin_cache_refresh[1562]: Refreshing group entry cache Jul 10 00:21:07.436309 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 10 00:21:07.441819 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Failure getting groups, quitting Jul 10 00:21:07.441819 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 10 00:21:07.441717 oslogin_cache_refresh[1562]: Failure getting groups, quitting Jul 10 00:21:07.441731 oslogin_cache_refresh[1562]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 10 00:21:07.444398 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 10 00:21:07.446815 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 10 00:21:07.448790 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 10 00:21:07.452282 systemd[1]: Starting update-engine.service - Update Engine... Jul 10 00:21:07.459141 extend-filesystems[1560]: Found /dev/vda9 Jul 10 00:21:07.785593 extend-filesystems[1560]: Checking size of /dev/vda9 Jul 10 00:21:07.781854 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 10 00:21:07.789520 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 10 00:21:07.838337 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 10 00:21:07.840516 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 00:21:07.840775 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 10 00:21:07.841313 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 10 00:21:07.841567 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 10 00:21:07.843229 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 00:21:07.843496 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 10 00:21:07.843694 jq[1582]: true Jul 10 00:21:07.847385 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 00:21:07.855081 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 10 00:21:07.872165 update_engine[1576]: I20250710 00:21:07.871692 1576 main.cc:92] Flatcar Update Engine starting Jul 10 00:21:07.881569 jq[1591]: true Jul 10 00:21:07.886564 kernel: kvm_amd: TSC scaling supported Jul 10 00:21:07.886600 kernel: kvm_amd: Nested Virtualization enabled Jul 10 00:21:07.886613 kernel: kvm_amd: Nested Paging enabled Jul 10 00:21:07.887084 kernel: kvm_amd: LBR virtualization supported Jul 10 00:21:07.889102 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 10 00:21:07.889183 kernel: kvm_amd: Virtual GIF supported Jul 10 00:21:07.917468 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 10 00:21:07.941642 (ntainerd)[1592]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 10 00:21:07.941648 systemd-logind[1573]: Watching system buttons on /dev/input/event2 (Power Button) Jul 10 00:21:07.941669 systemd-logind[1573]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 10 00:21:07.942772 systemd-logind[1573]: New seat seat0. Jul 10 00:21:07.950299 extend-filesystems[1560]: Resized partition /dev/vda9 Jul 10 00:21:07.950809 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:21:07.981749 extend-filesystems[1625]: resize2fs 1.47.2 (1-Jan-2025) Jul 10 00:21:07.985285 systemd[1]: Started systemd-logind.service - User Login Management. Jul 10 00:21:07.990891 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:21:07.991236 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:21:07.998606 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:21:08.077285 tar[1590]: linux-amd64/helm Jul 10 00:21:08.081321 dbus-daemon[1554]: [system] SELinux support is enabled Jul 10 00:21:08.082155 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 10 00:21:08.085050 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 10 00:21:08.085193 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 10 00:21:08.085380 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 00:21:08.085427 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 10 00:21:08.093077 update_engine[1576]: I20250710 00:21:08.092527 1576 update_check_scheduler.cc:74] Next update check in 6m53s Jul 10 00:21:08.093579 systemd[1]: Started update-engine.service - Update Engine. Jul 10 00:21:08.105547 dbus-daemon[1554]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 10 00:21:08.113004 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 10 00:21:08.146058 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 10 00:21:08.234030 sshd_keygen[1580]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 00:21:08.266571 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 10 00:21:08.271166 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 10 00:21:08.313726 systemd[1]: issuegen.service: Deactivated successfully. 
Jul 10 00:21:08.314249 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 10 00:21:08.329650 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 10 00:21:08.402584 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 10 00:21:08.408427 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 10 00:21:08.413293 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 10 00:21:08.413978 systemd[1]: Reached target getty.target - Login Prompts. Jul 10 00:21:08.526278 locksmithd[1631]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 00:21:08.578668 kernel: EDAC MC: Ver: 3.0.0 Jul 10 00:21:08.601941 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:21:08.754103 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 10 00:21:08.850772 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 10 00:21:08.853616 systemd[1]: Started sshd@0-10.0.0.84:22-10.0.0.1:33754.service - OpenSSH per-connection server daemon (10.0.0.1:33754). Jul 10 00:21:09.011553 sshd[1658]: Connection closed by authenticating user core 10.0.0.1 port 33754 [preauth] Jul 10 00:21:08.944841 systemd[1]: sshd@0-10.0.0.84:22-10.0.0.1:33754.service: Deactivated successfully. Jul 10 00:21:09.012165 extend-filesystems[1625]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 10 00:21:09.012165 extend-filesystems[1625]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 10 00:21:09.012165 extend-filesystems[1625]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 10 00:21:09.016756 extend-filesystems[1560]: Resized filesystem in /dev/vda9 Jul 10 00:21:09.018148 tar[1590]: linux-amd64/LICENSE Jul 10 00:21:09.018148 tar[1590]: linux-amd64/README.md Jul 10 00:21:09.020385 bash[1624]: Updated "/home/core/.ssh/authorized_keys" Jul 10 00:21:09.021711 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 10 00:21:09.022214 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 10 00:21:09.025958 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 10 00:21:09.034371 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 10 00:21:09.058177 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jul 10 00:21:09.113121 containerd[1592]: time="2025-07-10T00:21:09Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 10 00:21:09.115268 containerd[1592]: time="2025-07-10T00:21:09.115193279Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 10 00:21:09.133142 containerd[1592]: time="2025-07-10T00:21:09.133058733Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.11µs" Jul 10 00:21:09.133142 containerd[1592]: time="2025-07-10T00:21:09.133111692Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 10 00:21:09.133142 containerd[1592]: time="2025-07-10T00:21:09.133131008Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 10 00:21:09.133446 containerd[1592]: time="2025-07-10T00:21:09.133410453Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 10 00:21:09.133446 containerd[1592]: time="2025-07-10T00:21:09.133433846Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 10 00:21:09.133514 containerd[1592]: time="2025-07-10T00:21:09.133468491Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 00:21:09.133575 containerd[1592]: time="2025-07-10T00:21:09.133549784Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 00:21:09.133575 containerd[1592]: time="2025-07-10T00:21:09.133565533Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 00:21:09.133934 containerd[1592]: time="2025-07-10T00:21:09.133899369Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 00:21:09.133934 containerd[1592]: time="2025-07-10T00:21:09.133917473Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 00:21:09.133934 containerd[1592]: time="2025-07-10T00:21:09.133931019Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 00:21:09.134004 containerd[1592]: time="2025-07-10T00:21:09.133941298Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 10 00:21:09.134091 containerd[1592]: time="2025-07-10T00:21:09.134067094Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 10 00:21:09.134382 containerd[1592]: time="2025-07-10T00:21:09.134344244Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 10 00:21:09.134406 containerd[1592]: time="2025-07-10T00:21:09.134394678Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jul 10 00:21:09.134428 containerd[1592]: time="2025-07-10T00:21:09.134406060Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 10 00:21:09.134463 containerd[1592]: time="2025-07-10T00:21:09.134444281Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 10 00:21:09.135932 containerd[1592]: time="2025-07-10T00:21:09.135810834Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 10 00:21:09.136133 containerd[1592]: time="2025-07-10T00:21:09.136110827Z" level=info msg="metadata content store policy set" policy=shared Jul 10 00:21:09.144275 systemd-networkd[1493]: eth0: Gained IPv6LL Jul 10 00:21:09.148271 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 10 00:21:09.150275 systemd[1]: Reached target network-online.target - Network is Online. Jul 10 00:21:09.153276 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 10 00:21:09.175406 containerd[1592]: time="2025-07-10T00:21:09.175288589Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 10 00:21:09.175572 containerd[1592]: time="2025-07-10T00:21:09.175433651Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 10 00:21:09.175572 containerd[1592]: time="2025-07-10T00:21:09.175459359Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 10 00:21:09.175572 containerd[1592]: time="2025-07-10T00:21:09.175477393Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 10 00:21:09.175572 containerd[1592]: time="2025-07-10T00:21:09.175496579Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 10 00:21:09.175572 containerd[1592]: time="2025-07-10T00:21:09.175515805Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 10 00:21:09.175572 containerd[1592]: time="2025-07-10T00:21:09.175533629Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 10 00:21:09.175572 containerd[1592]: time="2025-07-10T00:21:09.175553235Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 10 00:21:09.175572 containerd[1592]: time="2025-07-10T00:21:09.175573934Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 10 00:21:09.175824 containerd[1592]: time="2025-07-10T00:21:09.175588542Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 10 00:21:09.175824 containerd[1592]: time="2025-07-10T00:21:09.175602548Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 10 00:21:09.175824 containerd[1592]: time="2025-07-10T00:21:09.175620201Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 10 00:21:09.175910 containerd[1592]: time="2025-07-10T00:21:09.175876842Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 10 00:21:09.175910 containerd[1592]: time="2025-07-10T00:21:09.175907039Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 10 00:21:09.175963 containerd[1592]: time="2025-07-10T00:21:09.175922879Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 10 00:21:09.175963 containerd[1592]: time="2025-07-10T00:21:09.175934921Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 10 00:21:09.175963 containerd[1592]: time="2025-07-10T00:21:09.175948066Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 10 00:21:09.176064 containerd[1592]: time="2025-07-10T00:21:09.175981569Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 10 00:21:09.176064 containerd[1592]: time="2025-07-10T00:21:09.175997769Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 10 00:21:09.176206 containerd[1592]: time="2025-07-10T00:21:09.176155074Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 10 00:21:09.176206 containerd[1592]: time="2025-07-10T00:21:09.176187104Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 10 00:21:09.176206 containerd[1592]: time="2025-07-10T00:21:09.176200008Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 10 00:21:09.176285 containerd[1592]: time="2025-07-10T00:21:09.176210418Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 10 00:21:09.176339 containerd[1592]: time="2025-07-10T00:21:09.176314723Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 10 00:21:09.176339 containerd[1592]: time="2025-07-10T00:21:09.176338318Z" level=info msg="Start snapshots syncer" Jul 10 00:21:09.176406 containerd[1592]: time="2025-07-10T00:21:09.176380156Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 10 00:21:09.176795 containerd[1592]: time="2025-07-10T00:21:09.176740081Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 10 00:21:09.176928 containerd[1592]: time="2025-07-10T00:21:09.176805734Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 10 00:21:09.178216 containerd[1592]: time="2025-07-10T00:21:09.178165584Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 10 00:21:09.178350 containerd[1592]: time="2025-07-10T00:21:09.178317559Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 10 00:21:09.178392 containerd[1592]: time="2025-07-10T00:21:09.178366832Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 10 00:21:09.178392 containerd[1592]: time="2025-07-10T00:21:09.178381730Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 10 00:21:09.178392 containerd[1592]: time="2025-07-10T00:21:09.178392931Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 10 00:21:09.178596 containerd[1592]: time="2025-07-10T00:21:09.178433497Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 10 00:21:09.178596 containerd[1592]: time="2025-07-10T00:21:09.178446151Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 10 00:21:09.178596 containerd[1592]: time="2025-07-10T00:21:09.178460928Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 10 00:21:09.178596 containerd[1592]: time="2025-07-10T00:21:09.178498579Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 10 00:21:09.178596 containerd[1592]: 
time="2025-07-10T00:21:09.178511784Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 10 00:21:09.178596 containerd[1592]: time="2025-07-10T00:21:09.178524327Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 10 00:21:09.179722 containerd[1592]: time="2025-07-10T00:21:09.179682639Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 00:21:09.179722 containerd[1592]: time="2025-07-10T00:21:09.179715090Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 00:21:09.179798 containerd[1592]: time="2025-07-10T00:21:09.179729377Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 00:21:09.179798 containerd[1592]: time="2025-07-10T00:21:09.179742341Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 00:21:09.179798 containerd[1592]: time="2025-07-10T00:21:09.179752661Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 10 00:21:09.179798 containerd[1592]: time="2025-07-10T00:21:09.179764844Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 10 00:21:09.179798 containerd[1592]: time="2025-07-10T00:21:09.179782547Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 10 00:21:09.179911 containerd[1592]: time="2025-07-10T00:21:09.179804688Z" level=info msg="runtime interface created" Jul 10 00:21:09.179911 containerd[1592]: time="2025-07-10T00:21:09.179810399Z" level=info msg="created NRI interface" Jul 10 00:21:09.179911 containerd[1592]: time="2025-07-10T00:21:09.179818544Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 10 00:21:09.179911 containerd[1592]: time="2025-07-10T00:21:09.179830296Z" level=info msg="Connect containerd service" Jul 10 00:21:09.179911 containerd[1592]: time="2025-07-10T00:21:09.179854381Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 10 00:21:09.180968 containerd[1592]: time="2025-07-10T00:21:09.180922915Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:21:09.186982 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:21:09.190592 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 10 00:21:09.298697 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 10 00:21:09.299198 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 10 00:21:09.302179 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 10 00:21:09.316680 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jul 10 00:21:09.497510 containerd[1592]: time="2025-07-10T00:21:09.497390475Z" level=info msg="Start subscribing containerd event" Jul 10 00:21:09.497689 containerd[1592]: time="2025-07-10T00:21:09.497558019Z" level=info msg="Start recovering state" Jul 10 00:21:09.497900 containerd[1592]: time="2025-07-10T00:21:09.497867980Z" level=info msg="Start event monitor" Jul 10 00:21:09.497987 containerd[1592]: time="2025-07-10T00:21:09.497879763Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 10 00:21:09.498099 containerd[1592]: time="2025-07-10T00:21:09.498031497Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 00:21:09.498099 containerd[1592]: time="2025-07-10T00:21:09.497931650Z" level=info msg="Start cni network conf syncer for default" Jul 10 00:21:09.498099 containerd[1592]: time="2025-07-10T00:21:09.498096299Z" level=info msg="Start streaming server" Jul 10 00:21:09.498169 containerd[1592]: time="2025-07-10T00:21:09.498113651Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 10 00:21:09.498169 containerd[1592]: time="2025-07-10T00:21:09.498128800Z" level=info msg="runtime interface starting up..." Jul 10 00:21:09.498169 containerd[1592]: time="2025-07-10T00:21:09.498137035Z" level=info msg="starting plugins..." Jul 10 00:21:09.498169 containerd[1592]: time="2025-07-10T00:21:09.498168574Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 10 00:21:09.498635 containerd[1592]: time="2025-07-10T00:21:09.498372667Z" level=info msg="containerd successfully booted in 0.386058s" Jul 10 00:21:09.498584 systemd[1]: Started containerd.service - containerd container runtime. Jul 10 00:21:10.670063 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:21:10.677102 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 10 00:21:10.679101 systemd[1]: Startup finished in 4.504s (kernel) + 8.875s (initrd) + 6.504s (userspace) = 19.885s. Jul 10 00:21:10.798688 (kubelet)[1714]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:21:11.525540 kubelet[1714]: E0710 00:21:11.525428 1714 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:21:11.530314 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:21:11.530520 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:21:11.530982 systemd[1]: kubelet.service: Consumed 2.105s CPU time, 265.4M memory peak. Jul 10 00:21:18.966734 systemd[1]: Started sshd@1-10.0.0.84:22-10.0.0.1:40250.service - OpenSSH per-connection server daemon (10.0.0.1:40250). Jul 10 00:21:19.037500 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 40250 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:21:19.039759 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:19.047876 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 10 00:21:19.049186 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 10 00:21:19.057492 systemd-logind[1573]: New session 1 of user core. 
Jul 10 00:21:19.077408 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 10 00:21:19.080982 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 10 00:21:19.100189 (systemd)[1731]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:21:19.103155 systemd-logind[1573]: New session c1 of user core. Jul 10 00:21:19.275559 systemd[1731]: Queued start job for default target default.target. Jul 10 00:21:19.292346 systemd[1731]: Created slice app.slice - User Application Slice. Jul 10 00:21:19.292373 systemd[1731]: Reached target paths.target - Paths. Jul 10 00:21:19.292417 systemd[1731]: Reached target timers.target - Timers. Jul 10 00:21:19.294175 systemd[1731]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 10 00:21:19.307758 systemd[1731]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 10 00:21:19.307895 systemd[1731]: Reached target sockets.target - Sockets. Jul 10 00:21:19.307935 systemd[1731]: Reached target basic.target - Basic System. Jul 10 00:21:19.307985 systemd[1731]: Reached target default.target - Main User Target. Jul 10 00:21:19.308040 systemd[1731]: Startup finished in 196ms. Jul 10 00:21:19.308576 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 10 00:21:19.310629 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 10 00:21:19.374139 systemd[1]: Started sshd@2-10.0.0.84:22-10.0.0.1:40264.service - OpenSSH per-connection server daemon (10.0.0.1:40264). Jul 10 00:21:19.422576 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 40264 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:21:19.424276 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:19.429186 systemd-logind[1573]: New session 2 of user core. Jul 10 00:21:19.439206 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 10 00:21:19.495405 sshd[1744]: Connection closed by 10.0.0.1 port 40264 Jul 10 00:21:19.495805 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:19.510067 systemd[1]: sshd@2-10.0.0.84:22-10.0.0.1:40264.service: Deactivated successfully. Jul 10 00:21:19.511940 systemd[1]: session-2.scope: Deactivated successfully. Jul 10 00:21:19.512837 systemd-logind[1573]: Session 2 logged out. Waiting for processes to exit. Jul 10 00:21:19.516129 systemd[1]: Started sshd@3-10.0.0.84:22-10.0.0.1:40266.service - OpenSSH per-connection server daemon (10.0.0.1:40266). Jul 10 00:21:19.516923 systemd-logind[1573]: Removed session 2. Jul 10 00:21:19.579699 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 40266 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:21:19.581483 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:19.586247 systemd-logind[1573]: New session 3 of user core. Jul 10 00:21:19.600169 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 10 00:21:19.651061 sshd[1752]: Connection closed by 10.0.0.1 port 40266 Jul 10 00:21:19.651420 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:19.665462 systemd[1]: sshd@3-10.0.0.84:22-10.0.0.1:40266.service: Deactivated successfully. Jul 10 00:21:19.667580 systemd[1]: session-3.scope: Deactivated successfully. Jul 10 00:21:19.668522 systemd-logind[1573]: Session 3 logged out. Waiting for processes to exit. 
Jul 10 00:21:19.671793 systemd[1]: Started sshd@4-10.0.0.84:22-10.0.0.1:35550.service - OpenSSH per-connection server daemon (10.0.0.1:35550). Jul 10 00:21:19.672697 systemd-logind[1573]: Removed session 3. Jul 10 00:21:19.728539 sshd[1758]: Accepted publickey for core from 10.0.0.1 port 35550 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:21:19.730236 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:19.735446 systemd-logind[1573]: New session 4 of user core. Jul 10 00:21:19.749217 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 10 00:21:19.803744 sshd[1760]: Connection closed by 10.0.0.1 port 35550 Jul 10 00:21:19.804178 sshd-session[1758]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:19.813568 systemd[1]: sshd@4-10.0.0.84:22-10.0.0.1:35550.service: Deactivated successfully. Jul 10 00:21:19.815548 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 00:21:19.816436 systemd-logind[1573]: Session 4 logged out. Waiting for processes to exit. Jul 10 00:21:19.819941 systemd[1]: Started sshd@5-10.0.0.84:22-10.0.0.1:35562.service - OpenSSH per-connection server daemon (10.0.0.1:35562). Jul 10 00:21:19.820879 systemd-logind[1573]: Removed session 4. Jul 10 00:21:19.877032 sshd[1766]: Accepted publickey for core from 10.0.0.1 port 35562 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:21:19.878686 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:19.883807 systemd-logind[1573]: New session 5 of user core. Jul 10 00:21:19.893158 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 10 00:21:19.953456 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 10 00:21:19.953775 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:21:19.972128 sudo[1769]: pam_unix(sudo:session): session closed for user root Jul 10 00:21:19.974346 sshd[1768]: Connection closed by 10.0.0.1 port 35562 Jul 10 00:21:19.974732 sshd-session[1766]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:19.988551 systemd[1]: sshd@5-10.0.0.84:22-10.0.0.1:35562.service: Deactivated successfully. Jul 10 00:21:19.990717 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 00:21:19.991660 systemd-logind[1573]: Session 5 logged out. Waiting for processes to exit. Jul 10 00:21:19.995462 systemd[1]: Started sshd@6-10.0.0.84:22-10.0.0.1:35576.service - OpenSSH per-connection server daemon (10.0.0.1:35576). Jul 10 00:21:19.996389 systemd-logind[1573]: Removed session 5. Jul 10 00:21:20.061164 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 35576 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:21:20.063339 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:20.068386 systemd-logind[1573]: New session 6 of user core. Jul 10 00:21:20.075156 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 10 00:21:20.130490 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 10 00:21:20.130815 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:21:20.137575 sudo[1779]: pam_unix(sudo:session): session closed for user root Jul 10 00:21:20.144668 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 10 00:21:20.144991 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:21:20.155592 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 00:21:20.206457 augenrules[1801]: No rules Jul 10 00:21:20.208454 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 00:21:20.208756 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 00:21:20.210094 sudo[1778]: pam_unix(sudo:session): session closed for user root Jul 10 00:21:20.212043 sshd[1777]: Connection closed by 10.0.0.1 port 35576 Jul 10 00:21:20.212436 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Jul 10 00:21:20.227332 systemd[1]: sshd@6-10.0.0.84:22-10.0.0.1:35576.service: Deactivated successfully. Jul 10 00:21:20.229786 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 00:21:20.230704 systemd-logind[1573]: Session 6 logged out. Waiting for processes to exit. Jul 10 00:21:20.235227 systemd[1]: Started sshd@7-10.0.0.84:22-10.0.0.1:35582.service - OpenSSH per-connection server daemon (10.0.0.1:35582). Jul 10 00:21:20.235956 systemd-logind[1573]: Removed session 6. Jul 10 00:21:20.297890 sshd[1810]: Accepted publickey for core from 10.0.0.1 port 35582 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:21:20.299460 sshd-session[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:21:20.304483 systemd-logind[1573]: New session 7 of user core. Jul 10 00:21:20.314184 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 10 00:21:20.369897 sudo[1813]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 00:21:20.370332 sudo[1813]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:21:20.972678 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 10 00:21:20.994389 (dockerd)[1833]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 10 00:21:21.202479 dockerd[1833]: time="2025-07-10T00:21:21.202404635Z" level=info msg="Starting up" Jul 10 00:21:21.203861 dockerd[1833]: time="2025-07-10T00:21:21.203828204Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 10 00:21:21.636469 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 10 00:21:21.638197 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:21:21.915588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 10 00:21:21.932379 (kubelet)[1864]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:21:22.764105 kubelet[1864]: E0710 00:21:22.763992 1864 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:21:22.770811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:21:22.771043 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:21:22.771431 systemd[1]: kubelet.service: Consumed 310ms CPU time, 111.1M memory peak. Jul 10 00:21:22.963743 dockerd[1833]: time="2025-07-10T00:21:22.963664486Z" level=info msg="Loading containers: start." Jul 10 00:21:22.975263 kernel: Initializing XFRM netlink socket Jul 10 00:21:23.663943 systemd-networkd[1493]: docker0: Link UP Jul 10 00:21:23.780201 dockerd[1833]: time="2025-07-10T00:21:23.780053406Z" level=info msg="Loading containers: done." Jul 10 00:21:23.794912 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3532421901-merged.mount: Deactivated successfully. Jul 10 00:21:23.986944 dockerd[1833]: time="2025-07-10T00:21:23.986805260Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 00:21:23.986944 dockerd[1833]: time="2025-07-10T00:21:23.986906349Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 10 00:21:23.987444 dockerd[1833]: time="2025-07-10T00:21:23.987093991Z" level=info msg="Initializing buildkit" Jul 10 00:21:24.397738 dockerd[1833]: time="2025-07-10T00:21:24.397535375Z" level=info msg="Completed buildkit initialization" Jul 10 00:21:24.402003 dockerd[1833]: time="2025-07-10T00:21:24.401938052Z" level=info msg="Daemon has completed initialization" Jul 10 00:21:24.402251 dockerd[1833]: time="2025-07-10T00:21:24.402062756Z" level=info msg="API listen on /run/docker.sock" Jul 10 00:21:24.402382 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 10 00:21:25.176696 containerd[1592]: time="2025-07-10T00:21:25.176649793Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 10 00:21:25.777199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount847223878.mount: Deactivated successfully. 
Jul 10 00:21:27.912920 containerd[1592]: time="2025-07-10T00:21:27.912811533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:27.913759 containerd[1592]: time="2025-07-10T00:21:27.913663982Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jul 10 00:21:27.914909 containerd[1592]: time="2025-07-10T00:21:27.914831711Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:27.918417 containerd[1592]: time="2025-07-10T00:21:27.918334080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:27.920303 containerd[1592]: time="2025-07-10T00:21:27.920237639Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 2.743535328s" Jul 10 00:21:27.920303 containerd[1592]: time="2025-07-10T00:21:27.920299776Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 10 00:21:27.921433 containerd[1592]: time="2025-07-10T00:21:27.921369342Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 10 00:21:29.313294 containerd[1592]: time="2025-07-10T00:21:29.313200482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:29.314182 containerd[1592]: time="2025-07-10T00:21:29.314111220Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jul 10 00:21:29.316206 containerd[1592]: time="2025-07-10T00:21:29.316135015Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:29.319682 containerd[1592]: time="2025-07-10T00:21:29.319622035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:29.320801 containerd[1592]: time="2025-07-10T00:21:29.320738348Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.399322299s" Jul 10 00:21:29.320801 containerd[1592]: time="2025-07-10T00:21:29.320795355Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 10 
00:21:29.321569 containerd[1592]: time="2025-07-10T00:21:29.321533880Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 10 00:21:31.998499 containerd[1592]: time="2025-07-10T00:21:31.998396513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:32.000938 containerd[1592]: time="2025-07-10T00:21:32.000903985Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jul 10 00:21:32.002600 containerd[1592]: time="2025-07-10T00:21:32.002564319Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:32.005364 containerd[1592]: time="2025-07-10T00:21:32.005319797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:32.006849 containerd[1592]: time="2025-07-10T00:21:32.006795935Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 2.685214936s" Jul 10 00:21:32.006893 containerd[1592]: time="2025-07-10T00:21:32.006849505Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 10 00:21:32.007517 containerd[1592]: time="2025-07-10T00:21:32.007468546Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 10 00:21:32.976115 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 10 00:21:32.978848 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:21:33.232005 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:21:33.257639 (kubelet)[2130]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:21:33.328774 kubelet[2130]: E0710 00:21:33.328681 2130 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:21:33.335175 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:21:33.335408 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:21:33.335803 systemd[1]: kubelet.service: Consumed 315ms CPU time, 113M memory peak. Jul 10 00:21:33.876116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3618993970.mount: Deactivated successfully. 
Jul 10 00:21:35.245159 containerd[1592]: time="2025-07-10T00:21:35.245078721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:35.246370 containerd[1592]: time="2025-07-10T00:21:35.246304480Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 10 00:21:35.247814 containerd[1592]: time="2025-07-10T00:21:35.247760791Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:35.249923 containerd[1592]: time="2025-07-10T00:21:35.249876228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:35.250555 containerd[1592]: time="2025-07-10T00:21:35.250507262Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 3.243000263s" Jul 10 00:21:35.250555 containerd[1592]: time="2025-07-10T00:21:35.250548679Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 10 00:21:35.251113 containerd[1592]: time="2025-07-10T00:21:35.251075678Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 10 00:21:35.725168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3330299116.mount: Deactivated successfully. 
Jul 10 00:21:38.843932 containerd[1592]: time="2025-07-10T00:21:38.843857576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:38.845108 containerd[1592]: time="2025-07-10T00:21:38.845062075Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 10 00:21:38.847319 containerd[1592]: time="2025-07-10T00:21:38.847255238Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:38.850455 containerd[1592]: time="2025-07-10T00:21:38.850391630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:38.851747 containerd[1592]: time="2025-07-10T00:21:38.851686979Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.600577939s" Jul 10 00:21:38.851747 containerd[1592]: time="2025-07-10T00:21:38.851733677Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 10 00:21:38.852285 containerd[1592]: time="2025-07-10T00:21:38.852250406Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 10 00:21:39.341784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4008124429.mount: Deactivated successfully. 
Jul 10 00:21:39.349315 containerd[1592]: time="2025-07-10T00:21:39.349254035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:21:39.350185 containerd[1592]: time="2025-07-10T00:21:39.350125619Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 10 00:21:39.352382 containerd[1592]: time="2025-07-10T00:21:39.351375042Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:21:39.353722 containerd[1592]: time="2025-07-10T00:21:39.353673112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:21:39.354488 containerd[1592]: time="2025-07-10T00:21:39.354457243Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 502.175308ms" Jul 10 00:21:39.354488 containerd[1592]: time="2025-07-10T00:21:39.354491547Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 10 00:21:39.355068 containerd[1592]: time="2025-07-10T00:21:39.354987327Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 10 00:21:39.823888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount434214818.mount: Deactivated successfully. 
Jul 10 00:21:43.314611 containerd[1592]: time="2025-07-10T00:21:43.314503790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:43.343301 containerd[1592]: time="2025-07-10T00:21:43.343206765Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 10 00:21:43.374932 containerd[1592]: time="2025-07-10T00:21:43.374838396Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:43.395645 containerd[1592]: time="2025-07-10T00:21:43.395559306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:21:43.397855 containerd[1592]: time="2025-07-10T00:21:43.397801677Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.042768634s" Jul 10 00:21:43.397941 containerd[1592]: time="2025-07-10T00:21:43.397859639Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 10 00:21:43.475940 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 10 00:21:43.478544 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:21:43.725919 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:21:43.754992 (kubelet)[2274]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:21:43.830699 kubelet[2274]: E0710 00:21:43.830628 2274 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:21:43.834397 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:21:43.834812 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:21:43.835345 systemd[1]: kubelet.service: Consumed 281ms CPU time, 110.1M memory peak. Jul 10 00:21:45.946089 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:21:45.946371 systemd[1]: kubelet.service: Consumed 281ms CPU time, 110.1M memory peak. Jul 10 00:21:45.949718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:21:45.977959 systemd[1]: Reload requested from client PID 2303 ('systemctl') (unit session-7.scope)... Jul 10 00:21:45.977974 systemd[1]: Reloading... Jul 10 00:21:46.091051 zram_generator::config[2352]: No configuration found. Jul 10 00:21:46.745678 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:21:46.872838 systemd[1]: Reloading finished in 894 ms. 
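
The kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file, and the environment files behind KUBELET_EXTRA_ARGS and KUBELET_KUBEADM_ARGS, only appear once kubeadm init or join has run. A small hedged Go sketch that checks for those bootstrap artifacts; only the config.yaml path is taken from the log, the other paths are the usual kubeadm locations and are assumptions here:

package main

import (
	"fmt"
	"os"
)

func main() {
	paths := []string{
		"/var/lib/kubelet/config.yaml",       // path taken from the error above
		"/var/lib/kubelet/kubeadm-flags.env", // usual source of KUBELET_KUBEADM_ARGS (assumed)
		"/etc/kubernetes/kubelet.conf",       // kubeconfig written by kubeadm (assumed)
	}
	for _, p := range paths {
		if _, err := os.Stat(p); err != nil {
			fmt.Printf("missing: %s (%v)\n", p, err)
		} else {
			fmt.Printf("present: %s\n", p)
		}
	}
}
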
Jul 10 00:21:46.942746 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 10 00:21:46.942883 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 10 00:21:46.943347 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:21:46.943420 systemd[1]: kubelet.service: Consumed 165ms CPU time, 98.2M memory peak. Jul 10 00:21:46.945719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:21:47.747867 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:21:47.761323 (kubelet)[2394]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:21:47.800033 kubelet[2394]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:21:47.800033 kubelet[2394]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 10 00:21:47.800033 kubelet[2394]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:21:47.800405 kubelet[2394]: I0710 00:21:47.800119 2394 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:21:48.040223 kubelet[2394]: I0710 00:21:48.040108 2394 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 10 00:21:48.040223 kubelet[2394]: I0710 00:21:48.040139 2394 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:21:48.040457 kubelet[2394]: I0710 00:21:48.040405 2394 server.go:934] "Client rotation is on, will bootstrap in background" Jul 10 00:21:48.068367 kubelet[2394]: E0710 00:21:48.068309 2394 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.84:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:21:48.069046 kubelet[2394]: I0710 00:21:48.069001 2394 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:21:48.078499 kubelet[2394]: I0710 00:21:48.078412 2394 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 00:21:48.085255 kubelet[2394]: I0710 00:21:48.085205 2394 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 00:21:48.085932 kubelet[2394]: I0710 00:21:48.085896 2394 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 10 00:21:48.086135 kubelet[2394]: I0710 00:21:48.086087 2394 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:21:48.086357 kubelet[2394]: I0710 00:21:48.086125 2394 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:21:48.086508 kubelet[2394]: I0710 00:21:48.086375 2394 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:21:48.086508 kubelet[2394]: I0710 00:21:48.086384 2394 container_manager_linux.go:300] "Creating device plugin manager" Jul 10 00:21:48.086549 kubelet[2394]: I0710 00:21:48.086518 2394 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:21:48.088819 kubelet[2394]: I0710 00:21:48.088770 2394 kubelet.go:408] "Attempting to sync node with API server" Jul 10 00:21:48.088819 kubelet[2394]: I0710 00:21:48.088801 2394 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:21:48.088819 kubelet[2394]: I0710 00:21:48.088836 2394 kubelet.go:314] "Adding apiserver pod source" Jul 10 00:21:48.089095 kubelet[2394]: I0710 00:21:48.088885 2394 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:21:48.092580 kubelet[2394]: I0710 00:21:48.092496 2394 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 10 00:21:48.092995 kubelet[2394]: I0710 00:21:48.092975 2394 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:21:48.093859 kubelet[2394]: W0710 00:21:48.093819 2394 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
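
The "Creating Container Manager object based on Node Config" entry above dumps the kubelet's node configuration as one JSON blob (systemd cgroup driver, cgroup root "/", cgroups v2, a set of hard eviction thresholds). A sketch that decodes a trimmed copy of that blob into an ad-hoc Go struct, purely to make the structure easier to read; the struct is illustrative, not kubelet's own type:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Ad-hoc subset of the fields in the logged Node Config blob.
type threshold struct {
	Signal   string `json:"Signal"`
	Operator string `json:"Operator"`
}

type nodeConfig struct {
	NodeName               string      `json:"NodeName"`
	CgroupDriver           string      `json:"CgroupDriver"`
	CgroupRoot             string      `json:"CgroupRoot"`
	KubeletRootDir         string      `json:"KubeletRootDir"`
	CgroupVersion          int         `json:"CgroupVersion"`
	HardEvictionThresholds []threshold `json:"HardEvictionThresholds"`
}

func main() {
	// Trimmed copy of the JSON logged above.
	raw := `{"NodeName":"localhost","CgroupDriver":"systemd","CgroupRoot":"/",
	  "KubeletRootDir":"/var/lib/kubelet","CgroupVersion":2,
	  "HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan"},
	                            {"Signal":"nodefs.available","Operator":"LessThan"}]}`

	var cfg nodeConfig
	if err := json.Unmarshal([]byte(raw), &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: driver=%s root=%q cgroups=v%d hardEvictionSignals=%d\n",
		cfg.NodeName, cfg.CgroupDriver, cfg.CgroupRoot, cfg.CgroupVersion,
		len(cfg.HardEvictionThresholds))
}
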
Jul 10 00:21:48.096394 kubelet[2394]: I0710 00:21:48.096362 2394 server.go:1274] "Started kubelet" Jul 10 00:21:48.097543 kubelet[2394]: I0710 00:21:48.096947 2394 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:21:48.098323 kubelet[2394]: W0710 00:21:48.097948 2394 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 10 00:21:48.098323 kubelet[2394]: I0710 00:21:48.098031 2394 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:21:48.098323 kubelet[2394]: E0710 00:21:48.098038 2394 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:21:48.098323 kubelet[2394]: I0710 00:21:48.097992 2394 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:21:48.098323 kubelet[2394]: I0710 00:21:48.098091 2394 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:21:48.098323 kubelet[2394]: W0710 00:21:48.098136 2394 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 10 00:21:48.098323 kubelet[2394]: E0710 00:21:48.098169 2394 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:21:48.099283 kubelet[2394]: I0710 00:21:48.099137 2394 server.go:449] "Adding debug handlers to kubelet server" Jul 10 00:21:48.101507 kubelet[2394]: I0710 00:21:48.100606 2394 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:21:48.102605 kubelet[2394]: E0710 00:21:48.102578 2394 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:21:48.102820 kubelet[2394]: I0710 00:21:48.102804 2394 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 10 00:21:48.102930 kubelet[2394]: I0710 00:21:48.102902 2394 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 10 00:21:48.102930 kubelet[2394]: E0710 00:21:48.101378 2394 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.84:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.84:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850bbf329970c29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 00:21:48.096334889 +0000 UTC m=+0.331073464,LastTimestamp:2025-07-10 00:21:48.096334889 +0000 UTC m=+0.331073464,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 10 00:21:48.103083 kubelet[2394]: E0710 00:21:48.102904 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="200ms" Jul 10 00:21:48.103083 kubelet[2394]: E0710 00:21:48.102849 2394 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:21:48.103083 kubelet[2394]: I0710 00:21:48.103037 2394 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:21:48.103550 kubelet[2394]: W0710 00:21:48.103505 2394 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 10 00:21:48.103591 kubelet[2394]: E0710 00:21:48.103556 2394 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:21:48.103673 kubelet[2394]: I0710 00:21:48.103654 2394 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:21:48.103830 kubelet[2394]: I0710 00:21:48.103811 2394 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:21:48.107377 kubelet[2394]: I0710 00:21:48.107338 2394 factory.go:221] Registration of the containerd container factory successfully Jul 10 00:21:48.119803 kubelet[2394]: I0710 00:21:48.119770 2394 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 10 00:21:48.119803 kubelet[2394]: I0710 00:21:48.119791 2394 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 10 00:21:48.119803 kubelet[2394]: I0710 00:21:48.119812 2394 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:21:48.120972 kubelet[2394]: I0710 00:21:48.120851 2394 kubelet_network_linux.go:50] "Initialized iptables 
rules." protocol="IPv4" Jul 10 00:21:48.122499 kubelet[2394]: I0710 00:21:48.122473 2394 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 10 00:21:48.122556 kubelet[2394]: I0710 00:21:48.122527 2394 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 10 00:21:48.122588 kubelet[2394]: I0710 00:21:48.122563 2394 kubelet.go:2321] "Starting kubelet main sync loop" Jul 10 00:21:48.122637 kubelet[2394]: E0710 00:21:48.122616 2394 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:21:48.123511 kubelet[2394]: W0710 00:21:48.123322 2394 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 10 00:21:48.123511 kubelet[2394]: E0710 00:21:48.123359 2394 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:21:48.203287 kubelet[2394]: E0710 00:21:48.203237 2394 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:21:48.223536 kubelet[2394]: E0710 00:21:48.223468 2394 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 10 00:21:48.304104 kubelet[2394]: E0710 00:21:48.303864 2394 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:21:48.304571 kubelet[2394]: E0710 00:21:48.304508 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="400ms" Jul 10 00:21:48.404798 kubelet[2394]: E0710 00:21:48.404726 2394 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:21:48.424137 kubelet[2394]: E0710 00:21:48.424058 2394 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 10 00:21:48.505740 kubelet[2394]: E0710 00:21:48.505658 2394 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:21:48.607036 kubelet[2394]: E0710 00:21:48.606828 2394 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:21:48.706102 kubelet[2394]: E0710 00:21:48.706033 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="800ms" Jul 10 00:21:48.707085 kubelet[2394]: E0710 00:21:48.706985 2394 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:21:48.807686 kubelet[2394]: E0710 00:21:48.807609 2394 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 
00:21:48.824954 kubelet[2394]: E0710 00:21:48.824872 2394 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 10 00:21:48.908585 kubelet[2394]: E0710 00:21:48.908336 2394 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:21:48.975683 kubelet[2394]: E0710 00:21:48.975511 2394 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.84:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.84:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850bbf329970c29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 00:21:48.096334889 +0000 UTC m=+0.331073464,LastTimestamp:2025-07-10 00:21:48.096334889 +0000 UTC m=+0.331073464,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 10 00:21:48.996575 kubelet[2394]: I0710 00:21:48.996427 2394 policy_none.go:49] "None policy: Start" Jul 10 00:21:48.997675 kubelet[2394]: I0710 00:21:48.997630 2394 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 10 00:21:48.997675 kubelet[2394]: I0710 00:21:48.997665 2394 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:21:49.008517 kubelet[2394]: E0710 00:21:49.008454 2394 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:21:49.009393 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 10 00:21:49.022297 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 10 00:21:49.026460 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
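
The repeated "Failed to ensure lease exists, will retry" entries show the kubelet backing off while the API server at 10.0.0.84:6443 still refuses connections: the retry interval doubles from 200ms to 400ms, 800ms and then 1.6s. A generic Go sketch of that doubling backoff, with a stand-in operation instead of the real lease API call and an arbitrary cap chosen only for this sketch:

package main

import (
	"errors"
	"fmt"
	"time"
)

func main() {
	// Stand-in for the real "ensure lease exists" call, which in this log
	// fails with "connection refused" until the API server comes up.
	ensureLease := func() error {
		return errors.New("dial tcp 10.0.0.84:6443: connect: connection refused")
	}

	interval := 200 * time.Millisecond // first retry interval seen in the log
	maxInterval := 5 * time.Second     // arbitrary cap for this sketch
	for attempt := 1; attempt <= 4; attempt++ {
		err := ensureLease()
		if err == nil {
			fmt.Println("lease ensured")
			return
		}
		fmt.Printf("attempt %d failed (%v), retrying in %v\n", attempt, err, interval)
		time.Sleep(interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
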
Jul 10 00:21:49.049940 kubelet[2394]: I0710 00:21:49.049755 2394 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:21:49.050151 kubelet[2394]: I0710 00:21:49.050128 2394 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:21:49.050186 kubelet[2394]: I0710 00:21:49.050147 2394 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:21:49.050473 kubelet[2394]: I0710 00:21:49.050438 2394 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:21:49.051801 kubelet[2394]: E0710 00:21:49.051768 2394 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 10 00:21:49.152635 kubelet[2394]: I0710 00:21:49.152569 2394 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:21:49.153061 kubelet[2394]: E0710 00:21:49.153026 2394 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" Jul 10 00:21:49.198084 kubelet[2394]: W0710 00:21:49.197866 2394 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 10 00:21:49.198084 kubelet[2394]: E0710 00:21:49.197951 2394 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:21:49.340642 kubelet[2394]: W0710 00:21:49.340556 2394 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 10 00:21:49.340642 kubelet[2394]: E0710 00:21:49.340643 2394 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:21:49.355548 kubelet[2394]: I0710 00:21:49.355488 2394 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:21:49.356043 kubelet[2394]: E0710 00:21:49.355977 2394 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" Jul 10 00:21:49.507454 kubelet[2394]: E0710 00:21:49.507374 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="1.6s" Jul 10 00:21:49.522242 kubelet[2394]: W0710 00:21:49.522151 2394 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 10 00:21:49.522242 kubelet[2394]: E0710 00:21:49.522242 2394 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:21:49.635964 systemd[1]: Created slice kubepods-burstable-poda0b285b368242bd781918d613f095493.slice - libcontainer container kubepods-burstable-poda0b285b368242bd781918d613f095493.slice. Jul 10 00:21:49.661839 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jul 10 00:21:49.677165 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. Jul 10 00:21:49.678222 kubelet[2394]: W0710 00:21:49.678177 2394 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 10 00:21:49.678309 kubelet[2394]: E0710 00:21:49.678229 2394 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:21:49.712004 kubelet[2394]: I0710 00:21:49.711941 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a0b285b368242bd781918d613f095493-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a0b285b368242bd781918d613f095493\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:21:49.712004 kubelet[2394]: I0710 00:21:49.711991 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:21:49.712152 kubelet[2394]: I0710 00:21:49.712045 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:21:49.712152 kubelet[2394]: I0710 00:21:49.712076 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a0b285b368242bd781918d613f095493-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a0b285b368242bd781918d613f095493\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:21:49.712152 kubelet[2394]: I0710 00:21:49.712124 2394 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a0b285b368242bd781918d613f095493-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a0b285b368242bd781918d613f095493\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:21:49.712152 kubelet[2394]: I0710 00:21:49.712145 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:21:49.712287 kubelet[2394]: I0710 00:21:49.712159 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 10 00:21:49.712287 kubelet[2394]: I0710 00:21:49.712175 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:21:49.712287 kubelet[2394]: I0710 00:21:49.712189 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:21:49.758475 kubelet[2394]: I0710 00:21:49.758382 2394 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:21:49.758876 kubelet[2394]: E0710 00:21:49.758830 2394 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" Jul 10 00:21:49.960180 containerd[1592]: time="2025-07-10T00:21:49.960114297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a0b285b368242bd781918d613f095493,Namespace:kube-system,Attempt:0,}" Jul 10 00:21:49.976091 containerd[1592]: time="2025-07-10T00:21:49.975992820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 10 00:21:49.980890 containerd[1592]: time="2025-07-10T00:21:49.980840739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 10 00:21:50.005883 containerd[1592]: time="2025-07-10T00:21:50.005831471Z" level=info msg="connecting to shim 16fa1d9a7e74ba8dbfd8dbf1160b1bbaacfcae112d1fc01aa2741f15dad60e8f" address="unix:///run/containerd/s/c8567540506ef742ebca7861a340d197ac0e3c938e602ab02c0944ec05620806" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:21:50.102181 systemd[1]: Started cri-containerd-16fa1d9a7e74ba8dbfd8dbf1160b1bbaacfcae112d1fc01aa2741f15dad60e8f.scope - libcontainer container 
16fa1d9a7e74ba8dbfd8dbf1160b1bbaacfcae112d1fc01aa2741f15dad60e8f. Jul 10 00:21:50.142970 containerd[1592]: time="2025-07-10T00:21:50.142912442Z" level=info msg="connecting to shim 4286a5b277225c17fc2b07c6a3bda94ad3b5bde209a806ee71c4c85038df0488" address="unix:///run/containerd/s/aab061b05280ad5f8f577da9e27d8657ab11618dcf78cb7b3b82703659b75a81" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:21:50.158835 containerd[1592]: time="2025-07-10T00:21:50.158340844Z" level=info msg="connecting to shim 5e229d60257a98f1bf805eda93bc45aac9b30fcc0c2e2e1964e6846ba364d4c5" address="unix:///run/containerd/s/2fa76852c9b24fd07bbe0362bd1a98622985b738b29d97a5a926245e93d8f5e9" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:21:50.203215 kubelet[2394]: E0710 00:21:50.203160 2394 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.84:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:21:50.271770 containerd[1592]: time="2025-07-10T00:21:50.271727931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a0b285b368242bd781918d613f095493,Namespace:kube-system,Attempt:0,} returns sandbox id \"16fa1d9a7e74ba8dbfd8dbf1160b1bbaacfcae112d1fc01aa2741f15dad60e8f\"" Jul 10 00:21:50.273238 systemd[1]: Started cri-containerd-4286a5b277225c17fc2b07c6a3bda94ad3b5bde209a806ee71c4c85038df0488.scope - libcontainer container 4286a5b277225c17fc2b07c6a3bda94ad3b5bde209a806ee71c4c85038df0488. Jul 10 00:21:50.275234 containerd[1592]: time="2025-07-10T00:21:50.275187093Z" level=info msg="CreateContainer within sandbox \"16fa1d9a7e74ba8dbfd8dbf1160b1bbaacfcae112d1fc01aa2741f15dad60e8f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 00:21:50.286951 containerd[1592]: time="2025-07-10T00:21:50.286901709Z" level=info msg="Container f6dd61c84b9cba9ba199feb7e5a937a35e76ce98d4221d2dce6bbe1192cf6ca7: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:21:50.286953 systemd[1]: Started cri-containerd-5e229d60257a98f1bf805eda93bc45aac9b30fcc0c2e2e1964e6846ba364d4c5.scope - libcontainer container 5e229d60257a98f1bf805eda93bc45aac9b30fcc0c2e2e1964e6846ba364d4c5. Jul 10 00:21:50.300056 containerd[1592]: time="2025-07-10T00:21:50.299988032Z" level=info msg="CreateContainer within sandbox \"16fa1d9a7e74ba8dbfd8dbf1160b1bbaacfcae112d1fc01aa2741f15dad60e8f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f6dd61c84b9cba9ba199feb7e5a937a35e76ce98d4221d2dce6bbe1192cf6ca7\"" Jul 10 00:21:50.300921 containerd[1592]: time="2025-07-10T00:21:50.300900777Z" level=info msg="StartContainer for \"f6dd61c84b9cba9ba199feb7e5a937a35e76ce98d4221d2dce6bbe1192cf6ca7\"" Jul 10 00:21:50.303352 containerd[1592]: time="2025-07-10T00:21:50.303312409Z" level=info msg="connecting to shim f6dd61c84b9cba9ba199feb7e5a937a35e76ce98d4221d2dce6bbe1192cf6ca7" address="unix:///run/containerd/s/c8567540506ef742ebca7861a340d197ac0e3c938e602ab02c0944ec05620806" protocol=ttrpc version=3 Jul 10 00:21:50.347385 systemd[1]: Started cri-containerd-f6dd61c84b9cba9ba199feb7e5a937a35e76ce98d4221d2dce6bbe1192cf6ca7.scope - libcontainer container f6dd61c84b9cba9ba199feb7e5a937a35e76ce98d4221d2dce6bbe1192cf6ca7. 
Jul 10 00:21:50.350442 containerd[1592]: time="2025-07-10T00:21:50.350398469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"4286a5b277225c17fc2b07c6a3bda94ad3b5bde209a806ee71c4c85038df0488\"" Jul 10 00:21:50.354371 containerd[1592]: time="2025-07-10T00:21:50.354215320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e229d60257a98f1bf805eda93bc45aac9b30fcc0c2e2e1964e6846ba364d4c5\"" Jul 10 00:21:50.355086 containerd[1592]: time="2025-07-10T00:21:50.355058331Z" level=info msg="CreateContainer within sandbox \"4286a5b277225c17fc2b07c6a3bda94ad3b5bde209a806ee71c4c85038df0488\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 00:21:50.357829 containerd[1592]: time="2025-07-10T00:21:50.356897807Z" level=info msg="CreateContainer within sandbox \"5e229d60257a98f1bf805eda93bc45aac9b30fcc0c2e2e1964e6846ba364d4c5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 00:21:50.366642 containerd[1592]: time="2025-07-10T00:21:50.366595200Z" level=info msg="Container 336174d026d183214c4caba592291852a648e5a636810e5464505d3fc7a9beab: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:21:50.373744 containerd[1592]: time="2025-07-10T00:21:50.373664051Z" level=info msg="Container a00fc3764efef2b7268bdb72094e3bf3e06ad81c5cdb0dd7aaeca1e3e3e23a7c: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:21:50.377819 containerd[1592]: time="2025-07-10T00:21:50.377765814Z" level=info msg="CreateContainer within sandbox \"4286a5b277225c17fc2b07c6a3bda94ad3b5bde209a806ee71c4c85038df0488\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"336174d026d183214c4caba592291852a648e5a636810e5464505d3fc7a9beab\"" Jul 10 00:21:50.378556 containerd[1592]: time="2025-07-10T00:21:50.378515409Z" level=info msg="StartContainer for \"336174d026d183214c4caba592291852a648e5a636810e5464505d3fc7a9beab\"" Jul 10 00:21:50.379824 containerd[1592]: time="2025-07-10T00:21:50.379778568Z" level=info msg="connecting to shim 336174d026d183214c4caba592291852a648e5a636810e5464505d3fc7a9beab" address="unix:///run/containerd/s/aab061b05280ad5f8f577da9e27d8657ab11618dcf78cb7b3b82703659b75a81" protocol=ttrpc version=3 Jul 10 00:21:50.383649 containerd[1592]: time="2025-07-10T00:21:50.383595941Z" level=info msg="CreateContainer within sandbox \"5e229d60257a98f1bf805eda93bc45aac9b30fcc0c2e2e1964e6846ba364d4c5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a00fc3764efef2b7268bdb72094e3bf3e06ad81c5cdb0dd7aaeca1e3e3e23a7c\"" Jul 10 00:21:50.384397 containerd[1592]: time="2025-07-10T00:21:50.384354422Z" level=info msg="StartContainer for \"a00fc3764efef2b7268bdb72094e3bf3e06ad81c5cdb0dd7aaeca1e3e3e23a7c\"" Jul 10 00:21:50.386085 containerd[1592]: time="2025-07-10T00:21:50.386038141Z" level=info msg="connecting to shim a00fc3764efef2b7268bdb72094e3bf3e06ad81c5cdb0dd7aaeca1e3e3e23a7c" address="unix:///run/containerd/s/2fa76852c9b24fd07bbe0362bd1a98622985b738b29d97a5a926245e93d8f5e9" protocol=ttrpc version=3 Jul 10 00:21:50.403436 systemd[1]: Started cri-containerd-336174d026d183214c4caba592291852a648e5a636810e5464505d3fc7a9beab.scope - libcontainer container 336174d026d183214c4caba592291852a648e5a636810e5464505d3fc7a9beab. 
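
At this point containerd has created sandboxes and containers for the static kube-apiserver, kube-scheduler and kube-controller-manager pods. A short sketch, under the same containerd 1.x module-path assumption as the earlier pull example, that lists those containers and their images from the k8s.io namespace:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	ctrs, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range ctrs {
		info, err := c.Info(ctx)
		if err != nil {
			log.Fatal(err)
		}
		// Prints the same IDs that appear in the cri-containerd-<id>.scope units above.
		fmt.Printf("%s  %s\n", c.ID(), info.Image)
	}
}
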
Jul 10 00:21:50.409456 systemd[1]: Started cri-containerd-a00fc3764efef2b7268bdb72094e3bf3e06ad81c5cdb0dd7aaeca1e3e3e23a7c.scope - libcontainer container a00fc3764efef2b7268bdb72094e3bf3e06ad81c5cdb0dd7aaeca1e3e3e23a7c. Jul 10 00:21:50.434556 containerd[1592]: time="2025-07-10T00:21:50.434467673Z" level=info msg="StartContainer for \"f6dd61c84b9cba9ba199feb7e5a937a35e76ce98d4221d2dce6bbe1192cf6ca7\" returns successfully" Jul 10 00:21:50.500603 containerd[1592]: time="2025-07-10T00:21:50.500524626Z" level=info msg="StartContainer for \"336174d026d183214c4caba592291852a648e5a636810e5464505d3fc7a9beab\" returns successfully" Jul 10 00:21:50.517714 containerd[1592]: time="2025-07-10T00:21:50.517638370Z" level=info msg="StartContainer for \"a00fc3764efef2b7268bdb72094e3bf3e06ad81c5cdb0dd7aaeca1e3e3e23a7c\" returns successfully" Jul 10 00:21:50.564079 kubelet[2394]: I0710 00:21:50.563604 2394 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:21:51.899255 kubelet[2394]: E0710 00:21:51.899189 2394 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 10 00:21:51.991791 kubelet[2394]: I0710 00:21:51.991745 2394 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 10 00:21:52.092860 kubelet[2394]: I0710 00:21:52.092775 2394 apiserver.go:52] "Watching apiserver" Jul 10 00:21:52.103157 kubelet[2394]: I0710 00:21:52.103100 2394 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 10 00:21:52.149882 kubelet[2394]: E0710 00:21:52.149715 2394 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 10 00:21:53.796178 update_engine[1576]: I20250710 00:21:53.796034 1576 update_attempter.cc:509] Updating boot flags... Jul 10 00:21:54.769431 systemd[1]: Reload requested from client PID 2690 ('systemctl') (unit session-7.scope)... Jul 10 00:21:54.769451 systemd[1]: Reloading... Jul 10 00:21:54.911073 zram_generator::config[2739]: No configuration found. Jul 10 00:21:55.211495 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:21:55.397392 systemd[1]: Reloading finished in 627 ms. Jul 10 00:21:55.430766 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:21:55.455228 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 00:21:55.455722 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:21:55.455811 systemd[1]: kubelet.service: Consumed 932ms CPU time, 132.5M memory peak. Jul 10 00:21:55.458826 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:21:55.728258 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:21:55.740517 (kubelet)[2778]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:21:55.792039 kubelet[2778]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 10 00:21:55.792039 kubelet[2778]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 10 00:21:55.792039 kubelet[2778]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:21:55.792589 kubelet[2778]: I0710 00:21:55.792070 2778 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:21:55.802216 kubelet[2778]: I0710 00:21:55.802160 2778 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 10 00:21:55.802216 kubelet[2778]: I0710 00:21:55.802193 2778 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:21:55.802513 kubelet[2778]: I0710 00:21:55.802484 2778 server.go:934] "Client rotation is on, will bootstrap in background" Jul 10 00:21:55.804181 kubelet[2778]: I0710 00:21:55.804148 2778 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 10 00:21:55.806515 kubelet[2778]: I0710 00:21:55.806464 2778 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:21:55.811183 kubelet[2778]: I0710 00:21:55.811152 2778 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 00:21:55.817228 kubelet[2778]: I0710 00:21:55.817156 2778 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 10 00:21:55.817526 kubelet[2778]: I0710 00:21:55.817279 2778 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 10 00:21:55.817526 kubelet[2778]: I0710 00:21:55.817384 2778 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:21:55.817607 kubelet[2778]: I0710 00:21:55.817410 2778 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:21:55.817607 kubelet[2778]: I0710 00:21:55.817586 2778 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:21:55.817607 kubelet[2778]: I0710 00:21:55.817597 2778 container_manager_linux.go:300] "Creating device plugin manager" Jul 10 00:21:55.817786 kubelet[2778]: I0710 00:21:55.817628 2778 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:21:55.817786 kubelet[2778]: I0710 00:21:55.817759 2778 kubelet.go:408] "Attempting to sync node with API server" Jul 10 00:21:55.817786 kubelet[2778]: I0710 00:21:55.817780 2778 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:21:55.817901 kubelet[2778]: I0710 00:21:55.817827 2778 kubelet.go:314] "Adding apiserver pod source" Jul 10 00:21:55.817901 kubelet[2778]: I0710 00:21:55.817843 2778 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:21:55.818683 kubelet[2778]: I0710 00:21:55.818644 2778 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 10 00:21:55.819396 kubelet[2778]: I0710 00:21:55.819185 2778 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:21:55.820325 kubelet[2778]: I0710 00:21:55.820304 2778 server.go:1274] "Started kubelet" Jul 10 00:21:55.821718 kubelet[2778]: I0710 00:21:55.821668 2778 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:21:55.822007 kubelet[2778]: I0710 00:21:55.821985 2778 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:21:55.824648 kubelet[2778]: I0710 00:21:55.820531 2778 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:21:55.826876 kubelet[2778]: I0710 00:21:55.826749 2778 server.go:449] "Adding debug handlers to kubelet server" Jul 10 00:21:55.830617 kubelet[2778]: I0710 00:21:55.830440 2778 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:21:55.833431 kubelet[2778]: I0710 00:21:55.833387 2778 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:21:55.835658 kubelet[2778]: I0710 00:21:55.835629 2778 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 10 00:21:55.837952 kubelet[2778]: I0710 00:21:55.837920 2778 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:21:55.838077 kubelet[2778]: I0710 00:21:55.838046 2778 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:21:55.838310 kubelet[2778]: I0710 00:21:55.838282 2778 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 10 00:21:55.838310 kubelet[2778]: E0710 00:21:55.835626 2778 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:21:55.838779 kubelet[2778]: I0710 00:21:55.838463 2778 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:21:55.839888 kubelet[2778]: I0710 00:21:55.839841 2778 factory.go:221] Registration of the containerd container factory successfully Jul 10 00:21:55.853165 kubelet[2778]: I0710 00:21:55.853124 2778 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 00:21:55.856820 kubelet[2778]: I0710 00:21:55.856793 2778 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 10 00:21:55.857113 kubelet[2778]: I0710 00:21:55.857089 2778 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 10 00:21:55.857247 kubelet[2778]: I0710 00:21:55.857215 2778 kubelet.go:2321] "Starting kubelet main sync loop" Jul 10 00:21:55.857394 kubelet[2778]: E0710 00:21:55.857363 2778 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:21:55.891862 kubelet[2778]: I0710 00:21:55.891354 2778 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 10 00:21:55.891862 kubelet[2778]: I0710 00:21:55.891379 2778 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 10 00:21:55.891862 kubelet[2778]: I0710 00:21:55.891399 2778 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:21:55.891862 kubelet[2778]: I0710 00:21:55.891538 2778 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 00:21:55.891862 kubelet[2778]: I0710 00:21:55.891548 2778 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 00:21:55.891862 kubelet[2778]: I0710 00:21:55.891564 2778 policy_none.go:49] "None policy: Start" Jul 10 00:21:55.893445 kubelet[2778]: I0710 00:21:55.893428 2778 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 10 00:21:55.893697 kubelet[2778]: I0710 00:21:55.893685 2778 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:21:55.893997 kubelet[2778]: I0710 00:21:55.893981 2778 state_mem.go:75] "Updated machine memory state" Jul 10 00:21:55.899773 kubelet[2778]: I0710 00:21:55.899706 2778 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:21:55.900275 kubelet[2778]: I0710 00:21:55.900225 2778 eviction_manager.go:189] "Eviction manager: 
starting control loop" Jul 10 00:21:55.900394 kubelet[2778]: I0710 00:21:55.900241 2778 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:21:55.900813 kubelet[2778]: I0710 00:21:55.900595 2778 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:21:55.968893 kubelet[2778]: E0710 00:21:55.968820 2778 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 10 00:21:56.005709 kubelet[2778]: I0710 00:21:56.005532 2778 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:21:56.039236 kubelet[2778]: I0710 00:21:56.039168 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a0b285b368242bd781918d613f095493-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a0b285b368242bd781918d613f095493\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:21:56.039236 kubelet[2778]: I0710 00:21:56.039211 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:21:56.039236 kubelet[2778]: I0710 00:21:56.039229 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:21:56.039236 kubelet[2778]: I0710 00:21:56.039247 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:21:56.039236 kubelet[2778]: I0710 00:21:56.039266 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a0b285b368242bd781918d613f095493-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a0b285b368242bd781918d613f095493\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:21:56.039569 kubelet[2778]: I0710 00:21:56.039281 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a0b285b368242bd781918d613f095493-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a0b285b368242bd781918d613f095493\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:21:56.039569 kubelet[2778]: I0710 00:21:56.039294 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:21:56.039569 kubelet[2778]: I0710 00:21:56.039307 2778 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:21:56.039569 kubelet[2778]: I0710 00:21:56.039402 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 10 00:21:56.186544 kubelet[2778]: I0710 00:21:56.186483 2778 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 10 00:21:56.186724 kubelet[2778]: I0710 00:21:56.186601 2778 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 10 00:21:56.873278 kubelet[2778]: I0710 00:21:56.873175 2778 apiserver.go:52] "Watching apiserver" Jul 10 00:21:56.886113 kubelet[2778]: E0710 00:21:56.885973 2778 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 10 00:21:56.896188 kubelet[2778]: I0710 00:21:56.895643 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.895500358 podStartE2EDuration="1.895500358s" podCreationTimestamp="2025-07-10 00:21:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:21:56.895500488 +0000 UTC m=+1.149394328" watchObservedRunningTime="2025-07-10 00:21:56.895500358 +0000 UTC m=+1.149394198" Jul 10 00:21:56.919091 kubelet[2778]: I0710 00:21:56.918996 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.918972989 podStartE2EDuration="1.918972989s" podCreationTimestamp="2025-07-10 00:21:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:21:56.908416636 +0000 UTC m=+1.162310476" watchObservedRunningTime="2025-07-10 00:21:56.918972989 +0000 UTC m=+1.172866829" Jul 10 00:21:56.938885 kubelet[2778]: I0710 00:21:56.938828 2778 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 10 00:22:01.224806 kubelet[2778]: I0710 00:22:01.224749 2778 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 00:22:01.225510 containerd[1592]: time="2025-07-10T00:22:01.225211676Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 10 00:22:01.225839 kubelet[2778]: I0710 00:22:01.225511 2778 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 00:22:01.905423 kubelet[2778]: I0710 00:22:01.905323 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=7.90529568 podStartE2EDuration="7.90529568s" podCreationTimestamp="2025-07-10 00:21:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:21:56.919400798 +0000 UTC m=+1.173294638" watchObservedRunningTime="2025-07-10 00:22:01.90529568 +0000 UTC m=+6.159189520" Jul 10 00:22:01.915969 systemd[1]: Created slice kubepods-besteffort-podd6433e91_5eaf_4899_93dd_ad1f4eb77afd.slice - libcontainer container kubepods-besteffort-podd6433e91_5eaf_4899_93dd_ad1f4eb77afd.slice. Jul 10 00:22:02.008921 kubelet[2778]: I0710 00:22:02.008832 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6433e91-5eaf-4899-93dd-ad1f4eb77afd-lib-modules\") pod \"kube-proxy-tlx5m\" (UID: \"d6433e91-5eaf-4899-93dd-ad1f4eb77afd\") " pod="kube-system/kube-proxy-tlx5m" Jul 10 00:22:02.008921 kubelet[2778]: I0710 00:22:02.008898 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d6433e91-5eaf-4899-93dd-ad1f4eb77afd-kube-proxy\") pod \"kube-proxy-tlx5m\" (UID: \"d6433e91-5eaf-4899-93dd-ad1f4eb77afd\") " pod="kube-system/kube-proxy-tlx5m" Jul 10 00:22:02.008921 kubelet[2778]: I0710 00:22:02.008918 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6433e91-5eaf-4899-93dd-ad1f4eb77afd-xtables-lock\") pod \"kube-proxy-tlx5m\" (UID: \"d6433e91-5eaf-4899-93dd-ad1f4eb77afd\") " pod="kube-system/kube-proxy-tlx5m" Jul 10 00:22:02.008921 kubelet[2778]: I0710 00:22:02.008939 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpwvf\" (UniqueName: \"kubernetes.io/projected/d6433e91-5eaf-4899-93dd-ad1f4eb77afd-kube-api-access-dpwvf\") pod \"kube-proxy-tlx5m\" (UID: \"d6433e91-5eaf-4899-93dd-ad1f4eb77afd\") " pod="kube-system/kube-proxy-tlx5m" Jul 10 00:22:02.175829 systemd[1]: Created slice kubepods-besteffort-pod652dffa6_fa74_49c9_8de5_3d918df1eea9.slice - libcontainer container kubepods-besteffort-pod652dffa6_fa74_49c9_8de5_3d918df1eea9.slice. 
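The kubepods-besteffort-pod…slice units that systemd reports above are derived straight from the pod UID: dashes become underscores and the UID is wrapped in the besteffort slice prefix, which is how the cgroup for kube-proxy-tlx5m (UID d6433e91-5eaf-4899-93dd-ad1f4eb77afd) gets its name. A tiny illustrative helper (the function is ours, not kubelet code):

    def besteffort_slice_name(pod_uid):
        """Transient systemd slice name for a BestEffort pod, as seen in this log:
        dashes in the pod UID are replaced with underscores."""
        return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"

    print(besteffort_slice_name("d6433e91-5eaf-4899-93dd-ad1f4eb77afd"))
    # kubepods-besteffort-podd6433e91_5eaf_4899_93dd_ad1f4eb77afd.slice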
Jul 10 00:22:02.209875 kubelet[2778]: I0710 00:22:02.209751 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/652dffa6-fa74-49c9-8de5-3d918df1eea9-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-9d2bf\" (UID: \"652dffa6-fa74-49c9-8de5-3d918df1eea9\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-9d2bf" Jul 10 00:22:02.209875 kubelet[2778]: I0710 00:22:02.209823 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jzhs\" (UniqueName: \"kubernetes.io/projected/652dffa6-fa74-49c9-8de5-3d918df1eea9-kube-api-access-6jzhs\") pod \"tigera-operator-5bf8dfcb4-9d2bf\" (UID: \"652dffa6-fa74-49c9-8de5-3d918df1eea9\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-9d2bf" Jul 10 00:22:02.236933 containerd[1592]: time="2025-07-10T00:22:02.236867463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tlx5m,Uid:d6433e91-5eaf-4899-93dd-ad1f4eb77afd,Namespace:kube-system,Attempt:0,}" Jul 10 00:22:02.286554 containerd[1592]: time="2025-07-10T00:22:02.286478473Z" level=info msg="connecting to shim a758e84c2c2b083d503629eb2082ad2d0100f6beb568897e4f9a4dcfd7a37fb5" address="unix:///run/containerd/s/9e1c071ea3caf734997d3b1742deb6c600a1e2f701d25f040bde2d548ae87aca" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:22:02.345348 systemd[1]: Started cri-containerd-a758e84c2c2b083d503629eb2082ad2d0100f6beb568897e4f9a4dcfd7a37fb5.scope - libcontainer container a758e84c2c2b083d503629eb2082ad2d0100f6beb568897e4f9a4dcfd7a37fb5. Jul 10 00:22:02.380526 containerd[1592]: time="2025-07-10T00:22:02.380459014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tlx5m,Uid:d6433e91-5eaf-4899-93dd-ad1f4eb77afd,Namespace:kube-system,Attempt:0,} returns sandbox id \"a758e84c2c2b083d503629eb2082ad2d0100f6beb568897e4f9a4dcfd7a37fb5\"" Jul 10 00:22:02.383808 containerd[1592]: time="2025-07-10T00:22:02.383750073Z" level=info msg="CreateContainer within sandbox \"a758e84c2c2b083d503629eb2082ad2d0100f6beb568897e4f9a4dcfd7a37fb5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 00:22:02.398619 containerd[1592]: time="2025-07-10T00:22:02.398530079Z" level=info msg="Container d2358473b4fd7e313f01d2963c2b6a9ebb6c61d502711e404284646b8054b05d: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:02.403445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1246335948.mount: Deactivated successfully. 
Jul 10 00:22:02.410202 containerd[1592]: time="2025-07-10T00:22:02.410146145Z" level=info msg="CreateContainer within sandbox \"a758e84c2c2b083d503629eb2082ad2d0100f6beb568897e4f9a4dcfd7a37fb5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d2358473b4fd7e313f01d2963c2b6a9ebb6c61d502711e404284646b8054b05d\"" Jul 10 00:22:02.410973 containerd[1592]: time="2025-07-10T00:22:02.410874168Z" level=info msg="StartContainer for \"d2358473b4fd7e313f01d2963c2b6a9ebb6c61d502711e404284646b8054b05d\"" Jul 10 00:22:02.412660 containerd[1592]: time="2025-07-10T00:22:02.412624801Z" level=info msg="connecting to shim d2358473b4fd7e313f01d2963c2b6a9ebb6c61d502711e404284646b8054b05d" address="unix:///run/containerd/s/9e1c071ea3caf734997d3b1742deb6c600a1e2f701d25f040bde2d548ae87aca" protocol=ttrpc version=3 Jul 10 00:22:02.444269 systemd[1]: Started cri-containerd-d2358473b4fd7e313f01d2963c2b6a9ebb6c61d502711e404284646b8054b05d.scope - libcontainer container d2358473b4fd7e313f01d2963c2b6a9ebb6c61d502711e404284646b8054b05d. Jul 10 00:22:02.480964 containerd[1592]: time="2025-07-10T00:22:02.480876269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-9d2bf,Uid:652dffa6-fa74-49c9-8de5-3d918df1eea9,Namespace:tigera-operator,Attempt:0,}" Jul 10 00:22:02.746159 containerd[1592]: time="2025-07-10T00:22:02.746103358Z" level=info msg="StartContainer for \"d2358473b4fd7e313f01d2963c2b6a9ebb6c61d502711e404284646b8054b05d\" returns successfully" Jul 10 00:22:03.178818 kubelet[2778]: I0710 00:22:03.178569 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tlx5m" podStartSLOduration=2.1785373359999998 podStartE2EDuration="2.178537336s" podCreationTimestamp="2025-07-10 00:22:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:22:02.928960671 +0000 UTC m=+7.182854511" watchObservedRunningTime="2025-07-10 00:22:03.178537336 +0000 UTC m=+7.432431176" Jul 10 00:22:03.305822 containerd[1592]: time="2025-07-10T00:22:03.305752561Z" level=info msg="connecting to shim 4ced063523178351fb9b52f8bc65749cec0fc2a9a0cc4836e85b7714155dc885" address="unix:///run/containerd/s/572be5e3d0a6dbca31fdd7e059d9470de085df590842c17d5b2e80bf1131ee2b" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:22:03.339294 systemd[1]: Started cri-containerd-4ced063523178351fb9b52f8bc65749cec0fc2a9a0cc4836e85b7714155dc885.scope - libcontainer container 4ced063523178351fb9b52f8bc65749cec0fc2a9a0cc4836e85b7714155dc885. Jul 10 00:22:03.475115 containerd[1592]: time="2025-07-10T00:22:03.475067715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-9d2bf,Uid:652dffa6-fa74-49c9-8de5-3d918df1eea9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4ced063523178351fb9b52f8bc65749cec0fc2a9a0cc4836e85b7714155dc885\"" Jul 10 00:22:03.476769 containerd[1592]: time="2025-07-10T00:22:03.476740419Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 10 00:22:05.448901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2161727392.mount: Deactivated successfully. 
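Worth noting in the containerd entries above: the pause sandbox a758e84c… and the kube-proxy container d2358473… connect to the same shim socket under /run/containerd/s/, so this pod's containers are served by the same shim. A rough Python sketch that pairs IDs with shim sockets, matched against the exact "connecting to shim" message format used here (shim_sockets is our name):

    import re

    SHIM_RE = re.compile(
        r'msg="connecting to shim (?P<cid>[0-9a-f]{64})" address="(?P<addr>unix://[^"]+)"')

    def shim_sockets(journal_text):
        """Map sandbox/container IDs to the ttrpc shim socket they attach to."""
        return {m.group("cid"): m.group("addr") for m in SHIM_RE.finditer(journal_text)}

    # Feeding it the two "connecting to shim" messages above yields one distinct
    # socket (…/s/9e1c071ea3caf…) shared by both the sandbox and the container.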
Jul 10 00:22:07.254726 containerd[1592]: time="2025-07-10T00:22:07.254608072Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:07.284041 containerd[1592]: time="2025-07-10T00:22:07.283874177Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 10 00:22:07.329773 containerd[1592]: time="2025-07-10T00:22:07.329675756Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:07.348484 containerd[1592]: time="2025-07-10T00:22:07.348403461Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:07.349449 containerd[1592]: time="2025-07-10T00:22:07.349327651Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 3.872551405s" Jul 10 00:22:07.349449 containerd[1592]: time="2025-07-10T00:22:07.349383056Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 10 00:22:07.352621 containerd[1592]: time="2025-07-10T00:22:07.352568536Z" level=info msg="CreateContainer within sandbox \"4ced063523178351fb9b52f8bc65749cec0fc2a9a0cc4836e85b7714155dc885\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 10 00:22:07.738265 containerd[1592]: time="2025-07-10T00:22:07.738177743Z" level=info msg="Container 8177bf08a171a2d682a46844ad0bdb9ffc563f255aa8d38c3a507cb14a099f4a: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:07.964443 containerd[1592]: time="2025-07-10T00:22:07.964364638Z" level=info msg="CreateContainer within sandbox \"4ced063523178351fb9b52f8bc65749cec0fc2a9a0cc4836e85b7714155dc885\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8177bf08a171a2d682a46844ad0bdb9ffc563f255aa8d38c3a507cb14a099f4a\"" Jul 10 00:22:07.965115 containerd[1592]: time="2025-07-10T00:22:07.965055720Z" level=info msg="StartContainer for \"8177bf08a171a2d682a46844ad0bdb9ffc563f255aa8d38c3a507cb14a099f4a\"" Jul 10 00:22:07.966307 containerd[1592]: time="2025-07-10T00:22:07.966252314Z" level=info msg="connecting to shim 8177bf08a171a2d682a46844ad0bdb9ffc563f255aa8d38c3a507cb14a099f4a" address="unix:///run/containerd/s/572be5e3d0a6dbca31fdd7e059d9470de085df590842c17d5b2e80bf1131ee2b" protocol=ttrpc version=3 Jul 10 00:22:08.024268 systemd[1]: Started cri-containerd-8177bf08a171a2d682a46844ad0bdb9ffc563f255aa8d38c3a507cb14a099f4a.scope - libcontainer container 8177bf08a171a2d682a46844ad0bdb9ffc563f255aa8d38c3a507cb14a099f4a. 
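For a sense of scale, containerd reports 25,056,543 bytes read for the operator image against a pull time of 3.872551405 s, i.e. roughly 6 MiB/s from quay.io in this particular run (a figure for this run only, not a statement about the registry). The back-of-envelope check:

    bytes_read = 25_056_543      # "bytes read=25056543" in the log above
    pull_seconds = 3.872551405   # "in 3.872551405s" in the log above

    print(f"{bytes_read / pull_seconds / 2**20:.1f} MiB/s")  # ~6.2 MiB/s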
Jul 10 00:22:08.135411 containerd[1592]: time="2025-07-10T00:22:08.135355821Z" level=info msg="StartContainer for \"8177bf08a171a2d682a46844ad0bdb9ffc563f255aa8d38c3a507cb14a099f4a\" returns successfully" Jul 10 00:22:08.917038 kubelet[2778]: I0710 00:22:08.916466 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-9d2bf" podStartSLOduration=3.042149275 podStartE2EDuration="6.916439857s" podCreationTimestamp="2025-07-10 00:22:02 +0000 UTC" firstStartedPulling="2025-07-10 00:22:03.476294248 +0000 UTC m=+7.730188088" lastFinishedPulling="2025-07-10 00:22:07.35058483 +0000 UTC m=+11.604478670" observedRunningTime="2025-07-10 00:22:08.916344217 +0000 UTC m=+13.170238057" watchObservedRunningTime="2025-07-10 00:22:08.916439857 +0000 UTC m=+13.170333697" Jul 10 00:22:14.278895 sudo[1813]: pam_unix(sudo:session): session closed for user root Jul 10 00:22:14.282644 sshd[1812]: Connection closed by 10.0.0.1 port 35582 Jul 10 00:22:14.283419 sshd-session[1810]: pam_unix(sshd:session): session closed for user core Jul 10 00:22:14.289641 systemd[1]: sshd@7-10.0.0.84:22-10.0.0.1:35582.service: Deactivated successfully. Jul 10 00:22:14.292540 systemd[1]: session-7.scope: Deactivated successfully. Jul 10 00:22:14.292803 systemd[1]: session-7.scope: Consumed 5.276s CPU time, 220.9M memory peak. Jul 10 00:22:14.294915 systemd-logind[1573]: Session 7 logged out. Waiting for processes to exit. Jul 10 00:22:14.296905 systemd-logind[1573]: Removed session 7. Jul 10 00:22:18.070663 systemd[1]: Created slice kubepods-besteffort-pod7493d196_6e5b_4a4e_adac_44a00ae395ae.slice - libcontainer container kubepods-besteffort-pod7493d196_6e5b_4a4e_adac_44a00ae395ae.slice. Jul 10 00:22:18.117969 kubelet[2778]: I0710 00:22:18.117811 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7493d196-6e5b-4a4e-adac-44a00ae395ae-tigera-ca-bundle\") pod \"calico-typha-76c477b9fc-dts42\" (UID: \"7493d196-6e5b-4a4e-adac-44a00ae395ae\") " pod="calico-system/calico-typha-76c477b9fc-dts42" Jul 10 00:22:18.117969 kubelet[2778]: I0710 00:22:18.117871 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5q6c\" (UniqueName: \"kubernetes.io/projected/7493d196-6e5b-4a4e-adac-44a00ae395ae-kube-api-access-z5q6c\") pod \"calico-typha-76c477b9fc-dts42\" (UID: \"7493d196-6e5b-4a4e-adac-44a00ae395ae\") " pod="calico-system/calico-typha-76c477b9fc-dts42" Jul 10 00:22:18.117969 kubelet[2778]: I0710 00:22:18.117889 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7493d196-6e5b-4a4e-adac-44a00ae395ae-typha-certs\") pod \"calico-typha-76c477b9fc-dts42\" (UID: \"7493d196-6e5b-4a4e-adac-44a00ae395ae\") " pod="calico-system/calico-typha-76c477b9fc-dts42" Jul 10 00:22:18.417560 containerd[1592]: time="2025-07-10T00:22:18.417387609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76c477b9fc-dts42,Uid:7493d196-6e5b-4a4e-adac-44a00ae395ae,Namespace:calico-system,Attempt:0,}" Jul 10 00:22:18.462885 systemd[1]: Created slice kubepods-besteffort-podd9bc261e_81d4_40fa_89a4_1eedd98cec56.slice - libcontainer container kubepods-besteffort-podd9bc261e_81d4_40fa_89a4_1eedd98cec56.slice. 
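The pod_startup_latency_tracker figures scattered through this log line up exactly if podStartE2EDuration is read as watchObservedRunningTime minus podCreationTimestamp, with podStartSLOduration additionally excluding the image-pull window; that is why the two values coincide for the pre-pulled control-plane pods (zero pull timestamps) but differ for tigera-operator. Re-deriving the tigera-operator entry above from its logged timestamps (the ts helper is ours; nanoseconds are truncated to microseconds):

    from datetime import datetime, timezone

    def ts(s):
        """Parse '2025-07-10 00:22:08.916439857 +0000 UTC' style timestamps."""
        date, time_, *_ = s.split()
        if "." in time_:
            hms, frac = time_.split(".")
            time_ = f"{hms}.{frac[:6]}"       # truncate ns -> us
        else:
            time_ += ".000000"
        return datetime.strptime(f"{date} {time_}",
                                 "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

    created   = ts("2025-07-10 00:22:02 +0000 UTC")              # podCreationTimestamp
    observed  = ts("2025-07-10 00:22:08.916439857 +0000 UTC")    # watchObservedRunningTime
    pull_from = ts("2025-07-10 00:22:03.476294248 +0000 UTC")    # firstStartedPulling
    pull_to   = ts("2025-07-10 00:22:07.35058483 +0000 UTC")     # lastFinishedPulling

    e2e = (observed - created).total_seconds()
    slo = e2e - (pull_to - pull_from).total_seconds()
    print(e2e, slo)   # ~6.916440 and ~3.042149, matching the logged durations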
Jul 10 00:22:18.478890 containerd[1592]: time="2025-07-10T00:22:18.478513689Z" level=info msg="connecting to shim 3290561d44c1fb02a89bbeb3225b3c37bca343e3b774c6593bbef383a6809c6c" address="unix:///run/containerd/s/2be9cdc70a010623458c689dce34aac5f52d87099a4dfe3c6b088bf4a13443a9" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:22:18.523799 kubelet[2778]: I0710 00:22:18.523189 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d9bc261e-81d4-40fa-89a4-1eedd98cec56-var-lib-calico\") pod \"calico-node-vkndk\" (UID: \"d9bc261e-81d4-40fa-89a4-1eedd98cec56\") " pod="calico-system/calico-node-vkndk" Jul 10 00:22:18.523799 kubelet[2778]: I0710 00:22:18.523299 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9bc261e-81d4-40fa-89a4-1eedd98cec56-lib-modules\") pod \"calico-node-vkndk\" (UID: \"d9bc261e-81d4-40fa-89a4-1eedd98cec56\") " pod="calico-system/calico-node-vkndk" Jul 10 00:22:18.523799 kubelet[2778]: I0710 00:22:18.523334 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9bc261e-81d4-40fa-89a4-1eedd98cec56-xtables-lock\") pod \"calico-node-vkndk\" (UID: \"d9bc261e-81d4-40fa-89a4-1eedd98cec56\") " pod="calico-system/calico-node-vkndk" Jul 10 00:22:18.523799 kubelet[2778]: I0710 00:22:18.523398 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d9bc261e-81d4-40fa-89a4-1eedd98cec56-cni-log-dir\") pod \"calico-node-vkndk\" (UID: \"d9bc261e-81d4-40fa-89a4-1eedd98cec56\") " pod="calico-system/calico-node-vkndk" Jul 10 00:22:18.523799 kubelet[2778]: I0710 00:22:18.523454 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9bc261e-81d4-40fa-89a4-1eedd98cec56-tigera-ca-bundle\") pod \"calico-node-vkndk\" (UID: \"d9bc261e-81d4-40fa-89a4-1eedd98cec56\") " pod="calico-system/calico-node-vkndk" Jul 10 00:22:18.524158 kubelet[2778]: I0710 00:22:18.523470 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d9bc261e-81d4-40fa-89a4-1eedd98cec56-var-run-calico\") pod \"calico-node-vkndk\" (UID: \"d9bc261e-81d4-40fa-89a4-1eedd98cec56\") " pod="calico-system/calico-node-vkndk" Jul 10 00:22:18.524158 kubelet[2778]: I0710 00:22:18.523488 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d9bc261e-81d4-40fa-89a4-1eedd98cec56-cni-bin-dir\") pod \"calico-node-vkndk\" (UID: \"d9bc261e-81d4-40fa-89a4-1eedd98cec56\") " pod="calico-system/calico-node-vkndk" Jul 10 00:22:18.524158 kubelet[2778]: I0710 00:22:18.523536 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d9bc261e-81d4-40fa-89a4-1eedd98cec56-node-certs\") pod \"calico-node-vkndk\" (UID: \"d9bc261e-81d4-40fa-89a4-1eedd98cec56\") " pod="calico-system/calico-node-vkndk" Jul 10 00:22:18.524158 kubelet[2778]: I0710 00:22:18.523566 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-s5xnn\" (UniqueName: \"kubernetes.io/projected/d9bc261e-81d4-40fa-89a4-1eedd98cec56-kube-api-access-s5xnn\") pod \"calico-node-vkndk\" (UID: \"d9bc261e-81d4-40fa-89a4-1eedd98cec56\") " pod="calico-system/calico-node-vkndk" Jul 10 00:22:18.524158 kubelet[2778]: I0710 00:22:18.523615 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d9bc261e-81d4-40fa-89a4-1eedd98cec56-cni-net-dir\") pod \"calico-node-vkndk\" (UID: \"d9bc261e-81d4-40fa-89a4-1eedd98cec56\") " pod="calico-system/calico-node-vkndk" Jul 10 00:22:18.524346 kubelet[2778]: I0710 00:22:18.523707 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d9bc261e-81d4-40fa-89a4-1eedd98cec56-flexvol-driver-host\") pod \"calico-node-vkndk\" (UID: \"d9bc261e-81d4-40fa-89a4-1eedd98cec56\") " pod="calico-system/calico-node-vkndk" Jul 10 00:22:18.524346 kubelet[2778]: I0710 00:22:18.523728 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d9bc261e-81d4-40fa-89a4-1eedd98cec56-policysync\") pod \"calico-node-vkndk\" (UID: \"d9bc261e-81d4-40fa-89a4-1eedd98cec56\") " pod="calico-system/calico-node-vkndk" Jul 10 00:22:18.533558 systemd[1]: Started cri-containerd-3290561d44c1fb02a89bbeb3225b3c37bca343e3b774c6593bbef383a6809c6c.scope - libcontainer container 3290561d44c1fb02a89bbeb3225b3c37bca343e3b774c6593bbef383a6809c6c. Jul 10 00:22:18.597831 containerd[1592]: time="2025-07-10T00:22:18.597780879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76c477b9fc-dts42,Uid:7493d196-6e5b-4a4e-adac-44a00ae395ae,Namespace:calico-system,Attempt:0,} returns sandbox id \"3290561d44c1fb02a89bbeb3225b3c37bca343e3b774c6593bbef383a6809c6c\"" Jul 10 00:22:18.599940 containerd[1592]: time="2025-07-10T00:22:18.599887347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 10 00:22:18.628634 kubelet[2778]: E0710 00:22:18.628425 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.628634 kubelet[2778]: W0710 00:22:18.628488 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.630364 kubelet[2778]: E0710 00:22:18.630298 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.631095 kubelet[2778]: E0710 00:22:18.630975 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.631095 kubelet[2778]: W0710 00:22:18.631003 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.631095 kubelet[2778]: E0710 00:22:18.631049 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:22:18.637588 kubelet[2778]: E0710 00:22:18.637549 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.637588 kubelet[2778]: W0710 00:22:18.637579 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.637719 kubelet[2778]: E0710 00:22:18.637606 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.690507 kubelet[2778]: E0710 00:22:18.689940 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88rhw" podUID="a12e8b81-dd74-48fe-8fc0-6670947a4f3a" Jul 10 00:22:18.769434 containerd[1592]: time="2025-07-10T00:22:18.769378837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vkndk,Uid:d9bc261e-81d4-40fa-89a4-1eedd98cec56,Namespace:calico-system,Attempt:0,}" Jul 10 00:22:18.788296 kubelet[2778]: E0710 00:22:18.788257 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.788296 kubelet[2778]: W0710 00:22:18.788286 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.788296 kubelet[2778]: E0710 00:22:18.788324 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.788701 kubelet[2778]: E0710 00:22:18.788672 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.788744 kubelet[2778]: W0710 00:22:18.788699 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.788744 kubelet[2778]: E0710 00:22:18.788731 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.788990 kubelet[2778]: E0710 00:22:18.788972 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.788990 kubelet[2778]: W0710 00:22:18.788983 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.788990 kubelet[2778]: E0710 00:22:18.788993 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:22:18.789226 kubelet[2778]: E0710 00:22:18.789202 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.789226 kubelet[2778]: W0710 00:22:18.789218 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.789226 kubelet[2778]: E0710 00:22:18.789227 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.789436 kubelet[2778]: E0710 00:22:18.789411 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.789436 kubelet[2778]: W0710 00:22:18.789422 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.789759 kubelet[2778]: E0710 00:22:18.789454 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.789837 kubelet[2778]: E0710 00:22:18.789813 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.789837 kubelet[2778]: W0710 00:22:18.789824 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.789837 kubelet[2778]: E0710 00:22:18.789834 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.790157 kubelet[2778]: E0710 00:22:18.790092 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.790157 kubelet[2778]: W0710 00:22:18.790104 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.790157 kubelet[2778]: E0710 00:22:18.790142 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.791173 kubelet[2778]: E0710 00:22:18.791081 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.791173 kubelet[2778]: W0710 00:22:18.791093 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.791173 kubelet[2778]: E0710 00:22:18.791104 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:22:18.791940 kubelet[2778]: E0710 00:22:18.791903 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.791940 kubelet[2778]: W0710 00:22:18.791916 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.791940 kubelet[2778]: E0710 00:22:18.791927 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.792152 kubelet[2778]: E0710 00:22:18.792134 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.792152 kubelet[2778]: W0710 00:22:18.792145 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.792228 kubelet[2778]: E0710 00:22:18.792155 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.792969 kubelet[2778]: E0710 00:22:18.792912 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.792969 kubelet[2778]: W0710 00:22:18.792934 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.792969 kubelet[2778]: E0710 00:22:18.792948 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.793165 kubelet[2778]: E0710 00:22:18.793145 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.793165 kubelet[2778]: W0710 00:22:18.793157 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.793224 kubelet[2778]: E0710 00:22:18.793168 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.793681 kubelet[2778]: E0710 00:22:18.793653 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.793681 kubelet[2778]: W0710 00:22:18.793670 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.793681 kubelet[2778]: E0710 00:22:18.793680 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:22:18.794234 kubelet[2778]: E0710 00:22:18.794190 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.794234 kubelet[2778]: W0710 00:22:18.794205 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.794234 kubelet[2778]: E0710 00:22:18.794215 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.794454 kubelet[2778]: E0710 00:22:18.794402 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.794454 kubelet[2778]: W0710 00:22:18.794419 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.794454 kubelet[2778]: E0710 00:22:18.794428 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.795038 kubelet[2778]: E0710 00:22:18.794956 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.795038 kubelet[2778]: W0710 00:22:18.794971 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.795038 kubelet[2778]: E0710 00:22:18.794982 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.795348 kubelet[2778]: E0710 00:22:18.795273 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.795348 kubelet[2778]: W0710 00:22:18.795287 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.795348 kubelet[2778]: E0710 00:22:18.795296 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.795654 kubelet[2778]: E0710 00:22:18.795556 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.795654 kubelet[2778]: W0710 00:22:18.795566 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.795654 kubelet[2778]: E0710 00:22:18.795575 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:22:18.795760 kubelet[2778]: E0710 00:22:18.795735 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.795760 kubelet[2778]: W0710 00:22:18.795742 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.795760 kubelet[2778]: E0710 00:22:18.795751 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.795931 kubelet[2778]: E0710 00:22:18.795913 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.795931 kubelet[2778]: W0710 00:22:18.795926 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.796071 kubelet[2778]: E0710 00:22:18.795934 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.802588 containerd[1592]: time="2025-07-10T00:22:18.802528007Z" level=info msg="connecting to shim c8c01be4abbbe03c866d0d71097bce617baa782c3a058a7bcc7f0554f6f8fe88" address="unix:///run/containerd/s/2ba9bbdaccc69003742ec7d3065dfe7c3c28adf9ea7aa8af64ff67a88a7fbb24" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:22:18.825823 kubelet[2778]: E0710 00:22:18.825789 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.826174 kubelet[2778]: W0710 00:22:18.825980 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.826174 kubelet[2778]: E0710 00:22:18.826024 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.826174 kubelet[2778]: I0710 00:22:18.826064 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a12e8b81-dd74-48fe-8fc0-6670947a4f3a-registration-dir\") pod \"csi-node-driver-88rhw\" (UID: \"a12e8b81-dd74-48fe-8fc0-6670947a4f3a\") " pod="calico-system/csi-node-driver-88rhw" Jul 10 00:22:18.826329 kubelet[2778]: E0710 00:22:18.826290 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.826329 kubelet[2778]: W0710 00:22:18.826324 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.826397 kubelet[2778]: E0710 00:22:18.826350 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:22:18.826397 kubelet[2778]: I0710 00:22:18.826389 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a12e8b81-dd74-48fe-8fc0-6670947a4f3a-socket-dir\") pod \"csi-node-driver-88rhw\" (UID: \"a12e8b81-dd74-48fe-8fc0-6670947a4f3a\") " pod="calico-system/csi-node-driver-88rhw" Jul 10 00:22:18.826675 kubelet[2778]: E0710 00:22:18.826645 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.826675 kubelet[2778]: W0710 00:22:18.826659 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.826733 kubelet[2778]: E0710 00:22:18.826681 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.826733 kubelet[2778]: I0710 00:22:18.826695 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jppv7\" (UniqueName: \"kubernetes.io/projected/a12e8b81-dd74-48fe-8fc0-6670947a4f3a-kube-api-access-jppv7\") pod \"csi-node-driver-88rhw\" (UID: \"a12e8b81-dd74-48fe-8fc0-6670947a4f3a\") " pod="calico-system/csi-node-driver-88rhw" Jul 10 00:22:18.826961 kubelet[2778]: E0710 00:22:18.826931 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.826961 kubelet[2778]: W0710 00:22:18.826947 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.826961 kubelet[2778]: E0710 00:22:18.826960 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.827106 kubelet[2778]: I0710 00:22:18.826974 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a12e8b81-dd74-48fe-8fc0-6670947a4f3a-varrun\") pod \"csi-node-driver-88rhw\" (UID: \"a12e8b81-dd74-48fe-8fc0-6670947a4f3a\") " pod="calico-system/csi-node-driver-88rhw" Jul 10 00:22:18.827280 kubelet[2778]: E0710 00:22:18.827256 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.827280 kubelet[2778]: W0710 00:22:18.827273 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.827378 kubelet[2778]: E0710 00:22:18.827295 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:22:18.827535 kubelet[2778]: E0710 00:22:18.827515 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.827535 kubelet[2778]: W0710 00:22:18.827528 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.827601 kubelet[2778]: E0710 00:22:18.827548 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.827771 kubelet[2778]: E0710 00:22:18.827749 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.827771 kubelet[2778]: W0710 00:22:18.827766 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.827840 kubelet[2778]: E0710 00:22:18.827785 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.828073 kubelet[2778]: E0710 00:22:18.828052 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.828073 kubelet[2778]: W0710 00:22:18.828065 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.828172 kubelet[2778]: E0710 00:22:18.828150 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.828377 kubelet[2778]: E0710 00:22:18.828359 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.828407 kubelet[2778]: W0710 00:22:18.828375 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.828447 kubelet[2778]: E0710 00:22:18.828428 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.828628 kubelet[2778]: E0710 00:22:18.828611 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.828651 kubelet[2778]: W0710 00:22:18.828626 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.828718 kubelet[2778]: E0710 00:22:18.828689 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:22:18.828745 kubelet[2778]: I0710 00:22:18.828728 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a12e8b81-dd74-48fe-8fc0-6670947a4f3a-kubelet-dir\") pod \"csi-node-driver-88rhw\" (UID: \"a12e8b81-dd74-48fe-8fc0-6670947a4f3a\") " pod="calico-system/csi-node-driver-88rhw" Jul 10 00:22:18.828905 kubelet[2778]: E0710 00:22:18.828889 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.828905 kubelet[2778]: W0710 00:22:18.828902 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.829006 kubelet[2778]: E0710 00:22:18.828987 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.829189 kubelet[2778]: E0710 00:22:18.829171 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.829189 kubelet[2778]: W0710 00:22:18.829185 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.829244 kubelet[2778]: E0710 00:22:18.829195 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.829517 kubelet[2778]: E0710 00:22:18.829491 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.829517 kubelet[2778]: W0710 00:22:18.829505 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.829568 kubelet[2778]: E0710 00:22:18.829532 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.829751 kubelet[2778]: E0710 00:22:18.829728 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.829751 kubelet[2778]: W0710 00:22:18.829740 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.829751 kubelet[2778]: E0710 00:22:18.829750 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:22:18.830029 kubelet[2778]: E0710 00:22:18.829998 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.830069 kubelet[2778]: W0710 00:22:18.830049 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.830069 kubelet[2778]: E0710 00:22:18.830065 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.831209 systemd[1]: Started cri-containerd-c8c01be4abbbe03c866d0d71097bce617baa782c3a058a7bcc7f0554f6f8fe88.scope - libcontainer container c8c01be4abbbe03c866d0d71097bce617baa782c3a058a7bcc7f0554f6f8fe88. Jul 10 00:22:18.861887 containerd[1592]: time="2025-07-10T00:22:18.861828977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vkndk,Uid:d9bc261e-81d4-40fa-89a4-1eedd98cec56,Namespace:calico-system,Attempt:0,} returns sandbox id \"c8c01be4abbbe03c866d0d71097bce617baa782c3a058a7bcc7f0554f6f8fe88\"" Jul 10 00:22:18.930323 kubelet[2778]: E0710 00:22:18.930272 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.930323 kubelet[2778]: W0710 00:22:18.930295 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.930323 kubelet[2778]: E0710 00:22:18.930331 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.930573 kubelet[2778]: E0710 00:22:18.930558 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.930573 kubelet[2778]: W0710 00:22:18.930568 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.930648 kubelet[2778]: E0710 00:22:18.930583 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.930824 kubelet[2778]: E0710 00:22:18.930806 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.930824 kubelet[2778]: W0710 00:22:18.930819 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.930885 kubelet[2778]: E0710 00:22:18.930840 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:22:18.931201 kubelet[2778]: E0710 00:22:18.931160 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.931201 kubelet[2778]: W0710 00:22:18.931186 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.931438 kubelet[2778]: E0710 00:22:18.931219 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.931438 kubelet[2778]: E0710 00:22:18.931437 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.931494 kubelet[2778]: W0710 00:22:18.931446 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.931494 kubelet[2778]: E0710 00:22:18.931456 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.931690 kubelet[2778]: E0710 00:22:18.931670 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.931690 kubelet[2778]: W0710 00:22:18.931684 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.931758 kubelet[2778]: E0710 00:22:18.931701 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.931981 kubelet[2778]: E0710 00:22:18.931949 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.931981 kubelet[2778]: W0710 00:22:18.931966 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.932062 kubelet[2778]: E0710 00:22:18.932001 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.932206 kubelet[2778]: E0710 00:22:18.932189 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.932206 kubelet[2778]: W0710 00:22:18.932202 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.932255 kubelet[2778]: E0710 00:22:18.932231 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:22:18.932375 kubelet[2778]: E0710 00:22:18.932359 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.932375 kubelet[2778]: W0710 00:22:18.932369 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.932417 kubelet[2778]: E0710 00:22:18.932394 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.932572 kubelet[2778]: E0710 00:22:18.932555 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.932572 kubelet[2778]: W0710 00:22:18.932567 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.932628 kubelet[2778]: E0710 00:22:18.932599 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.932755 kubelet[2778]: E0710 00:22:18.932737 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.932755 kubelet[2778]: W0710 00:22:18.932750 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.932806 kubelet[2778]: E0710 00:22:18.932768 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.932966 kubelet[2778]: E0710 00:22:18.932949 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.932966 kubelet[2778]: W0710 00:22:18.932961 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.933057 kubelet[2778]: E0710 00:22:18.932973 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.933186 kubelet[2778]: E0710 00:22:18.933169 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.933186 kubelet[2778]: W0710 00:22:18.933182 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.933228 kubelet[2778]: E0710 00:22:18.933199 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:22:18.933471 kubelet[2778]: E0710 00:22:18.933452 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.933471 kubelet[2778]: W0710 00:22:18.933464 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.933529 kubelet[2778]: E0710 00:22:18.933480 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.933677 kubelet[2778]: E0710 00:22:18.933659 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.933677 kubelet[2778]: W0710 00:22:18.933671 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.933733 kubelet[2778]: E0710 00:22:18.933686 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.933881 kubelet[2778]: E0710 00:22:18.933863 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.933881 kubelet[2778]: W0710 00:22:18.933877 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.933932 kubelet[2778]: E0710 00:22:18.933908 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.934084 kubelet[2778]: E0710 00:22:18.934068 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.934084 kubelet[2778]: W0710 00:22:18.934079 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.934132 kubelet[2778]: E0710 00:22:18.934111 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.934287 kubelet[2778]: E0710 00:22:18.934270 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.934287 kubelet[2778]: W0710 00:22:18.934281 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.934354 kubelet[2778]: E0710 00:22:18.934319 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:22:18.934470 kubelet[2778]: E0710 00:22:18.934455 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.934470 kubelet[2778]: W0710 00:22:18.934465 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.934520 kubelet[2778]: E0710 00:22:18.934477 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.934681 kubelet[2778]: E0710 00:22:18.934665 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.934681 kubelet[2778]: W0710 00:22:18.934676 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.934723 kubelet[2778]: E0710 00:22:18.934688 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.934871 kubelet[2778]: E0710 00:22:18.934854 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.934871 kubelet[2778]: W0710 00:22:18.934866 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.934927 kubelet[2778]: E0710 00:22:18.934880 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.935128 kubelet[2778]: E0710 00:22:18.935109 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.935128 kubelet[2778]: W0710 00:22:18.935124 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.935197 kubelet[2778]: E0710 00:22:18.935142 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.935380 kubelet[2778]: E0710 00:22:18.935361 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.935380 kubelet[2778]: W0710 00:22:18.935376 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.935438 kubelet[2778]: E0710 00:22:18.935410 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:22:18.935576 kubelet[2778]: E0710 00:22:18.935560 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.935576 kubelet[2778]: W0710 00:22:18.935571 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.935628 kubelet[2778]: E0710 00:22:18.935602 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.935771 kubelet[2778]: E0710 00:22:18.935749 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.935814 kubelet[2778]: W0710 00:22:18.935763 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.935814 kubelet[2778]: E0710 00:22:18.935786 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:18.944673 kubelet[2778]: E0710 00:22:18.944553 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:18.944673 kubelet[2778]: W0710 00:22:18.944572 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:18.944673 kubelet[2778]: E0710 00:22:18.944593 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:20.159554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3451382774.mount: Deactivated successfully. 
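Editor's note: the repeated kubelet errors above all come from the FlexVolume dynamic plugin prober. The kubelet scans its volume plugin directory (here /opt/libexec/kubernetes/kubelet-plugins/volume/exec/) and, for every vendor~driver subdirectory, executes the driver binary with the argument init, expecting a JSON status object on stdout (conventionally {"status":"Success","capabilities":{"attach":false}}). At this point the nodeagent~uds/uds executable does not exist yet, so the call produces no output, and decoding the empty string fails with "unexpected end of JSON input" — exactly the pair of messages logged by driver-call.go and plugins.go. The Go sketch below is not kubelet code; it is a minimal reproduction of that failure mode under the stated assumptions.

// Minimal sketch (not kubelet's driver-call.go): invoke a FlexVolume driver
// binary with "init" and decode the JSON status it is expected to print.
// A missing binary or empty stdout reproduces the errors seen in the log.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus mirrors the conventional FlexVolume response shape,
// e.g. {"status":"Success","capabilities":{"attach":false}}.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func callDriver(driver string, args ...string) (*DriverStatus, error) {
	out, execErr := exec.Command(driver, args...).CombinedOutput()
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// Empty stdout (missing or silent driver) yields
		// "unexpected end of JSON input", as in the journal above.
		return nil, fmt.Errorf("failed to unmarshal output %q (exec error: %v): %w", out, execErr, err)
	}
	return &st, nil
}

func main() {
	// Path taken from the log; the binary is not installed yet at this point.
	_, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	fmt.Println(err)
}

On a node where the driver is absent, the sketch prints one error that carries both the "executable file not found in $PATH" exec failure and the "unexpected end of JSON input" decode failure, matching the W/E pairs in the journal.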
Jul 10 00:22:20.571538 containerd[1592]: time="2025-07-10T00:22:20.571473274Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:20.572531 containerd[1592]: time="2025-07-10T00:22:20.572495243Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 10 00:22:20.574155 containerd[1592]: time="2025-07-10T00:22:20.574088167Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:20.576226 containerd[1592]: time="2025-07-10T00:22:20.576180137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:20.576878 containerd[1592]: time="2025-07-10T00:22:20.576838605Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 1.9768989s" Jul 10 00:22:20.576878 containerd[1592]: time="2025-07-10T00:22:20.576876836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 10 00:22:20.578710 containerd[1592]: time="2025-07-10T00:22:20.578614611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 10 00:22:20.591615 containerd[1592]: time="2025-07-10T00:22:20.591557109Z" level=info msg="CreateContainer within sandbox \"3290561d44c1fb02a89bbeb3225b3c37bca343e3b774c6593bbef383a6809c6c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 10 00:22:20.601169 containerd[1592]: time="2025-07-10T00:22:20.601117362Z" level=info msg="Container 0151dd3f45a65f22401ed1b80645b3ce157287606a7fe6931b467b75084358d8: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:20.616404 containerd[1592]: time="2025-07-10T00:22:20.616332489Z" level=info msg="CreateContainer within sandbox \"3290561d44c1fb02a89bbeb3225b3c37bca343e3b774c6593bbef383a6809c6c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0151dd3f45a65f22401ed1b80645b3ce157287606a7fe6931b467b75084358d8\"" Jul 10 00:22:20.616938 containerd[1592]: time="2025-07-10T00:22:20.616853158Z" level=info msg="StartContainer for \"0151dd3f45a65f22401ed1b80645b3ce157287606a7fe6931b467b75084358d8\"" Jul 10 00:22:20.618310 containerd[1592]: time="2025-07-10T00:22:20.618260913Z" level=info msg="connecting to shim 0151dd3f45a65f22401ed1b80645b3ce157287606a7fe6931b467b75084358d8" address="unix:///run/containerd/s/2be9cdc70a010623458c689dce34aac5f52d87099a4dfe3c6b088bf4a13443a9" protocol=ttrpc version=3 Jul 10 00:22:20.646558 systemd[1]: Started cri-containerd-0151dd3f45a65f22401ed1b80645b3ce157287606a7fe6931b467b75084358d8.scope - libcontainer container 0151dd3f45a65f22401ed1b80645b3ce157287606a7fe6931b467b75084358d8. 
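Editor's note: the typha pull above reports "bytes read=35233364" and completes "in 1.9768989s" for an image whose recorded size is 35233218 bytes, i.e. roughly 17 MiB/s effective throughput; containerd then creates the calico-typha container in the existing sandbox and starts it through a per-container shim reached over the ttrpc unix socket shown. A tiny Go check of that pull arithmetic, using the numbers exactly as logged, is sketched below (assuming "bytes read" approximates the bytes actually transferred for this pull).

// Back-of-the-envelope throughput check for the typha pull logged above.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures copied verbatim from the containerd log lines above.
	const bytesRead = 35233364
	dur, err := time.ParseDuration("1.9768989s")
	if err != nil {
		panic(err)
	}
	rate := float64(bytesRead) / dur.Seconds()
	fmt.Printf("effective pull rate: %.1f MiB/s\n", rate/(1<<20))
}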
Jul 10 00:22:20.697436 containerd[1592]: time="2025-07-10T00:22:20.697387364Z" level=info msg="StartContainer for \"0151dd3f45a65f22401ed1b80645b3ce157287606a7fe6931b467b75084358d8\" returns successfully" Jul 10 00:22:20.858141 kubelet[2778]: E0710 00:22:20.857948 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88rhw" podUID="a12e8b81-dd74-48fe-8fc0-6670947a4f3a" Jul 10 00:22:20.974100 kubelet[2778]: I0710 00:22:20.973422 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-76c477b9fc-dts42" podStartSLOduration=0.995106935 podStartE2EDuration="2.973360289s" podCreationTimestamp="2025-07-10 00:22:18 +0000 UTC" firstStartedPulling="2025-07-10 00:22:18.599485602 +0000 UTC m=+22.853379442" lastFinishedPulling="2025-07-10 00:22:20.577738946 +0000 UTC m=+24.831632796" observedRunningTime="2025-07-10 00:22:20.973034998 +0000 UTC m=+25.226928838" watchObservedRunningTime="2025-07-10 00:22:20.973360289 +0000 UTC m=+25.227254129" Jul 10 00:22:21.009727 kubelet[2778]: E0710 00:22:21.009688 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.009727 kubelet[2778]: W0710 00:22:21.009716 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.009943 kubelet[2778]: E0710 00:22:21.009742 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:21.010057 kubelet[2778]: E0710 00:22:21.010039 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.010057 kubelet[2778]: W0710 00:22:21.010054 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.010116 kubelet[2778]: E0710 00:22:21.010065 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:21.010312 kubelet[2778]: E0710 00:22:21.010297 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.010312 kubelet[2778]: W0710 00:22:21.010308 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.010398 kubelet[2778]: E0710 00:22:21.010317 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:22:21.010559 kubelet[2778]: E0710 00:22:21.010538 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.010559 kubelet[2778]: W0710 00:22:21.010552 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.010676 kubelet[2778]: E0710 00:22:21.010563 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:21.010766 kubelet[2778]: E0710 00:22:21.010748 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.010766 kubelet[2778]: W0710 00:22:21.010760 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.010858 kubelet[2778]: E0710 00:22:21.010770 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:21.010944 kubelet[2778]: E0710 00:22:21.010927 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.010944 kubelet[2778]: W0710 00:22:21.010938 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.010944 kubelet[2778]: E0710 00:22:21.010947 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:21.011130 kubelet[2778]: E0710 00:22:21.011109 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.011130 kubelet[2778]: W0710 00:22:21.011117 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.011130 kubelet[2778]: E0710 00:22:21.011126 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:21.011289 kubelet[2778]: E0710 00:22:21.011272 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.011289 kubelet[2778]: W0710 00:22:21.011281 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.011289 kubelet[2778]: E0710 00:22:21.011288 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:22:21.011451 kubelet[2778]: E0710 00:22:21.011439 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.011451 kubelet[2778]: W0710 00:22:21.011449 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.011542 kubelet[2778]: E0710 00:22:21.011457 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:21.011649 kubelet[2778]: E0710 00:22:21.011632 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.011649 kubelet[2778]: W0710 00:22:21.011640 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.011726 kubelet[2778]: E0710 00:22:21.011658 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:21.011881 kubelet[2778]: E0710 00:22:21.011862 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.011881 kubelet[2778]: W0710 00:22:21.011873 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.011974 kubelet[2778]: E0710 00:22:21.011885 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:21.012164 kubelet[2778]: E0710 00:22:21.012138 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.012164 kubelet[2778]: W0710 00:22:21.012159 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.012274 kubelet[2778]: E0710 00:22:21.012171 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:21.012383 kubelet[2778]: E0710 00:22:21.012366 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.012383 kubelet[2778]: W0710 00:22:21.012377 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.012383 kubelet[2778]: E0710 00:22:21.012386 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:22:21.013106 kubelet[2778]: E0710 00:22:21.013085 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.013106 kubelet[2778]: W0710 00:22:21.013099 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.013185 kubelet[2778]: E0710 00:22:21.013109 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:21.013398 kubelet[2778]: E0710 00:22:21.013367 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.013398 kubelet[2778]: W0710 00:22:21.013381 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.013398 kubelet[2778]: E0710 00:22:21.013395 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:21.047193 kubelet[2778]: E0710 00:22:21.047160 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.047193 kubelet[2778]: W0710 00:22:21.047182 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.047193 kubelet[2778]: E0710 00:22:21.047204 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:21.047458 kubelet[2778]: E0710 00:22:21.047427 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.047458 kubelet[2778]: W0710 00:22:21.047452 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.047561 kubelet[2778]: E0710 00:22:21.047470 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:21.047756 kubelet[2778]: E0710 00:22:21.047689 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.047756 kubelet[2778]: W0710 00:22:21.047702 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.047756 kubelet[2778]: E0710 00:22:21.047719 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:22:21.047973 kubelet[2778]: E0710 00:22:21.047949 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.047973 kubelet[2778]: W0710 00:22:21.047960 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.048089 kubelet[2778]: E0710 00:22:21.047977 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:21.048220 kubelet[2778]: E0710 00:22:21.048199 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.048220 kubelet[2778]: W0710 00:22:21.048211 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.048318 kubelet[2778]: E0710 00:22:21.048228 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:21.048442 kubelet[2778]: E0710 00:22:21.048422 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.048442 kubelet[2778]: W0710 00:22:21.048433 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.048544 kubelet[2778]: E0710 00:22:21.048450 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:21.048686 kubelet[2778]: E0710 00:22:21.048665 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.048686 kubelet[2778]: W0710 00:22:21.048679 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.048770 kubelet[2778]: E0710 00:22:21.048721 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:21.048903 kubelet[2778]: E0710 00:22:21.048885 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.048903 kubelet[2778]: W0710 00:22:21.048899 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.048986 kubelet[2778]: E0710 00:22:21.048939 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:22:21.049145 kubelet[2778]: E0710 00:22:21.049129 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.049145 kubelet[2778]: W0710 00:22:21.049143 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.049214 kubelet[2778]: E0710 00:22:21.049160 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:21.049366 kubelet[2778]: E0710 00:22:21.049344 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.049366 kubelet[2778]: W0710 00:22:21.049357 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.049474 kubelet[2778]: E0710 00:22:21.049372 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:21.049545 kubelet[2778]: E0710 00:22:21.049531 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.049545 kubelet[2778]: W0710 00:22:21.049539 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.049609 kubelet[2778]: E0710 00:22:21.049551 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:21.049731 kubelet[2778]: E0710 00:22:21.049713 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.049731 kubelet[2778]: W0710 00:22:21.049722 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.049808 kubelet[2778]: E0710 00:22:21.049735 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:21.049957 kubelet[2778]: E0710 00:22:21.049940 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.049957 kubelet[2778]: W0710 00:22:21.049952 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.050078 kubelet[2778]: E0710 00:22:21.049968 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:22:21.050165 kubelet[2778]: E0710 00:22:21.050147 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.050165 kubelet[2778]: W0710 00:22:21.050157 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.050236 kubelet[2778]: E0710 00:22:21.050169 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:21.050365 kubelet[2778]: E0710 00:22:21.050346 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.050365 kubelet[2778]: W0710 00:22:21.050355 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.050447 kubelet[2778]: E0710 00:22:21.050385 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:21.050530 kubelet[2778]: E0710 00:22:21.050510 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.050530 kubelet[2778]: W0710 00:22:21.050528 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.050616 kubelet[2778]: E0710 00:22:21.050551 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:21.050705 kubelet[2778]: E0710 00:22:21.050684 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.050705 kubelet[2778]: W0710 00:22:21.050697 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.050775 kubelet[2778]: E0710 00:22:21.050715 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:21.050914 kubelet[2778]: E0710 00:22:21.050894 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:21.050914 kubelet[2778]: W0710 00:22:21.050904 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:21.050914 kubelet[2778]: E0710 00:22:21.050912 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:22:21.957224 kubelet[2778]: I0710 00:22:21.957175 2778 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:22:22.021003 kubelet[2778]: E0710 00:22:22.020927 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.021003 kubelet[2778]: W0710 00:22:22.020961 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.021003 kubelet[2778]: E0710 00:22:22.020989 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:22.021329 kubelet[2778]: E0710 00:22:22.021297 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.021329 kubelet[2778]: W0710 00:22:22.021313 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.021329 kubelet[2778]: E0710 00:22:22.021323 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:22.021580 kubelet[2778]: E0710 00:22:22.021551 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.021580 kubelet[2778]: W0710 00:22:22.021566 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.021580 kubelet[2778]: E0710 00:22:22.021575 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:22.021797 kubelet[2778]: E0710 00:22:22.021764 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.021797 kubelet[2778]: W0710 00:22:22.021775 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.021797 kubelet[2778]: E0710 00:22:22.021784 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:22.022118 kubelet[2778]: E0710 00:22:22.021983 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.022118 kubelet[2778]: W0710 00:22:22.021990 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.022118 kubelet[2778]: E0710 00:22:22.021999 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:22:22.022261 kubelet[2778]: E0710 00:22:22.022225 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.022261 kubelet[2778]: W0710 00:22:22.022249 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.022261 kubelet[2778]: E0710 00:22:22.022260 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:22.022474 kubelet[2778]: E0710 00:22:22.022441 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.022474 kubelet[2778]: W0710 00:22:22.022453 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.022474 kubelet[2778]: E0710 00:22:22.022461 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:22.022686 kubelet[2778]: E0710 00:22:22.022649 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.022686 kubelet[2778]: W0710 00:22:22.022667 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.022686 kubelet[2778]: E0710 00:22:22.022675 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:22.022873 kubelet[2778]: E0710 00:22:22.022853 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.022873 kubelet[2778]: W0710 00:22:22.022865 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.022873 kubelet[2778]: E0710 00:22:22.022873 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:22.023085 kubelet[2778]: E0710 00:22:22.023066 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.023085 kubelet[2778]: W0710 00:22:22.023076 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.023085 kubelet[2778]: E0710 00:22:22.023084 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:22:22.023295 kubelet[2778]: E0710 00:22:22.023271 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.023295 kubelet[2778]: W0710 00:22:22.023281 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.023295 kubelet[2778]: E0710 00:22:22.023290 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:22.023485 kubelet[2778]: E0710 00:22:22.023463 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.023485 kubelet[2778]: W0710 00:22:22.023475 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.023485 kubelet[2778]: E0710 00:22:22.023483 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:22.023676 kubelet[2778]: E0710 00:22:22.023655 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.023676 kubelet[2778]: W0710 00:22:22.023666 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.023676 kubelet[2778]: E0710 00:22:22.023674 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:22.023864 kubelet[2778]: E0710 00:22:22.023843 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.023864 kubelet[2778]: W0710 00:22:22.023854 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.023864 kubelet[2778]: E0710 00:22:22.023863 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:22.024073 kubelet[2778]: E0710 00:22:22.024050 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.024073 kubelet[2778]: W0710 00:22:22.024064 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.024073 kubelet[2778]: E0710 00:22:22.024073 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:22:22.054696 kubelet[2778]: E0710 00:22:22.054621 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.054696 kubelet[2778]: W0710 00:22:22.054657 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.054696 kubelet[2778]: E0710 00:22:22.054686 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:22.054971 kubelet[2778]: E0710 00:22:22.054960 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.054971 kubelet[2778]: W0710 00:22:22.054972 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.055060 kubelet[2778]: E0710 00:22:22.054993 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:22.055379 kubelet[2778]: E0710 00:22:22.055338 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.055379 kubelet[2778]: W0710 00:22:22.055366 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.055481 kubelet[2778]: E0710 00:22:22.055405 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:22.055724 kubelet[2778]: E0710 00:22:22.055694 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.055724 kubelet[2778]: W0710 00:22:22.055707 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.055811 kubelet[2778]: E0710 00:22:22.055725 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:22.055965 kubelet[2778]: E0710 00:22:22.055936 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.055965 kubelet[2778]: W0710 00:22:22.055949 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.056068 kubelet[2778]: E0710 00:22:22.055966 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:22:22.056258 kubelet[2778]: E0710 00:22:22.056224 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.056258 kubelet[2778]: W0710 00:22:22.056248 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.056341 kubelet[2778]: E0710 00:22:22.056266 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:22.056502 kubelet[2778]: E0710 00:22:22.056482 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.056502 kubelet[2778]: W0710 00:22:22.056496 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.056585 kubelet[2778]: E0710 00:22:22.056513 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:22.057129 kubelet[2778]: E0710 00:22:22.057099 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.057129 kubelet[2778]: W0710 00:22:22.057112 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.057129 kubelet[2778]: E0710 00:22:22.057125 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:22.057360 kubelet[2778]: E0710 00:22:22.057338 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.057360 kubelet[2778]: W0710 00:22:22.057350 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.057360 kubelet[2778]: E0710 00:22:22.057362 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:22.057547 kubelet[2778]: E0710 00:22:22.057532 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.057547 kubelet[2778]: W0710 00:22:22.057543 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.057637 kubelet[2778]: E0710 00:22:22.057557 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:22:22.057773 kubelet[2778]: E0710 00:22:22.057752 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.057773 kubelet[2778]: W0710 00:22:22.057766 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.057861 kubelet[2778]: E0710 00:22:22.057783 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:22.058042 kubelet[2778]: E0710 00:22:22.058003 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.058042 kubelet[2778]: W0710 00:22:22.058035 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.058120 kubelet[2778]: E0710 00:22:22.058054 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:22.058379 kubelet[2778]: E0710 00:22:22.058333 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.058379 kubelet[2778]: W0710 00:22:22.058349 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.058379 kubelet[2778]: E0710 00:22:22.058371 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:22.058603 kubelet[2778]: E0710 00:22:22.058582 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.058603 kubelet[2778]: W0710 00:22:22.058599 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.058679 kubelet[2778]: E0710 00:22:22.058620 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:22.058900 kubelet[2778]: E0710 00:22:22.058886 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.058900 kubelet[2778]: W0710 00:22:22.058898 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.058960 kubelet[2778]: E0710 00:22:22.058917 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:22:22.059334 kubelet[2778]: E0710 00:22:22.059310 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.059334 kubelet[2778]: W0710 00:22:22.059324 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.059438 kubelet[2778]: E0710 00:22:22.059339 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:22.059652 kubelet[2778]: E0710 00:22:22.059617 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.059652 kubelet[2778]: W0710 00:22:22.059633 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.059652 kubelet[2778]: E0710 00:22:22.059645 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:22:22.060302 kubelet[2778]: E0710 00:22:22.060256 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:22:22.060302 kubelet[2778]: W0710 00:22:22.060274 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:22:22.060302 kubelet[2778]: E0710 00:22:22.060287 2778 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:22:22.440753 containerd[1592]: time="2025-07-10T00:22:22.440651854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:22.441621 containerd[1592]: time="2025-07-10T00:22:22.441517510Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 10 00:22:22.442992 containerd[1592]: time="2025-07-10T00:22:22.442934132Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:22.445945 containerd[1592]: time="2025-07-10T00:22:22.445862553Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:22.446639 containerd[1592]: time="2025-07-10T00:22:22.446604767Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.867945151s" Jul 10 00:22:22.446697 containerd[1592]: time="2025-07-10T00:22:22.446646084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 10 00:22:22.448806 containerd[1592]: time="2025-07-10T00:22:22.448766838Z" level=info msg="CreateContainer within sandbox \"c8c01be4abbbe03c866d0d71097bce617baa782c3a058a7bcc7f0554f6f8fe88\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 10 00:22:22.457579 containerd[1592]: time="2025-07-10T00:22:22.457508589Z" level=info msg="Container 2df4d6b48134faa8e76a14166bcba58d7f84855a79ba454e595d3667c2254eee: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:22.468659 containerd[1592]: time="2025-07-10T00:22:22.468609052Z" level=info msg="CreateContainer within sandbox \"c8c01be4abbbe03c866d0d71097bce617baa782c3a058a7bcc7f0554f6f8fe88\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2df4d6b48134faa8e76a14166bcba58d7f84855a79ba454e595d3667c2254eee\"" Jul 10 00:22:22.469372 containerd[1592]: time="2025-07-10T00:22:22.469332941Z" level=info msg="StartContainer for \"2df4d6b48134faa8e76a14166bcba58d7f84855a79ba454e595d3667c2254eee\"" Jul 10 00:22:22.471110 containerd[1592]: time="2025-07-10T00:22:22.471081565Z" level=info msg="connecting to shim 2df4d6b48134faa8e76a14166bcba58d7f84855a79ba454e595d3667c2254eee" address="unix:///run/containerd/s/2ba9bbdaccc69003742ec7d3065dfe7c3c28adf9ea7aa8af64ff67a88a7fbb24" protocol=ttrpc version=3 Jul 10 00:22:22.506189 systemd[1]: Started cri-containerd-2df4d6b48134faa8e76a14166bcba58d7f84855a79ba454e595d3667c2254eee.scope - libcontainer container 2df4d6b48134faa8e76a14166bcba58d7f84855a79ba454e595d3667c2254eee. 
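
The repeated driver-call failures above come from kubelet's FlexVolume prober scanning /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ before Calico's flexvol-driver init container (the pod2daemon-flexvol image pulled right afterwards) has copied the nodeagent~uds/uds binary into place. A FlexVolume driver is expected to answer the "init" call with a JSON status object on stdout; with the executable missing, the output is empty and the unmarshal step reports "unexpected end of JSON input". A minimal sketch of that handshake, assuming the conventional JSON-on-stdout contract (illustrative only, not Calico's actual driver):

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus mirrors the JSON object kubelet tries to unmarshal after
    // every FlexVolume driver call.
    type driverStatus struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) < 2 {
            fmt.Fprintln(os.Stderr, "usage: uds init|mount|unmount ...")
            os.Exit(1)
        }
        reply := driverStatus{Status: "Not supported"}
        if os.Args[1] == "init" {
            // Printing nothing here is exactly what produces the
            // "unexpected end of JSON input" records above.
            reply = driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
        }
        out, _ := json.Marshal(reply)
        fmt.Println(string(out))
    }

The flexvol-driver container created just above is what installs a real driver binary of this shape, after which the probe errors stop.
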
Jul 10 00:22:22.551441 containerd[1592]: time="2025-07-10T00:22:22.551214403Z" level=info msg="StartContainer for \"2df4d6b48134faa8e76a14166bcba58d7f84855a79ba454e595d3667c2254eee\" returns successfully" Jul 10 00:22:22.563332 systemd[1]: cri-containerd-2df4d6b48134faa8e76a14166bcba58d7f84855a79ba454e595d3667c2254eee.scope: Deactivated successfully. Jul 10 00:22:22.563715 systemd[1]: cri-containerd-2df4d6b48134faa8e76a14166bcba58d7f84855a79ba454e595d3667c2254eee.scope: Consumed 41ms CPU time, 6.5M memory peak, 3M written to disk. Jul 10 00:22:22.566381 containerd[1592]: time="2025-07-10T00:22:22.566335668Z" level=info msg="received exit event container_id:\"2df4d6b48134faa8e76a14166bcba58d7f84855a79ba454e595d3667c2254eee\" id:\"2df4d6b48134faa8e76a14166bcba58d7f84855a79ba454e595d3667c2254eee\" pid:3500 exited_at:{seconds:1752106942 nanos:565837572}" Jul 10 00:22:22.566467 containerd[1592]: time="2025-07-10T00:22:22.566401291Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2df4d6b48134faa8e76a14166bcba58d7f84855a79ba454e595d3667c2254eee\" id:\"2df4d6b48134faa8e76a14166bcba58d7f84855a79ba454e595d3667c2254eee\" pid:3500 exited_at:{seconds:1752106942 nanos:565837572}" Jul 10 00:22:22.593878 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2df4d6b48134faa8e76a14166bcba58d7f84855a79ba454e595d3667c2254eee-rootfs.mount: Deactivated successfully. Jul 10 00:22:22.858696 kubelet[2778]: E0710 00:22:22.858602 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88rhw" podUID="a12e8b81-dd74-48fe-8fc0-6670947a4f3a" Jul 10 00:22:23.972820 containerd[1592]: time="2025-07-10T00:22:23.972765056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 10 00:22:24.858469 kubelet[2778]: E0710 00:22:24.858387 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88rhw" podUID="a12e8b81-dd74-48fe-8fc0-6670947a4f3a" Jul 10 00:22:25.395269 kubelet[2778]: I0710 00:22:25.395221 2778 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:22:26.858567 kubelet[2778]: E0710 00:22:26.858487 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88rhw" podUID="a12e8b81-dd74-48fe-8fc0-6670947a4f3a" Jul 10 00:22:28.859275 kubelet[2778]: E0710 00:22:28.859185 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88rhw" podUID="a12e8b81-dd74-48fe-8fc0-6670947a4f3a" Jul 10 00:22:29.405258 containerd[1592]: time="2025-07-10T00:22:29.405185578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:29.513418 containerd[1592]: time="2025-07-10T00:22:29.513338857Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 10 00:22:29.592252 containerd[1592]: time="2025-07-10T00:22:29.592214504Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:29.656695 containerd[1592]: time="2025-07-10T00:22:29.656527752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:29.657314 containerd[1592]: time="2025-07-10T00:22:29.657235621Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 5.684415091s" Jul 10 00:22:29.657314 containerd[1592]: time="2025-07-10T00:22:29.657270206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 10 00:22:29.659171 containerd[1592]: time="2025-07-10T00:22:29.659138593Z" level=info msg="CreateContainer within sandbox \"c8c01be4abbbe03c866d0d71097bce617baa782c3a058a7bcc7f0554f6f8fe88\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 10 00:22:29.667769 containerd[1592]: time="2025-07-10T00:22:29.667715425Z" level=info msg="Container 34085f6b5f85ecc0ad73528df76e19a86b5aa89eb504939ee24336408cbc0f8c: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:29.680130 containerd[1592]: time="2025-07-10T00:22:29.680087191Z" level=info msg="CreateContainer within sandbox \"c8c01be4abbbe03c866d0d71097bce617baa782c3a058a7bcc7f0554f6f8fe88\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"34085f6b5f85ecc0ad73528df76e19a86b5aa89eb504939ee24336408cbc0f8c\"" Jul 10 00:22:29.680810 containerd[1592]: time="2025-07-10T00:22:29.680775223Z" level=info msg="StartContainer for \"34085f6b5f85ecc0ad73528df76e19a86b5aa89eb504939ee24336408cbc0f8c\"" Jul 10 00:22:29.682462 containerd[1592]: time="2025-07-10T00:22:29.682428456Z" level=info msg="connecting to shim 34085f6b5f85ecc0ad73528df76e19a86b5aa89eb504939ee24336408cbc0f8c" address="unix:///run/containerd/s/2ba9bbdaccc69003742ec7d3065dfe7c3c28adf9ea7aa8af64ff67a88a7fbb24" protocol=ttrpc version=3 Jul 10 00:22:29.705161 systemd[1]: Started cri-containerd-34085f6b5f85ecc0ad73528df76e19a86b5aa89eb504939ee24336408cbc0f8c.scope - libcontainer container 34085f6b5f85ecc0ad73528df76e19a86b5aa89eb504939ee24336408cbc0f8c. 
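
Both init containers seen so far (flexvol-driver and install-cni) follow the same containerd sequence: pull the image, CreateContainer inside the existing pod sandbox c8c01be4..., then StartContainer over the shared shim socket. A rough sketch of the pull step using the public containerd Go client, with the k8s.io namespace that CRI-managed images live in; the containerd.sock endpoint is assumed (the log only shows the per-container shim sockets), and this is an illustration rather than the kubelet/CRI code path itself:

    package main

    import (
        "context"
        "fmt"
        "log"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Default containerd endpoint (assumed; not shown in the log).
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Kubernetes-managed images are kept in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/cni:v3.30.2", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name())
    }
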
Jul 10 00:22:29.752149 containerd[1592]: time="2025-07-10T00:22:29.752102759Z" level=info msg="StartContainer for \"34085f6b5f85ecc0ad73528df76e19a86b5aa89eb504939ee24336408cbc0f8c\" returns successfully" Jul 10 00:22:30.858638 kubelet[2778]: E0710 00:22:30.858559 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-88rhw" podUID="a12e8b81-dd74-48fe-8fc0-6670947a4f3a" Jul 10 00:22:31.182529 kubelet[2778]: I0710 00:22:31.182066 2778 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 10 00:22:31.182856 systemd[1]: cri-containerd-34085f6b5f85ecc0ad73528df76e19a86b5aa89eb504939ee24336408cbc0f8c.scope: Deactivated successfully. Jul 10 00:22:31.183370 systemd[1]: cri-containerd-34085f6b5f85ecc0ad73528df76e19a86b5aa89eb504939ee24336408cbc0f8c.scope: Consumed 687ms CPU time, 178.1M memory peak, 4M read from disk, 171.2M written to disk. Jul 10 00:22:31.186517 containerd[1592]: time="2025-07-10T00:22:31.186387660Z" level=info msg="received exit event container_id:\"34085f6b5f85ecc0ad73528df76e19a86b5aa89eb504939ee24336408cbc0f8c\" id:\"34085f6b5f85ecc0ad73528df76e19a86b5aa89eb504939ee24336408cbc0f8c\" pid:3563 exited_at:{seconds:1752106951 nanos:185820225}" Jul 10 00:22:31.186517 containerd[1592]: time="2025-07-10T00:22:31.186496815Z" level=info msg="TaskExit event in podsandbox handler container_id:\"34085f6b5f85ecc0ad73528df76e19a86b5aa89eb504939ee24336408cbc0f8c\" id:\"34085f6b5f85ecc0ad73528df76e19a86b5aa89eb504939ee24336408cbc0f8c\" pid:3563 exited_at:{seconds:1752106951 nanos:185820225}" Jul 10 00:22:31.214551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34085f6b5f85ecc0ad73528df76e19a86b5aa89eb504939ee24336408cbc0f8c-rootfs.mount: Deactivated successfully. Jul 10 00:22:31.293682 systemd[1]: Created slice kubepods-burstable-podcf84b0f9_0fcd_430e_b845_752449dc2741.slice - libcontainer container kubepods-burstable-podcf84b0f9_0fcd_430e_b845_752449dc2741.slice. Jul 10 00:22:31.302401 systemd[1]: Created slice kubepods-besteffort-podc8c4b9fb_0b46_4a38_b818_1aad57ef6f27.slice - libcontainer container kubepods-besteffort-podc8c4b9fb_0b46_4a38_b818_1aad57ef6f27.slice. Jul 10 00:22:31.317177 systemd[1]: Created slice kubepods-burstable-podf29acab4_bf4d_4677_9779_63d312b51c4b.slice - libcontainer container kubepods-burstable-podf29acab4_bf4d_4677_9779_63d312b51c4b.slice. Jul 10 00:22:31.326896 systemd[1]: Created slice kubepods-besteffort-podb69491d4_9d8c_4fcc_b139_4b3aa1e145d5.slice - libcontainer container kubepods-besteffort-podb69491d4_9d8c_4fcc_b139_4b3aa1e145d5.slice. 
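
The kubepods-*.slice units systemd creates above encode each pod's QoS class and UID: the systemd cgroup driver replaces the dashes in the pod UID with underscores and nests the unit under the burstable or besteffort parent. A small illustrative helper that reproduces the naming visible in the log (not kubelet's actual implementation):

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName builds "kubepods-<qos>-pod<UID with dashes as underscores>.slice".
    func podSliceName(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        // Matches the burstable slice created for coredns-7c65d6cfc9-4jmdk above.
        fmt.Println(podSliceName("burstable", "cf84b0f9-0fcd-430e-b845-752449dc2741"))
        // -> kubepods-burstable-podcf84b0f9_0fcd_430e_b845_752449dc2741.slice
    }
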
Jul 10 00:22:31.330736 kubelet[2778]: I0710 00:22:31.323627 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/045d1273-99fe-41bc-8bca-0658c69cb399-calico-apiserver-certs\") pod \"calico-apiserver-55d6ff7666-2gc22\" (UID: \"045d1273-99fe-41bc-8bca-0658c69cb399\") " pod="calico-apiserver/calico-apiserver-55d6ff7666-2gc22" Jul 10 00:22:31.330736 kubelet[2778]: I0710 00:22:31.323679 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfrwz\" (UniqueName: \"kubernetes.io/projected/045d1273-99fe-41bc-8bca-0658c69cb399-kube-api-access-cfrwz\") pod \"calico-apiserver-55d6ff7666-2gc22\" (UID: \"045d1273-99fe-41bc-8bca-0658c69cb399\") " pod="calico-apiserver/calico-apiserver-55d6ff7666-2gc22" Jul 10 00:22:31.330736 kubelet[2778]: I0710 00:22:31.323702 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kfg8\" (UniqueName: \"kubernetes.io/projected/7d21800c-52b7-4ce7-9d68-1834b0d30eb2-kube-api-access-6kfg8\") pod \"calico-kube-controllers-789d9b6b96-xbrbx\" (UID: \"7d21800c-52b7-4ce7-9d68-1834b0d30eb2\") " pod="calico-system/calico-kube-controllers-789d9b6b96-xbrbx" Jul 10 00:22:31.330736 kubelet[2778]: I0710 00:22:31.323744 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8c4b9fb-0b46-4a38-b818-1aad57ef6f27-whisker-ca-bundle\") pod \"whisker-74577c77-j98hx\" (UID: \"c8c4b9fb-0b46-4a38-b818-1aad57ef6f27\") " pod="calico-system/whisker-74577c77-j98hx" Jul 10 00:22:31.330736 kubelet[2778]: I0710 00:22:31.323769 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf84b0f9-0fcd-430e-b845-752449dc2741-config-volume\") pod \"coredns-7c65d6cfc9-4jmdk\" (UID: \"cf84b0f9-0fcd-430e-b845-752449dc2741\") " pod="kube-system/coredns-7c65d6cfc9-4jmdk" Jul 10 00:22:31.331006 kubelet[2778]: I0710 00:22:31.323795 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnjjv\" (UniqueName: \"kubernetes.io/projected/f29acab4-bf4d-4677-9779-63d312b51c4b-kube-api-access-dnjjv\") pod \"coredns-7c65d6cfc9-tz9jl\" (UID: \"f29acab4-bf4d-4677-9779-63d312b51c4b\") " pod="kube-system/coredns-7c65d6cfc9-tz9jl" Jul 10 00:22:31.331006 kubelet[2778]: I0710 00:22:31.323816 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c8c4b9fb-0b46-4a38-b818-1aad57ef6f27-whisker-backend-key-pair\") pod \"whisker-74577c77-j98hx\" (UID: \"c8c4b9fb-0b46-4a38-b818-1aad57ef6f27\") " pod="calico-system/whisker-74577c77-j98hx" Jul 10 00:22:31.331006 kubelet[2778]: I0710 00:22:31.323837 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v4d5\" (UniqueName: \"kubernetes.io/projected/c8c4b9fb-0b46-4a38-b818-1aad57ef6f27-kube-api-access-7v4d5\") pod \"whisker-74577c77-j98hx\" (UID: \"c8c4b9fb-0b46-4a38-b818-1aad57ef6f27\") " pod="calico-system/whisker-74577c77-j98hx" Jul 10 00:22:31.331006 kubelet[2778]: I0710 00:22:31.323862 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" 
(UniqueName: \"kubernetes.io/secret/07328e55-66df-496e-896a-830734260aeb-calico-apiserver-certs\") pod \"calico-apiserver-55d6ff7666-t5pt2\" (UID: \"07328e55-66df-496e-896a-830734260aeb\") " pod="calico-apiserver/calico-apiserver-55d6ff7666-t5pt2" Jul 10 00:22:31.331006 kubelet[2778]: I0710 00:22:31.323885 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cttkk\" (UniqueName: \"kubernetes.io/projected/b69491d4-9d8c-4fcc-b139-4b3aa1e145d5-kube-api-access-cttkk\") pod \"goldmane-58fd7646b9-pvk8n\" (UID: \"b69491d4-9d8c-4fcc-b139-4b3aa1e145d5\") " pod="calico-system/goldmane-58fd7646b9-pvk8n" Jul 10 00:22:31.332875 kubelet[2778]: I0710 00:22:31.323909 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7drdf\" (UniqueName: \"kubernetes.io/projected/cf84b0f9-0fcd-430e-b845-752449dc2741-kube-api-access-7drdf\") pod \"coredns-7c65d6cfc9-4jmdk\" (UID: \"cf84b0f9-0fcd-430e-b845-752449dc2741\") " pod="kube-system/coredns-7c65d6cfc9-4jmdk" Jul 10 00:22:31.332875 kubelet[2778]: I0710 00:22:31.323955 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b69491d4-9d8c-4fcc-b139-4b3aa1e145d5-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-pvk8n\" (UID: \"b69491d4-9d8c-4fcc-b139-4b3aa1e145d5\") " pod="calico-system/goldmane-58fd7646b9-pvk8n" Jul 10 00:22:31.332875 kubelet[2778]: I0710 00:22:31.323981 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f29acab4-bf4d-4677-9779-63d312b51c4b-config-volume\") pod \"coredns-7c65d6cfc9-tz9jl\" (UID: \"f29acab4-bf4d-4677-9779-63d312b51c4b\") " pod="kube-system/coredns-7c65d6cfc9-tz9jl" Jul 10 00:22:31.332875 kubelet[2778]: I0710 00:22:31.324000 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cng8v\" (UniqueName: \"kubernetes.io/projected/07328e55-66df-496e-896a-830734260aeb-kube-api-access-cng8v\") pod \"calico-apiserver-55d6ff7666-t5pt2\" (UID: \"07328e55-66df-496e-896a-830734260aeb\") " pod="calico-apiserver/calico-apiserver-55d6ff7666-t5pt2" Jul 10 00:22:31.332875 kubelet[2778]: I0710 00:22:31.324062 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d21800c-52b7-4ce7-9d68-1834b0d30eb2-tigera-ca-bundle\") pod \"calico-kube-controllers-789d9b6b96-xbrbx\" (UID: \"7d21800c-52b7-4ce7-9d68-1834b0d30eb2\") " pod="calico-system/calico-kube-controllers-789d9b6b96-xbrbx" Jul 10 00:22:31.333296 kubelet[2778]: I0710 00:22:31.324090 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b69491d4-9d8c-4fcc-b139-4b3aa1e145d5-config\") pod \"goldmane-58fd7646b9-pvk8n\" (UID: \"b69491d4-9d8c-4fcc-b139-4b3aa1e145d5\") " pod="calico-system/goldmane-58fd7646b9-pvk8n" Jul 10 00:22:31.333296 kubelet[2778]: I0710 00:22:31.324109 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b69491d4-9d8c-4fcc-b139-4b3aa1e145d5-goldmane-key-pair\") pod \"goldmane-58fd7646b9-pvk8n\" (UID: \"b69491d4-9d8c-4fcc-b139-4b3aa1e145d5\") " pod="calico-system/goldmane-58fd7646b9-pvk8n" 
Jul 10 00:22:31.334903 systemd[1]: Created slice kubepods-besteffort-pod7d21800c_52b7_4ce7_9d68_1834b0d30eb2.slice - libcontainer container kubepods-besteffort-pod7d21800c_52b7_4ce7_9d68_1834b0d30eb2.slice. Jul 10 00:22:31.343727 systemd[1]: Created slice kubepods-besteffort-pod07328e55_66df_496e_896a_830734260aeb.slice - libcontainer container kubepods-besteffort-pod07328e55_66df_496e_896a_830734260aeb.slice. Jul 10 00:22:31.347750 systemd[1]: Created slice kubepods-besteffort-pod045d1273_99fe_41bc_8bca_0658c69cb399.slice - libcontainer container kubepods-besteffort-pod045d1273_99fe_41bc_8bca_0658c69cb399.slice. Jul 10 00:22:32.074247 containerd[1592]: time="2025-07-10T00:22:32.074182611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4jmdk,Uid:cf84b0f9-0fcd-430e-b845-752449dc2741,Namespace:kube-system,Attempt:0,}" Jul 10 00:22:32.075740 containerd[1592]: time="2025-07-10T00:22:32.074289561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74577c77-j98hx,Uid:c8c4b9fb-0b46-4a38-b818-1aad57ef6f27,Namespace:calico-system,Attempt:0,}" Jul 10 00:22:32.075740 containerd[1592]: time="2025-07-10T00:22:32.074486492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-pvk8n,Uid:b69491d4-9d8c-4fcc-b139-4b3aa1e145d5,Namespace:calico-system,Attempt:0,}" Jul 10 00:22:32.075740 containerd[1592]: time="2025-07-10T00:22:32.074182701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-789d9b6b96-xbrbx,Uid:7d21800c-52b7-4ce7-9d68-1834b0d30eb2,Namespace:calico-system,Attempt:0,}" Jul 10 00:22:32.075740 containerd[1592]: time="2025-07-10T00:22:32.075052354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d6ff7666-2gc22,Uid:045d1273-99fe-41bc-8bca-0658c69cb399,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:22:32.075740 containerd[1592]: time="2025-07-10T00:22:32.075136261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d6ff7666-t5pt2,Uid:07328e55-66df-496e-896a-830734260aeb,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:22:32.075740 containerd[1592]: time="2025-07-10T00:22:32.075237200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tz9jl,Uid:f29acab4-bf4d-4677-9779-63d312b51c4b,Namespace:kube-system,Attempt:0,}" Jul 10 00:22:32.486363 containerd[1592]: time="2025-07-10T00:22:32.486290924Z" level=error msg="Failed to destroy network for sandbox \"c705b27d29b7ced66c88414266ea2308e65af3b6a72f03c023f04cc6fac1efab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:22:32.496345 containerd[1592]: time="2025-07-10T00:22:32.496176349Z" level=error msg="Failed to destroy network for sandbox \"1167efb79836dec08293dcac405659bcfb3b81e593d4f8206c430558212dff9f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:22:32.510204 containerd[1592]: time="2025-07-10T00:22:32.509162464Z" level=error msg="Failed to destroy network for sandbox \"4326bfb07a15804fdd0fbce02ecfb39d3fd07d357c4ec36d0c4e40428432980d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:22:32.511883 
containerd[1592]: time="2025-07-10T00:22:32.511837575Z" level=error msg="Failed to destroy network for sandbox \"82f2241461b8e4670341e28afcfb735da81a0a4ce6bb73c1dc203f8b3eb917e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:22:32.534202 containerd[1592]: time="2025-07-10T00:22:32.534123385Z" level=error msg="Failed to destroy network for sandbox \"7f57911376293232dfc9bca3f82e2b710bea19983a124dd553b902cadba216d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:22:32.534202 containerd[1592]: time="2025-07-10T00:22:32.534168059Z" level=error msg="Failed to destroy network for sandbox \"b6421bc9c5c8789dbe25281d09e3bf17cea9317a9c0b58beca132fcef9ac4a68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:22:32.534559 containerd[1592]: time="2025-07-10T00:22:32.534123315Z" level=error msg="Failed to destroy network for sandbox \"b962ee405ef9b2daf6f837a1b687e611dfc2e23d88b61b224ec5afa5717730d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:22:32.557005 containerd[1592]: time="2025-07-10T00:22:32.548478209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74577c77-j98hx,Uid:c8c4b9fb-0b46-4a38-b818-1aad57ef6f27,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"82f2241461b8e4670341e28afcfb735da81a0a4ce6bb73c1dc203f8b3eb917e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:22:32.557233 containerd[1592]: time="2025-07-10T00:22:32.548485272Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tz9jl,Uid:f29acab4-bf4d-4677-9779-63d312b51c4b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c705b27d29b7ced66c88414266ea2308e65af3b6a72f03c023f04cc6fac1efab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:22:32.557233 containerd[1592]: time="2025-07-10T00:22:32.548503256Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d6ff7666-2gc22,Uid:045d1273-99fe-41bc-8bca-0658c69cb399,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1167efb79836dec08293dcac405659bcfb3b81e593d4f8206c430558212dff9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:22:32.557233 containerd[1592]: time="2025-07-10T00:22:32.548515228Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4jmdk,Uid:cf84b0f9-0fcd-430e-b845-752449dc2741,Namespace:kube-system,Attempt:0,} failed, error" error="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"7f57911376293232dfc9bca3f82e2b710bea19983a124dd553b902cadba216d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:22:32.557405 containerd[1592]: time="2025-07-10T00:22:32.548546166Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-789d9b6b96-xbrbx,Uid:7d21800c-52b7-4ce7-9d68-1834b0d30eb2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4326bfb07a15804fdd0fbce02ecfb39d3fd07d357c4ec36d0c4e40428432980d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:22:32.557405 containerd[1592]: time="2025-07-10T00:22:32.550310698Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-pvk8n,Uid:b69491d4-9d8c-4fcc-b139-4b3aa1e145d5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6421bc9c5c8789dbe25281d09e3bf17cea9317a9c0b58beca132fcef9ac4a68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:22:32.557405 containerd[1592]: time="2025-07-10T00:22:32.551845620Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d6ff7666-t5pt2,Uid:07328e55-66df-496e-896a-830734260aeb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b962ee405ef9b2daf6f837a1b687e611dfc2e23d88b61b224ec5afa5717730d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:22:32.567687 kubelet[2778]: E0710 00:22:32.567535 2778 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82f2241461b8e4670341e28afcfb735da81a0a4ce6bb73c1dc203f8b3eb917e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:22:32.567687 kubelet[2778]: E0710 00:22:32.567577 2778 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4326bfb07a15804fdd0fbce02ecfb39d3fd07d357c4ec36d0c4e40428432980d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:22:32.567687 kubelet[2778]: E0710 00:22:32.567637 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82f2241461b8e4670341e28afcfb735da81a0a4ce6bb73c1dc203f8b3eb917e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-74577c77-j98hx" Jul 10 00:22:32.567687 kubelet[2778]: E0710 00:22:32.567669 2778 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82f2241461b8e4670341e28afcfb735da81a0a4ce6bb73c1dc203f8b3eb917e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-74577c77-j98hx" Jul 10 00:22:32.568250 kubelet[2778]: E0710 00:22:32.567525 2778 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f57911376293232dfc9bca3f82e2b710bea19983a124dd553b902cadba216d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:22:32.568250 kubelet[2778]: E0710 00:22:32.567762 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f57911376293232dfc9bca3f82e2b710bea19983a124dd553b902cadba216d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4jmdk" Jul 10 00:22:32.568250 kubelet[2778]: E0710 00:22:32.567777 2778 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f57911376293232dfc9bca3f82e2b710bea19983a124dd553b902cadba216d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4jmdk" Jul 10 00:22:32.568250 kubelet[2778]: E0710 00:22:32.567782 2778 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b962ee405ef9b2daf6f837a1b687e611dfc2e23d88b61b224ec5afa5717730d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:22:32.568391 kubelet[2778]: E0710 00:22:32.567813 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-4jmdk_kube-system(cf84b0f9-0fcd-430e-b845-752449dc2741)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-4jmdk_kube-system(cf84b0f9-0fcd-430e-b845-752449dc2741)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f57911376293232dfc9bca3f82e2b710bea19983a124dd553b902cadba216d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-4jmdk" podUID="cf84b0f9-0fcd-430e-b845-752449dc2741" Jul 10 00:22:32.568391 kubelet[2778]: E0710 00:22:32.567638 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4326bfb07a15804fdd0fbce02ecfb39d3fd07d357c4ec36d0c4e40428432980d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-789d9b6b96-xbrbx" Jul 10 00:22:32.568391 kubelet[2778]: E0710 00:22:32.567830 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b962ee405ef9b2daf6f837a1b687e611dfc2e23d88b61b224ec5afa5717730d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55d6ff7666-t5pt2" Jul 10 00:22:32.568523 kubelet[2778]: E0710 00:22:32.567844 2778 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4326bfb07a15804fdd0fbce02ecfb39d3fd07d357c4ec36d0c4e40428432980d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-789d9b6b96-xbrbx" Jul 10 00:22:32.568523 kubelet[2778]: E0710 00:22:32.567850 2778 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b962ee405ef9b2daf6f837a1b687e611dfc2e23d88b61b224ec5afa5717730d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55d6ff7666-t5pt2" Jul 10 00:22:32.568523 kubelet[2778]: E0710 00:22:32.567866 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-789d9b6b96-xbrbx_calico-system(7d21800c-52b7-4ce7-9d68-1834b0d30eb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-789d9b6b96-xbrbx_calico-system(7d21800c-52b7-4ce7-9d68-1834b0d30eb2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4326bfb07a15804fdd0fbce02ecfb39d3fd07d357c4ec36d0c4e40428432980d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-789d9b6b96-xbrbx" podUID="7d21800c-52b7-4ce7-9d68-1834b0d30eb2" Jul 10 00:22:32.568655 kubelet[2778]: E0710 00:22:32.567525 2778 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c705b27d29b7ced66c88414266ea2308e65af3b6a72f03c023f04cc6fac1efab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:22:32.568655 kubelet[2778]: E0710 00:22:32.567889 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c705b27d29b7ced66c88414266ea2308e65af3b6a72f03c023f04cc6fac1efab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tz9jl" Jul 10 00:22:32.568655 kubelet[2778]: E0710 00:22:32.567884 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-55d6ff7666-t5pt2_calico-apiserver(07328e55-66df-496e-896a-830734260aeb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55d6ff7666-t5pt2_calico-apiserver(07328e55-66df-496e-896a-830734260aeb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b962ee405ef9b2daf6f837a1b687e611dfc2e23d88b61b224ec5afa5717730d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55d6ff7666-t5pt2" podUID="07328e55-66df-496e-896a-830734260aeb" Jul 10 00:22:32.568768 kubelet[2778]: E0710 00:22:32.567901 2778 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c705b27d29b7ced66c88414266ea2308e65af3b6a72f03c023f04cc6fac1efab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tz9jl" Jul 10 00:22:32.568768 kubelet[2778]: E0710 00:22:32.567586 2778 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6421bc9c5c8789dbe25281d09e3bf17cea9317a9c0b58beca132fcef9ac4a68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:22:32.568768 kubelet[2778]: E0710 00:22:32.567920 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-tz9jl_kube-system(f29acab4-bf4d-4677-9779-63d312b51c4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-tz9jl_kube-system(f29acab4-bf4d-4677-9779-63d312b51c4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c705b27d29b7ced66c88414266ea2308e65af3b6a72f03c023f04cc6fac1efab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-tz9jl" podUID="f29acab4-bf4d-4677-9779-63d312b51c4b" Jul 10 00:22:32.568882 kubelet[2778]: E0710 00:22:32.567923 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6421bc9c5c8789dbe25281d09e3bf17cea9317a9c0b58beca132fcef9ac4a68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-pvk8n" Jul 10 00:22:32.568882 kubelet[2778]: E0710 00:22:32.567940 2778 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6421bc9c5c8789dbe25281d09e3bf17cea9317a9c0b58beca132fcef9ac4a68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-pvk8n" Jul 10 00:22:32.568882 kubelet[2778]: E0710 00:22:32.567543 2778 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"1167efb79836dec08293dcac405659bcfb3b81e593d4f8206c430558212dff9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:22:32.568882 kubelet[2778]: E0710 00:22:32.567961 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1167efb79836dec08293dcac405659bcfb3b81e593d4f8206c430558212dff9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55d6ff7666-2gc22" Jul 10 00:22:32.569065 kubelet[2778]: E0710 00:22:32.567966 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-pvk8n_calico-system(b69491d4-9d8c-4fcc-b139-4b3aa1e145d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-pvk8n_calico-system(b69491d4-9d8c-4fcc-b139-4b3aa1e145d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6421bc9c5c8789dbe25281d09e3bf17cea9317a9c0b58beca132fcef9ac4a68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-pvk8n" podUID="b69491d4-9d8c-4fcc-b139-4b3aa1e145d5" Jul 10 00:22:32.569065 kubelet[2778]: E0710 00:22:32.567727 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-74577c77-j98hx_calico-system(c8c4b9fb-0b46-4a38-b818-1aad57ef6f27)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-74577c77-j98hx_calico-system(c8c4b9fb-0b46-4a38-b818-1aad57ef6f27)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82f2241461b8e4670341e28afcfb735da81a0a4ce6bb73c1dc203f8b3eb917e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-74577c77-j98hx" podUID="c8c4b9fb-0b46-4a38-b818-1aad57ef6f27" Jul 10 00:22:32.569065 kubelet[2778]: E0710 00:22:32.567974 2778 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1167efb79836dec08293dcac405659bcfb3b81e593d4f8206c430558212dff9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55d6ff7666-2gc22" Jul 10 00:22:32.569204 kubelet[2778]: E0710 00:22:32.568030 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55d6ff7666-2gc22_calico-apiserver(045d1273-99fe-41bc-8bca-0658c69cb399)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55d6ff7666-2gc22_calico-apiserver(045d1273-99fe-41bc-8bca-0658c69cb399)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1167efb79836dec08293dcac405659bcfb3b81e593d4f8206c430558212dff9f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55d6ff7666-2gc22" podUID="045d1273-99fe-41bc-8bca-0658c69cb399" Jul 10 00:22:32.864591 systemd[1]: Created slice kubepods-besteffort-poda12e8b81_dd74_48fe_8fc0_6670947a4f3a.slice - libcontainer container kubepods-besteffort-poda12e8b81_dd74_48fe_8fc0_6670947a4f3a.slice. Jul 10 00:22:32.868056 containerd[1592]: time="2025-07-10T00:22:32.867960775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88rhw,Uid:a12e8b81-dd74-48fe-8fc0-6670947a4f3a,Namespace:calico-system,Attempt:0,}" Jul 10 00:22:32.931665 containerd[1592]: time="2025-07-10T00:22:32.931605595Z" level=error msg="Failed to destroy network for sandbox \"b0dacdd414ced72805668dd0da8df5c00627aef563d2e3a06d3c63de5393b954\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:22:32.999348 containerd[1592]: time="2025-07-10T00:22:32.999284759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 10 00:22:33.019697 containerd[1592]: time="2025-07-10T00:22:33.019603064Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88rhw,Uid:a12e8b81-dd74-48fe-8fc0-6670947a4f3a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0dacdd414ced72805668dd0da8df5c00627aef563d2e3a06d3c63de5393b954\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:22:33.019936 kubelet[2778]: E0710 00:22:33.019778 2778 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0dacdd414ced72805668dd0da8df5c00627aef563d2e3a06d3c63de5393b954\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:22:33.019936 kubelet[2778]: E0710 00:22:33.019821 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0dacdd414ced72805668dd0da8df5c00627aef563d2e3a06d3c63de5393b954\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-88rhw" Jul 10 00:22:33.019936 kubelet[2778]: E0710 00:22:33.019838 2778 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0dacdd414ced72805668dd0da8df5c00627aef563d2e3a06d3c63de5393b954\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-88rhw" Jul 10 00:22:33.020110 kubelet[2778]: E0710 00:22:33.019877 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-88rhw_calico-system(a12e8b81-dd74-48fe-8fc0-6670947a4f3a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-88rhw_calico-system(a12e8b81-dd74-48fe-8fc0-6670947a4f3a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0dacdd414ced72805668dd0da8df5c00627aef563d2e3a06d3c63de5393b954\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-88rhw" podUID="a12e8b81-dd74-48fe-8fc0-6670947a4f3a" Jul 10 00:22:33.215377 systemd[1]: run-netns-cni\x2d450891a6\x2ddd10\x2d6a60\x2dea56\x2d70fdf918ec4f.mount: Deactivated successfully. Jul 10 00:22:33.215518 systemd[1]: run-netns-cni\x2da398b22b\x2db507\x2d1621\x2d47fb\x2d25b5c405424b.mount: Deactivated successfully. Jul 10 00:22:33.215610 systemd[1]: run-netns-cni\x2d63f1fef8\x2d57a9\x2d15f5\x2d3466\x2deb146873da9c.mount: Deactivated successfully. Jul 10 00:22:33.215702 systemd[1]: run-netns-cni\x2d356581fd\x2d63a5\x2d7f6c\x2d4c1a\x2d5aaccae6f5a3.mount: Deactivated successfully. Jul 10 00:22:33.215791 systemd[1]: run-netns-cni\x2d7b074ee1\x2ddb8f\x2d457f\x2d94df\x2d7cdb6603aa0e.mount: Deactivated successfully. Jul 10 00:22:41.866538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2760465216.mount: Deactivated successfully. Jul 10 00:22:43.047900 containerd[1592]: time="2025-07-10T00:22:43.047807295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:43.070110 containerd[1592]: time="2025-07-10T00:22:43.048859880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 10 00:22:43.070110 containerd[1592]: time="2025-07-10T00:22:43.051507107Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:43.071246 containerd[1592]: time="2025-07-10T00:22:43.071201036Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:43.071669 containerd[1592]: time="2025-07-10T00:22:43.071628628Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 10.072282333s" Jul 10 00:22:43.071669 containerd[1592]: time="2025-07-10T00:22:43.071658774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 10 00:22:43.088438 containerd[1592]: time="2025-07-10T00:22:43.088376199Z" level=info msg="CreateContainer within sandbox \"c8c01be4abbbe03c866d0d71097bce617baa782c3a058a7bcc7f0554f6f8fe88\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 10 00:22:43.122277 containerd[1592]: time="2025-07-10T00:22:43.122189408Z" level=info msg="Container 690753c84b64d36fa69fb38084cf2b2ef4069fc2887d68630735d70bd78449bd: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:43.459670 containerd[1592]: time="2025-07-10T00:22:43.459527779Z" level=info msg="CreateContainer within sandbox 
\"c8c01be4abbbe03c866d0d71097bce617baa782c3a058a7bcc7f0554f6f8fe88\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"690753c84b64d36fa69fb38084cf2b2ef4069fc2887d68630735d70bd78449bd\"" Jul 10 00:22:43.464486 containerd[1592]: time="2025-07-10T00:22:43.464458039Z" level=info msg="StartContainer for \"690753c84b64d36fa69fb38084cf2b2ef4069fc2887d68630735d70bd78449bd\"" Jul 10 00:22:43.466064 containerd[1592]: time="2025-07-10T00:22:43.466026751Z" level=info msg="connecting to shim 690753c84b64d36fa69fb38084cf2b2ef4069fc2887d68630735d70bd78449bd" address="unix:///run/containerd/s/2ba9bbdaccc69003742ec7d3065dfe7c3c28adf9ea7aa8af64ff67a88a7fbb24" protocol=ttrpc version=3 Jul 10 00:22:43.502328 systemd[1]: Started cri-containerd-690753c84b64d36fa69fb38084cf2b2ef4069fc2887d68630735d70bd78449bd.scope - libcontainer container 690753c84b64d36fa69fb38084cf2b2ef4069fc2887d68630735d70bd78449bd. Jul 10 00:22:43.641108 containerd[1592]: time="2025-07-10T00:22:43.640974301Z" level=info msg="StartContainer for \"690753c84b64d36fa69fb38084cf2b2ef4069fc2887d68630735d70bd78449bd\" returns successfully" Jul 10 00:22:43.656570 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 10 00:22:43.657205 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 10 00:22:43.911138 containerd[1592]: time="2025-07-10T00:22:43.911073133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tz9jl,Uid:f29acab4-bf4d-4677-9779-63d312b51c4b,Namespace:kube-system,Attempt:0,}" Jul 10 00:22:44.056447 containerd[1592]: time="2025-07-10T00:22:44.056380181Z" level=error msg="Failed to destroy network for sandbox \"9161bee8de2cbf0224a2e8cfa6540bed8cf09aeffd4b567cc42e399224292666\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:22:44.067864 containerd[1592]: time="2025-07-10T00:22:44.067646673Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tz9jl,Uid:f29acab4-bf4d-4677-9779-63d312b51c4b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9161bee8de2cbf0224a2e8cfa6540bed8cf09aeffd4b567cc42e399224292666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:22:44.069093 kubelet[2778]: E0710 00:22:44.069004 2778 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9161bee8de2cbf0224a2e8cfa6540bed8cf09aeffd4b567cc42e399224292666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:22:44.069684 kubelet[2778]: E0710 00:22:44.069135 2778 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9161bee8de2cbf0224a2e8cfa6540bed8cf09aeffd4b567cc42e399224292666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tz9jl" Jul 10 00:22:44.069684 kubelet[2778]: E0710 00:22:44.069165 2778 
kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9161bee8de2cbf0224a2e8cfa6540bed8cf09aeffd4b567cc42e399224292666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tz9jl" Jul 10 00:22:44.069684 kubelet[2778]: E0710 00:22:44.069246 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-tz9jl_kube-system(f29acab4-bf4d-4677-9779-63d312b51c4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-tz9jl_kube-system(f29acab4-bf4d-4677-9779-63d312b51c4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9161bee8de2cbf0224a2e8cfa6540bed8cf09aeffd4b567cc42e399224292666\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-tz9jl" podUID="f29acab4-bf4d-4677-9779-63d312b51c4b" Jul 10 00:22:44.122684 kubelet[2778]: I0710 00:22:44.122569 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vkndk" podStartSLOduration=1.9134556790000001 podStartE2EDuration="26.122546445s" podCreationTimestamp="2025-07-10 00:22:18 +0000 UTC" firstStartedPulling="2025-07-10 00:22:18.863332363 +0000 UTC m=+23.117226193" lastFinishedPulling="2025-07-10 00:22:43.072423119 +0000 UTC m=+47.326316959" observedRunningTime="2025-07-10 00:22:44.116575526 +0000 UTC m=+48.370469366" watchObservedRunningTime="2025-07-10 00:22:44.122546445 +0000 UTC m=+48.376440285" Jul 10 00:22:44.232642 containerd[1592]: time="2025-07-10T00:22:44.232595731Z" level=info msg="TaskExit event in podsandbox handler container_id:\"690753c84b64d36fa69fb38084cf2b2ef4069fc2887d68630735d70bd78449bd\" id:\"9e39865ca1fbf0ce9347b1e061a4b20af0e2572d89f69ed1c9db18648cefa3d9\" pid:3960 exit_status:1 exited_at:{seconds:1752106964 nanos:232192542}" Jul 10 00:22:44.511565 kubelet[2778]: I0710 00:22:44.511378 2778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c8c4b9fb-0b46-4a38-b818-1aad57ef6f27-whisker-backend-key-pair\") pod \"c8c4b9fb-0b46-4a38-b818-1aad57ef6f27\" (UID: \"c8c4b9fb-0b46-4a38-b818-1aad57ef6f27\") " Jul 10 00:22:44.512151 kubelet[2778]: I0710 00:22:44.512120 2778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8c4b9fb-0b46-4a38-b818-1aad57ef6f27-whisker-ca-bundle\") pod \"c8c4b9fb-0b46-4a38-b818-1aad57ef6f27\" (UID: \"c8c4b9fb-0b46-4a38-b818-1aad57ef6f27\") " Jul 10 00:22:44.512247 kubelet[2778]: I0710 00:22:44.512156 2778 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v4d5\" (UniqueName: \"kubernetes.io/projected/c8c4b9fb-0b46-4a38-b818-1aad57ef6f27-kube-api-access-7v4d5\") pod \"c8c4b9fb-0b46-4a38-b818-1aad57ef6f27\" (UID: \"c8c4b9fb-0b46-4a38-b818-1aad57ef6f27\") " Jul 10 00:22:44.513639 kubelet[2778]: I0710 00:22:44.513572 2778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8c4b9fb-0b46-4a38-b818-1aad57ef6f27-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod 
"c8c4b9fb-0b46-4a38-b818-1aad57ef6f27" (UID: "c8c4b9fb-0b46-4a38-b818-1aad57ef6f27"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 10 00:22:44.527349 kubelet[2778]: I0710 00:22:44.525298 2778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8c4b9fb-0b46-4a38-b818-1aad57ef6f27-kube-api-access-7v4d5" (OuterVolumeSpecName: "kube-api-access-7v4d5") pod "c8c4b9fb-0b46-4a38-b818-1aad57ef6f27" (UID: "c8c4b9fb-0b46-4a38-b818-1aad57ef6f27"). InnerVolumeSpecName "kube-api-access-7v4d5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 00:22:44.525450 systemd[1]: var-lib-kubelet-pods-c8c4b9fb\x2d0b46\x2d4a38\x2db818\x2d1aad57ef6f27-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7v4d5.mount: Deactivated successfully. Jul 10 00:22:44.535858 systemd[1]: var-lib-kubelet-pods-c8c4b9fb\x2d0b46\x2d4a38\x2db818\x2d1aad57ef6f27-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 10 00:22:44.540183 kubelet[2778]: I0710 00:22:44.538267 2778 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8c4b9fb-0b46-4a38-b818-1aad57ef6f27-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c8c4b9fb-0b46-4a38-b818-1aad57ef6f27" (UID: "c8c4b9fb-0b46-4a38-b818-1aad57ef6f27"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 10 00:22:44.613487 kubelet[2778]: I0710 00:22:44.613436 2778 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c8c4b9fb-0b46-4a38-b818-1aad57ef6f27-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 10 00:22:44.613487 kubelet[2778]: I0710 00:22:44.613470 2778 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8c4b9fb-0b46-4a38-b818-1aad57ef6f27-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 10 00:22:44.613487 kubelet[2778]: I0710 00:22:44.613480 2778 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v4d5\" (UniqueName: \"kubernetes.io/projected/c8c4b9fb-0b46-4a38-b818-1aad57ef6f27-kube-api-access-7v4d5\") on node \"localhost\" DevicePath \"\"" Jul 10 00:22:44.859136 containerd[1592]: time="2025-07-10T00:22:44.858976587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88rhw,Uid:a12e8b81-dd74-48fe-8fc0-6670947a4f3a,Namespace:calico-system,Attempt:0,}" Jul 10 00:22:44.859136 containerd[1592]: time="2025-07-10T00:22:44.859098914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-789d9b6b96-xbrbx,Uid:7d21800c-52b7-4ce7-9d68-1834b0d30eb2,Namespace:calico-system,Attempt:0,}" Jul 10 00:22:44.859324 containerd[1592]: time="2025-07-10T00:22:44.859276858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d6ff7666-2gc22,Uid:045d1273-99fe-41bc-8bca-0658c69cb399,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:22:45.046120 systemd[1]: Removed slice kubepods-besteffort-podc8c4b9fb_0b46_4a38_b818_1aad57ef6f27.slice - libcontainer container kubepods-besteffort-podc8c4b9fb_0b46_4a38_b818_1aad57ef6f27.slice. 
Jul 10 00:22:45.184693 containerd[1592]: time="2025-07-10T00:22:45.184555400Z" level=info msg="TaskExit event in podsandbox handler container_id:\"690753c84b64d36fa69fb38084cf2b2ef4069fc2887d68630735d70bd78449bd\" id:\"3ff0f14677ec18ea03a48860831a03ccfd2520f515aa988af6adac0d42b81193\" pid:4020 exit_status:1 exited_at:{seconds:1752106965 nanos:184233338}" Jul 10 00:22:45.233581 systemd[1]: Created slice kubepods-besteffort-pod8c700af8_d06f_4874_89d1_1e7ed2d81493.slice - libcontainer container kubepods-besteffort-pod8c700af8_d06f_4874_89d1_1e7ed2d81493.slice. Jul 10 00:22:45.320348 kubelet[2778]: I0710 00:22:45.320229 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8c700af8-d06f-4874-89d1-1e7ed2d81493-whisker-backend-key-pair\") pod \"whisker-76cd4f54bc-mb5k5\" (UID: \"8c700af8-d06f-4874-89d1-1e7ed2d81493\") " pod="calico-system/whisker-76cd4f54bc-mb5k5" Jul 10 00:22:45.322317 kubelet[2778]: I0710 00:22:45.320664 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c700af8-d06f-4874-89d1-1e7ed2d81493-whisker-ca-bundle\") pod \"whisker-76cd4f54bc-mb5k5\" (UID: \"8c700af8-d06f-4874-89d1-1e7ed2d81493\") " pod="calico-system/whisker-76cd4f54bc-mb5k5" Jul 10 00:22:45.322317 kubelet[2778]: I0710 00:22:45.320837 2778 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlrtb\" (UniqueName: \"kubernetes.io/projected/8c700af8-d06f-4874-89d1-1e7ed2d81493-kube-api-access-rlrtb\") pod \"whisker-76cd4f54bc-mb5k5\" (UID: \"8c700af8-d06f-4874-89d1-1e7ed2d81493\") " pod="calico-system/whisker-76cd4f54bc-mb5k5" Jul 10 00:22:45.325275 systemd-networkd[1493]: calicaeb231748c: Link UP Jul 10 00:22:45.326212 systemd-networkd[1493]: calicaeb231748c: Gained carrier Jul 10 00:22:45.355410 containerd[1592]: 2025-07-10 00:22:45.032 [INFO][3995] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:22:45.355410 containerd[1592]: 2025-07-10 00:22:45.071 [INFO][3995] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--88rhw-eth0 csi-node-driver- calico-system a12e8b81-dd74-48fe-8fc0-6670947a4f3a 675 0 2025-07-10 00:22:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-88rhw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicaeb231748c [] [] }} ContainerID="ecebe68e8a6ddaf4f2ba2c67f16c4b4ad26c80ca5d4def8ee9ac75bdb337b7c5" Namespace="calico-system" Pod="csi-node-driver-88rhw" WorkloadEndpoint="localhost-k8s-csi--node--driver--88rhw-" Jul 10 00:22:45.355410 containerd[1592]: 2025-07-10 00:22:45.071 [INFO][3995] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ecebe68e8a6ddaf4f2ba2c67f16c4b4ad26c80ca5d4def8ee9ac75bdb337b7c5" Namespace="calico-system" Pod="csi-node-driver-88rhw" WorkloadEndpoint="localhost-k8s-csi--node--driver--88rhw-eth0" Jul 10 00:22:45.355410 containerd[1592]: 2025-07-10 00:22:45.263 [INFO][4046] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="ecebe68e8a6ddaf4f2ba2c67f16c4b4ad26c80ca5d4def8ee9ac75bdb337b7c5" HandleID="k8s-pod-network.ecebe68e8a6ddaf4f2ba2c67f16c4b4ad26c80ca5d4def8ee9ac75bdb337b7c5" Workload="localhost-k8s-csi--node--driver--88rhw-eth0" Jul 10 00:22:45.355881 containerd[1592]: 2025-07-10 00:22:45.266 [INFO][4046] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ecebe68e8a6ddaf4f2ba2c67f16c4b4ad26c80ca5d4def8ee9ac75bdb337b7c5" HandleID="k8s-pod-network.ecebe68e8a6ddaf4f2ba2c67f16c4b4ad26c80ca5d4def8ee9ac75bdb337b7c5" Workload="localhost-k8s-csi--node--driver--88rhw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c1dc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-88rhw", "timestamp":"2025-07-10 00:22:45.263082576 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:22:45.355881 containerd[1592]: 2025-07-10 00:22:45.266 [INFO][4046] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:22:45.355881 containerd[1592]: 2025-07-10 00:22:45.266 [INFO][4046] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:22:45.355881 containerd[1592]: 2025-07-10 00:22:45.266 [INFO][4046] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:22:45.355881 containerd[1592]: 2025-07-10 00:22:45.275 [INFO][4046] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ecebe68e8a6ddaf4f2ba2c67f16c4b4ad26c80ca5d4def8ee9ac75bdb337b7c5" host="localhost" Jul 10 00:22:45.355881 containerd[1592]: 2025-07-10 00:22:45.280 [INFO][4046] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:22:45.355881 containerd[1592]: 2025-07-10 00:22:45.284 [INFO][4046] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:22:45.355881 containerd[1592]: 2025-07-10 00:22:45.289 [INFO][4046] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:22:45.355881 containerd[1592]: 2025-07-10 00:22:45.290 [INFO][4046] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:22:45.355881 containerd[1592]: 2025-07-10 00:22:45.290 [INFO][4046] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ecebe68e8a6ddaf4f2ba2c67f16c4b4ad26c80ca5d4def8ee9ac75bdb337b7c5" host="localhost" Jul 10 00:22:45.356203 containerd[1592]: 2025-07-10 00:22:45.291 [INFO][4046] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ecebe68e8a6ddaf4f2ba2c67f16c4b4ad26c80ca5d4def8ee9ac75bdb337b7c5 Jul 10 00:22:45.356203 containerd[1592]: 2025-07-10 00:22:45.296 [INFO][4046] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ecebe68e8a6ddaf4f2ba2c67f16c4b4ad26c80ca5d4def8ee9ac75bdb337b7c5" host="localhost" Jul 10 00:22:45.356203 containerd[1592]: 2025-07-10 00:22:45.303 [INFO][4046] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.ecebe68e8a6ddaf4f2ba2c67f16c4b4ad26c80ca5d4def8ee9ac75bdb337b7c5" host="localhost" Jul 10 00:22:45.356203 containerd[1592]: 2025-07-10 00:22:45.303 [INFO][4046] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] 
handle="k8s-pod-network.ecebe68e8a6ddaf4f2ba2c67f16c4b4ad26c80ca5d4def8ee9ac75bdb337b7c5" host="localhost" Jul 10 00:22:45.356203 containerd[1592]: 2025-07-10 00:22:45.303 [INFO][4046] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:22:45.356203 containerd[1592]: 2025-07-10 00:22:45.303 [INFO][4046] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ecebe68e8a6ddaf4f2ba2c67f16c4b4ad26c80ca5d4def8ee9ac75bdb337b7c5" HandleID="k8s-pod-network.ecebe68e8a6ddaf4f2ba2c67f16c4b4ad26c80ca5d4def8ee9ac75bdb337b7c5" Workload="localhost-k8s-csi--node--driver--88rhw-eth0" Jul 10 00:22:45.356368 containerd[1592]: 2025-07-10 00:22:45.306 [INFO][3995] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ecebe68e8a6ddaf4f2ba2c67f16c4b4ad26c80ca5d4def8ee9ac75bdb337b7c5" Namespace="calico-system" Pod="csi-node-driver-88rhw" WorkloadEndpoint="localhost-k8s-csi--node--driver--88rhw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--88rhw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a12e8b81-dd74-48fe-8fc0-6670947a4f3a", ResourceVersion:"675", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 22, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-88rhw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicaeb231748c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:22:45.356441 containerd[1592]: 2025-07-10 00:22:45.307 [INFO][3995] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ecebe68e8a6ddaf4f2ba2c67f16c4b4ad26c80ca5d4def8ee9ac75bdb337b7c5" Namespace="calico-system" Pod="csi-node-driver-88rhw" WorkloadEndpoint="localhost-k8s-csi--node--driver--88rhw-eth0" Jul 10 00:22:45.356441 containerd[1592]: 2025-07-10 00:22:45.307 [INFO][3995] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicaeb231748c ContainerID="ecebe68e8a6ddaf4f2ba2c67f16c4b4ad26c80ca5d4def8ee9ac75bdb337b7c5" Namespace="calico-system" Pod="csi-node-driver-88rhw" WorkloadEndpoint="localhost-k8s-csi--node--driver--88rhw-eth0" Jul 10 00:22:45.356441 containerd[1592]: 2025-07-10 00:22:45.325 [INFO][3995] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ecebe68e8a6ddaf4f2ba2c67f16c4b4ad26c80ca5d4def8ee9ac75bdb337b7c5" Namespace="calico-system" Pod="csi-node-driver-88rhw" WorkloadEndpoint="localhost-k8s-csi--node--driver--88rhw-eth0" Jul 10 00:22:45.356536 containerd[1592]: 2025-07-10 00:22:45.329 [INFO][3995] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ecebe68e8a6ddaf4f2ba2c67f16c4b4ad26c80ca5d4def8ee9ac75bdb337b7c5" Namespace="calico-system" Pod="csi-node-driver-88rhw" WorkloadEndpoint="localhost-k8s-csi--node--driver--88rhw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--88rhw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a12e8b81-dd74-48fe-8fc0-6670947a4f3a", ResourceVersion:"675", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 22, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ecebe68e8a6ddaf4f2ba2c67f16c4b4ad26c80ca5d4def8ee9ac75bdb337b7c5", Pod:"csi-node-driver-88rhw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicaeb231748c", MAC:"6a:25:d9:28:95:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:22:45.356612 containerd[1592]: 2025-07-10 00:22:45.346 [INFO][3995] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ecebe68e8a6ddaf4f2ba2c67f16c4b4ad26c80ca5d4def8ee9ac75bdb337b7c5" Namespace="calico-system" Pod="csi-node-driver-88rhw" WorkloadEndpoint="localhost-k8s-csi--node--driver--88rhw-eth0" Jul 10 00:22:45.695112 systemd[1]: Started sshd@8-10.0.0.84:22-10.0.0.1:47990.service - OpenSSH per-connection server daemon (10.0.0.1:47990). 
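The entries above show the full shape of one Calico IPAM auto-assignment for csi-node-driver-88rhw: take the host-wide IPAM lock, confirm the host's affinity for the 192.168.88.128/26 block, load the block, claim one address under a per-request handle, write the block back, then release the lock. The toy allocator below only mirrors that visible sequence (in memory, first-free, single block); it is not Calico's implementation, which persists blocks in the datastore, retries on write conflicts, and manages multiple pools and handles.

```python
import ipaddress
import threading

class TinyBlockAllocator:
    """Toy allocator mirroring the steps visible in the log: lock -> affinity -> block -> claim."""

    def __init__(self, cidr: str, host: str):
        self.block = ipaddress.ip_network(cidr)
        self.affine_host = host          # host-affine block, as in "Trying affinity for 192.168.88.128/26"
        self.lock = threading.Lock()     # stands in for the host-wide IPAM lock
        self.allocations = {}            # ip -> handle, stands in for the datastore block contents

    def auto_assign(self, host: str, handle: str) -> str:
        with self.lock:                                  # "About to acquire host-wide IPAM lock."
            if host != self.affine_host:                 # affinity check (toy model: one block, one host)
                raise RuntimeError("no affine block for this host in this toy model")
            for ip in self.block.hosts():                # walk the block, skipping network/broadcast
                ip = str(ip)
                if ip not in self.allocations:           # first free address
                    self.allocations[ip] = handle        # "Writing block in order to claim IPs"
                    return f"{ip}/32"
            raise RuntimeError("block exhausted")

alloc = TinyBlockAllocator("192.168.88.128/26", host="localhost")
print(alloc.auto_assign("localhost", "handle-csi-node-driver"))    # 192.168.88.129/32
print(alloc.auto_assign("localhost", "handle-calico-apiserver"))   # 192.168.88.130/32
```

As in the log, the first address handed out is .129, because the first and last addresses of the /26 are not offered by hosts().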
Jul 10 00:22:45.736486 systemd-networkd[1493]: calib9349f8781d: Link UP Jul 10 00:22:45.737350 systemd-networkd[1493]: calib9349f8781d: Gained carrier Jul 10 00:22:45.846231 containerd[1592]: time="2025-07-10T00:22:45.846060954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76cd4f54bc-mb5k5,Uid:8c700af8-d06f-4874-89d1-1e7ed2d81493,Namespace:calico-system,Attempt:0,}" Jul 10 00:22:45.853997 containerd[1592]: 2025-07-10 00:22:45.120 [INFO][4033] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:22:45.853997 containerd[1592]: 2025-07-10 00:22:45.154 [INFO][4033] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--55d6ff7666--2gc22-eth0 calico-apiserver-55d6ff7666- calico-apiserver 045d1273-99fe-41bc-8bca-0658c69cb399 805 0 2025-07-10 00:22:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55d6ff7666 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-55d6ff7666-2gc22 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib9349f8781d [] [] }} ContainerID="fb3b5d0dbab50b96d9d555edecfab2a38ad7b7cd11a3ec62859f00c647b2c21c" Namespace="calico-apiserver" Pod="calico-apiserver-55d6ff7666-2gc22" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d6ff7666--2gc22-" Jul 10 00:22:45.853997 containerd[1592]: 2025-07-10 00:22:45.155 [INFO][4033] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fb3b5d0dbab50b96d9d555edecfab2a38ad7b7cd11a3ec62859f00c647b2c21c" Namespace="calico-apiserver" Pod="calico-apiserver-55d6ff7666-2gc22" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d6ff7666--2gc22-eth0" Jul 10 00:22:45.853997 containerd[1592]: 2025-07-10 00:22:45.268 [INFO][4068] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb3b5d0dbab50b96d9d555edecfab2a38ad7b7cd11a3ec62859f00c647b2c21c" HandleID="k8s-pod-network.fb3b5d0dbab50b96d9d555edecfab2a38ad7b7cd11a3ec62859f00c647b2c21c" Workload="localhost-k8s-calico--apiserver--55d6ff7666--2gc22-eth0" Jul 10 00:22:45.854284 containerd[1592]: 2025-07-10 00:22:45.268 [INFO][4068] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fb3b5d0dbab50b96d9d555edecfab2a38ad7b7cd11a3ec62859f00c647b2c21c" HandleID="k8s-pod-network.fb3b5d0dbab50b96d9d555edecfab2a38ad7b7cd11a3ec62859f00c647b2c21c" Workload="localhost-k8s-calico--apiserver--55d6ff7666--2gc22-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f6b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-55d6ff7666-2gc22", "timestamp":"2025-07-10 00:22:45.268534623 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:22:45.854284 containerd[1592]: 2025-07-10 00:22:45.268 [INFO][4068] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:22:45.854284 containerd[1592]: 2025-07-10 00:22:45.303 [INFO][4068] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:22:45.854284 containerd[1592]: 2025-07-10 00:22:45.303 [INFO][4068] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:22:45.854284 containerd[1592]: 2025-07-10 00:22:45.375 [INFO][4068] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fb3b5d0dbab50b96d9d555edecfab2a38ad7b7cd11a3ec62859f00c647b2c21c" host="localhost" Jul 10 00:22:45.854284 containerd[1592]: 2025-07-10 00:22:45.665 [INFO][4068] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:22:45.854284 containerd[1592]: 2025-07-10 00:22:45.673 [INFO][4068] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:22:45.854284 containerd[1592]: 2025-07-10 00:22:45.676 [INFO][4068] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:22:45.854284 containerd[1592]: 2025-07-10 00:22:45.678 [INFO][4068] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:22:45.854284 containerd[1592]: 2025-07-10 00:22:45.678 [INFO][4068] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fb3b5d0dbab50b96d9d555edecfab2a38ad7b7cd11a3ec62859f00c647b2c21c" host="localhost" Jul 10 00:22:45.854523 containerd[1592]: 2025-07-10 00:22:45.680 [INFO][4068] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fb3b5d0dbab50b96d9d555edecfab2a38ad7b7cd11a3ec62859f00c647b2c21c Jul 10 00:22:45.854523 containerd[1592]: 2025-07-10 00:22:45.700 [INFO][4068] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fb3b5d0dbab50b96d9d555edecfab2a38ad7b7cd11a3ec62859f00c647b2c21c" host="localhost" Jul 10 00:22:45.854523 containerd[1592]: 2025-07-10 00:22:45.727 [INFO][4068] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.fb3b5d0dbab50b96d9d555edecfab2a38ad7b7cd11a3ec62859f00c647b2c21c" host="localhost" Jul 10 00:22:45.854523 containerd[1592]: 2025-07-10 00:22:45.728 [INFO][4068] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.fb3b5d0dbab50b96d9d555edecfab2a38ad7b7cd11a3ec62859f00c647b2c21c" host="localhost" Jul 10 00:22:45.854523 containerd[1592]: 2025-07-10 00:22:45.728 [INFO][4068] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:22:45.854523 containerd[1592]: 2025-07-10 00:22:45.728 [INFO][4068] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="fb3b5d0dbab50b96d9d555edecfab2a38ad7b7cd11a3ec62859f00c647b2c21c" HandleID="k8s-pod-network.fb3b5d0dbab50b96d9d555edecfab2a38ad7b7cd11a3ec62859f00c647b2c21c" Workload="localhost-k8s-calico--apiserver--55d6ff7666--2gc22-eth0" Jul 10 00:22:45.854686 containerd[1592]: 2025-07-10 00:22:45.732 [INFO][4033] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fb3b5d0dbab50b96d9d555edecfab2a38ad7b7cd11a3ec62859f00c647b2c21c" Namespace="calico-apiserver" Pod="calico-apiserver-55d6ff7666-2gc22" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d6ff7666--2gc22-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55d6ff7666--2gc22-eth0", GenerateName:"calico-apiserver-55d6ff7666-", Namespace:"calico-apiserver", SelfLink:"", UID:"045d1273-99fe-41bc-8bca-0658c69cb399", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 22, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d6ff7666", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-55d6ff7666-2gc22", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9349f8781d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:22:45.854760 containerd[1592]: 2025-07-10 00:22:45.732 [INFO][4033] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="fb3b5d0dbab50b96d9d555edecfab2a38ad7b7cd11a3ec62859f00c647b2c21c" Namespace="calico-apiserver" Pod="calico-apiserver-55d6ff7666-2gc22" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d6ff7666--2gc22-eth0" Jul 10 00:22:45.854760 containerd[1592]: 2025-07-10 00:22:45.732 [INFO][4033] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9349f8781d ContainerID="fb3b5d0dbab50b96d9d555edecfab2a38ad7b7cd11a3ec62859f00c647b2c21c" Namespace="calico-apiserver" Pod="calico-apiserver-55d6ff7666-2gc22" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d6ff7666--2gc22-eth0" Jul 10 00:22:45.854760 containerd[1592]: 2025-07-10 00:22:45.737 [INFO][4033] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb3b5d0dbab50b96d9d555edecfab2a38ad7b7cd11a3ec62859f00c647b2c21c" Namespace="calico-apiserver" Pod="calico-apiserver-55d6ff7666-2gc22" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d6ff7666--2gc22-eth0" Jul 10 00:22:45.854844 containerd[1592]: 2025-07-10 00:22:45.738 [INFO][4033] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="fb3b5d0dbab50b96d9d555edecfab2a38ad7b7cd11a3ec62859f00c647b2c21c" Namespace="calico-apiserver" Pod="calico-apiserver-55d6ff7666-2gc22" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d6ff7666--2gc22-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55d6ff7666--2gc22-eth0", GenerateName:"calico-apiserver-55d6ff7666-", Namespace:"calico-apiserver", SelfLink:"", UID:"045d1273-99fe-41bc-8bca-0658c69cb399", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 22, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d6ff7666", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb3b5d0dbab50b96d9d555edecfab2a38ad7b7cd11a3ec62859f00c647b2c21c", Pod:"calico-apiserver-55d6ff7666-2gc22", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9349f8781d", MAC:"26:90:72:aa:cd:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:22:45.854910 containerd[1592]: 2025-07-10 00:22:45.843 [INFO][4033] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fb3b5d0dbab50b96d9d555edecfab2a38ad7b7cd11a3ec62859f00c647b2c21c" Namespace="calico-apiserver" Pod="calico-apiserver-55d6ff7666-2gc22" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d6ff7666--2gc22-eth0" Jul 10 00:22:45.870796 kubelet[2778]: I0710 00:22:45.870720 2778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8c4b9fb-0b46-4a38-b818-1aad57ef6f27" path="/var/lib/kubelet/pods/c8c4b9fb-0b46-4a38-b818-1aad57ef6f27/volumes" Jul 10 00:22:45.953341 sshd[4136]: Accepted publickey for core from 10.0.0.1 port 47990 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:22:45.957684 sshd-session[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:22:45.981978 systemd-logind[1573]: New session 8 of user core. Jul 10 00:22:45.990177 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 10 00:22:46.094261 systemd-networkd[1493]: cali8908ada053f: Link UP Jul 10 00:22:46.095261 systemd-networkd[1493]: cali8908ada053f: Gained carrier Jul 10 00:22:46.119240 containerd[1592]: 2025-07-10 00:22:45.213 [INFO][4055] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:22:46.119240 containerd[1592]: 2025-07-10 00:22:45.246 [INFO][4055] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--789d9b6b96--xbrbx-eth0 calico-kube-controllers-789d9b6b96- calico-system 7d21800c-52b7-4ce7-9d68-1834b0d30eb2 808 0 2025-07-10 00:22:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:789d9b6b96 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-789d9b6b96-xbrbx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8908ada053f [] [] }} ContainerID="8ecc5365fc95bcbddbe33e7bccedc739358f0ba4b5ba483c2af80c39bbd5beb1" Namespace="calico-system" Pod="calico-kube-controllers-789d9b6b96-xbrbx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--789d9b6b96--xbrbx-" Jul 10 00:22:46.119240 containerd[1592]: 2025-07-10 00:22:45.247 [INFO][4055] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8ecc5365fc95bcbddbe33e7bccedc739358f0ba4b5ba483c2af80c39bbd5beb1" Namespace="calico-system" Pod="calico-kube-controllers-789d9b6b96-xbrbx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--789d9b6b96--xbrbx-eth0" Jul 10 00:22:46.119240 containerd[1592]: 2025-07-10 00:22:45.284 [INFO][4077] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ecc5365fc95bcbddbe33e7bccedc739358f0ba4b5ba483c2af80c39bbd5beb1" HandleID="k8s-pod-network.8ecc5365fc95bcbddbe33e7bccedc739358f0ba4b5ba483c2af80c39bbd5beb1" Workload="localhost-k8s-calico--kube--controllers--789d9b6b96--xbrbx-eth0" Jul 10 00:22:46.119668 containerd[1592]: 2025-07-10 00:22:45.284 [INFO][4077] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8ecc5365fc95bcbddbe33e7bccedc739358f0ba4b5ba483c2af80c39bbd5beb1" HandleID="k8s-pod-network.8ecc5365fc95bcbddbe33e7bccedc739358f0ba4b5ba483c2af80c39bbd5beb1" Workload="localhost-k8s-calico--kube--controllers--789d9b6b96--xbrbx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a3660), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-789d9b6b96-xbrbx", "timestamp":"2025-07-10 00:22:45.284617837 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:22:46.119668 containerd[1592]: 2025-07-10 00:22:45.284 [INFO][4077] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:22:46.119668 containerd[1592]: 2025-07-10 00:22:45.728 [INFO][4077] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:22:46.119668 containerd[1592]: 2025-07-10 00:22:45.728 [INFO][4077] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:22:46.119668 containerd[1592]: 2025-07-10 00:22:45.842 [INFO][4077] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8ecc5365fc95bcbddbe33e7bccedc739358f0ba4b5ba483c2af80c39bbd5beb1" host="localhost" Jul 10 00:22:46.119668 containerd[1592]: 2025-07-10 00:22:45.850 [INFO][4077] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:22:46.119668 containerd[1592]: 2025-07-10 00:22:45.861 [INFO][4077] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:22:46.119668 containerd[1592]: 2025-07-10 00:22:45.867 [INFO][4077] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:22:46.119668 containerd[1592]: 2025-07-10 00:22:45.874 [INFO][4077] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:22:46.119668 containerd[1592]: 2025-07-10 00:22:45.874 [INFO][4077] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8ecc5365fc95bcbddbe33e7bccedc739358f0ba4b5ba483c2af80c39bbd5beb1" host="localhost" Jul 10 00:22:46.119986 containerd[1592]: 2025-07-10 00:22:45.878 [INFO][4077] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8ecc5365fc95bcbddbe33e7bccedc739358f0ba4b5ba483c2af80c39bbd5beb1 Jul 10 00:22:46.119986 containerd[1592]: 2025-07-10 00:22:46.061 [INFO][4077] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8ecc5365fc95bcbddbe33e7bccedc739358f0ba4b5ba483c2af80c39bbd5beb1" host="localhost" Jul 10 00:22:46.119986 containerd[1592]: 2025-07-10 00:22:46.084 [INFO][4077] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.8ecc5365fc95bcbddbe33e7bccedc739358f0ba4b5ba483c2af80c39bbd5beb1" host="localhost" Jul 10 00:22:46.119986 containerd[1592]: 2025-07-10 00:22:46.084 [INFO][4077] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8ecc5365fc95bcbddbe33e7bccedc739358f0ba4b5ba483c2af80c39bbd5beb1" host="localhost" Jul 10 00:22:46.119986 containerd[1592]: 2025-07-10 00:22:46.084 [INFO][4077] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
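Three workloads have now claimed addresses from the same host-affine block (.129, .130, .131), and two more (.132, .133) follow below. A standard-library check that the /32s seen in this log all fall inside 192.168.88.128/26, and how many addresses such a block holds:

```python
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")      # block with affinity for host "localhost"
assigned = ["192.168.88.129", "192.168.88.130", "192.168.88.131",
            "192.168.88.132", "192.168.88.133"]        # the five /32s handed out across this log

print(block.num_addresses)                             # 64 addresses per /26 block
for ip in assigned:
    print(ip, ipaddress.ip_address(ip) in block)       # all True
```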
Jul 10 00:22:46.119986 containerd[1592]: 2025-07-10 00:22:46.084 [INFO][4077] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8ecc5365fc95bcbddbe33e7bccedc739358f0ba4b5ba483c2af80c39bbd5beb1" HandleID="k8s-pod-network.8ecc5365fc95bcbddbe33e7bccedc739358f0ba4b5ba483c2af80c39bbd5beb1" Workload="localhost-k8s-calico--kube--controllers--789d9b6b96--xbrbx-eth0" Jul 10 00:22:46.120241 containerd[1592]: 2025-07-10 00:22:46.089 [INFO][4055] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8ecc5365fc95bcbddbe33e7bccedc739358f0ba4b5ba483c2af80c39bbd5beb1" Namespace="calico-system" Pod="calico-kube-controllers-789d9b6b96-xbrbx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--789d9b6b96--xbrbx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--789d9b6b96--xbrbx-eth0", GenerateName:"calico-kube-controllers-789d9b6b96-", Namespace:"calico-system", SelfLink:"", UID:"7d21800c-52b7-4ce7-9d68-1834b0d30eb2", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 22, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"789d9b6b96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-789d9b6b96-xbrbx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8908ada053f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:22:46.120319 containerd[1592]: 2025-07-10 00:22:46.089 [INFO][4055] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8ecc5365fc95bcbddbe33e7bccedc739358f0ba4b5ba483c2af80c39bbd5beb1" Namespace="calico-system" Pod="calico-kube-controllers-789d9b6b96-xbrbx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--789d9b6b96--xbrbx-eth0" Jul 10 00:22:46.120319 containerd[1592]: 2025-07-10 00:22:46.089 [INFO][4055] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8908ada053f ContainerID="8ecc5365fc95bcbddbe33e7bccedc739358f0ba4b5ba483c2af80c39bbd5beb1" Namespace="calico-system" Pod="calico-kube-controllers-789d9b6b96-xbrbx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--789d9b6b96--xbrbx-eth0" Jul 10 00:22:46.120319 containerd[1592]: 2025-07-10 00:22:46.095 [INFO][4055] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ecc5365fc95bcbddbe33e7bccedc739358f0ba4b5ba483c2af80c39bbd5beb1" Namespace="calico-system" Pod="calico-kube-controllers-789d9b6b96-xbrbx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--789d9b6b96--xbrbx-eth0" Jul 10 00:22:46.120422 containerd[1592]: 2025-07-10 00:22:46.096 [INFO][4055] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="8ecc5365fc95bcbddbe33e7bccedc739358f0ba4b5ba483c2af80c39bbd5beb1" Namespace="calico-system" Pod="calico-kube-controllers-789d9b6b96-xbrbx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--789d9b6b96--xbrbx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--789d9b6b96--xbrbx-eth0", GenerateName:"calico-kube-controllers-789d9b6b96-", Namespace:"calico-system", SelfLink:"", UID:"7d21800c-52b7-4ce7-9d68-1834b0d30eb2", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 22, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"789d9b6b96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ecc5365fc95bcbddbe33e7bccedc739358f0ba4b5ba483c2af80c39bbd5beb1", Pod:"calico-kube-controllers-789d9b6b96-xbrbx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8908ada053f", MAC:"6e:f5:16:af:ac:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:22:46.120496 containerd[1592]: 2025-07-10 00:22:46.113 [INFO][4055] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8ecc5365fc95bcbddbe33e7bccedc739358f0ba4b5ba483c2af80c39bbd5beb1" Namespace="calico-system" Pod="calico-kube-controllers-789d9b6b96-xbrbx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--789d9b6b96--xbrbx-eth0" Jul 10 00:22:46.355572 sshd[4226]: Connection closed by 10.0.0.1 port 47990 Jul 10 00:22:46.356230 sshd-session[4136]: pam_unix(sshd:session): session closed for user core Jul 10 00:22:46.360620 systemd-networkd[1493]: calicaeb231748c: Gained IPv6LL Jul 10 00:22:46.361183 systemd-logind[1573]: Session 8 logged out. Waiting for processes to exit. Jul 10 00:22:46.361742 systemd[1]: sshd@8-10.0.0.84:22-10.0.0.1:47990.service: Deactivated successfully. Jul 10 00:22:46.364921 systemd[1]: session-8.scope: Deactivated successfully. Jul 10 00:22:46.369023 systemd-logind[1573]: Removed session 8. 
Jul 10 00:22:46.568341 containerd[1592]: time="2025-07-10T00:22:46.568271921Z" level=info msg="connecting to shim fb3b5d0dbab50b96d9d555edecfab2a38ad7b7cd11a3ec62859f00c647b2c21c" address="unix:///run/containerd/s/bb96ce77091dbe4b8f9acedfda5dfa56f00db399f1e9256ca4c94806451260ff" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:22:46.570179 containerd[1592]: time="2025-07-10T00:22:46.570115478Z" level=info msg="connecting to shim 8ecc5365fc95bcbddbe33e7bccedc739358f0ba4b5ba483c2af80c39bbd5beb1" address="unix:///run/containerd/s/2b0ff685cdc2dd670f5ca4675bb465ac6cea8160230c1f326786436a73c41912" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:22:46.572156 systemd-networkd[1493]: calia4d6d8f8955: Link UP Jul 10 00:22:46.575101 systemd-networkd[1493]: calia4d6d8f8955: Gained carrier Jul 10 00:22:46.593297 containerd[1592]: time="2025-07-10T00:22:46.593251425Z" level=info msg="connecting to shim ecebe68e8a6ddaf4f2ba2c67f16c4b4ad26c80ca5d4def8ee9ac75bdb337b7c5" address="unix:///run/containerd/s/5b5e4dcf9a8179b7ccab315d93d0d7394b57d79d11901a3d4bdd073904b23143" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:22:46.619963 containerd[1592]: 2025-07-10 00:22:46.412 [INFO][4258] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--76cd4f54bc--mb5k5-eth0 whisker-76cd4f54bc- calico-system 8c700af8-d06f-4874-89d1-1e7ed2d81493 893 0 2025-07-10 00:22:45 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:76cd4f54bc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-76cd4f54bc-mb5k5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia4d6d8f8955 [] [] }} ContainerID="39c784068b4cb10ccbc2b7504d000fc479b8c654f27fff432bec13a503bf4e28" Namespace="calico-system" Pod="whisker-76cd4f54bc-mb5k5" WorkloadEndpoint="localhost-k8s-whisker--76cd4f54bc--mb5k5-" Jul 10 00:22:46.619963 containerd[1592]: 2025-07-10 00:22:46.413 [INFO][4258] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="39c784068b4cb10ccbc2b7504d000fc479b8c654f27fff432bec13a503bf4e28" Namespace="calico-system" Pod="whisker-76cd4f54bc-mb5k5" WorkloadEndpoint="localhost-k8s-whisker--76cd4f54bc--mb5k5-eth0" Jul 10 00:22:46.619963 containerd[1592]: 2025-07-10 00:22:46.460 [INFO][4276] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="39c784068b4cb10ccbc2b7504d000fc479b8c654f27fff432bec13a503bf4e28" HandleID="k8s-pod-network.39c784068b4cb10ccbc2b7504d000fc479b8c654f27fff432bec13a503bf4e28" Workload="localhost-k8s-whisker--76cd4f54bc--mb5k5-eth0" Jul 10 00:22:46.620344 containerd[1592]: 2025-07-10 00:22:46.461 [INFO][4276] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="39c784068b4cb10ccbc2b7504d000fc479b8c654f27fff432bec13a503bf4e28" HandleID="k8s-pod-network.39c784068b4cb10ccbc2b7504d000fc479b8c654f27fff432bec13a503bf4e28" Workload="localhost-k8s-whisker--76cd4f54bc--mb5k5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00021f640), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-76cd4f54bc-mb5k5", "timestamp":"2025-07-10 00:22:46.460071546 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:22:46.620344 
containerd[1592]: 2025-07-10 00:22:46.461 [INFO][4276] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:22:46.620344 containerd[1592]: 2025-07-10 00:22:46.461 [INFO][4276] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:22:46.620344 containerd[1592]: 2025-07-10 00:22:46.461 [INFO][4276] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:22:46.620344 containerd[1592]: 2025-07-10 00:22:46.475 [INFO][4276] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.39c784068b4cb10ccbc2b7504d000fc479b8c654f27fff432bec13a503bf4e28" host="localhost" Jul 10 00:22:46.620344 containerd[1592]: 2025-07-10 00:22:46.483 [INFO][4276] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:22:46.620344 containerd[1592]: 2025-07-10 00:22:46.489 [INFO][4276] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:22:46.620344 containerd[1592]: 2025-07-10 00:22:46.492 [INFO][4276] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:22:46.620344 containerd[1592]: 2025-07-10 00:22:46.497 [INFO][4276] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:22:46.620344 containerd[1592]: 2025-07-10 00:22:46.497 [INFO][4276] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.39c784068b4cb10ccbc2b7504d000fc479b8c654f27fff432bec13a503bf4e28" host="localhost" Jul 10 00:22:46.620641 containerd[1592]: 2025-07-10 00:22:46.500 [INFO][4276] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.39c784068b4cb10ccbc2b7504d000fc479b8c654f27fff432bec13a503bf4e28 Jul 10 00:22:46.620641 containerd[1592]: 2025-07-10 00:22:46.511 [INFO][4276] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.39c784068b4cb10ccbc2b7504d000fc479b8c654f27fff432bec13a503bf4e28" host="localhost" Jul 10 00:22:46.620641 containerd[1592]: 2025-07-10 00:22:46.526 [INFO][4276] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.39c784068b4cb10ccbc2b7504d000fc479b8c654f27fff432bec13a503bf4e28" host="localhost" Jul 10 00:22:46.620641 containerd[1592]: 2025-07-10 00:22:46.526 [INFO][4276] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.39c784068b4cb10ccbc2b7504d000fc479b8c654f27fff432bec13a503bf4e28" host="localhost" Jul 10 00:22:46.620641 containerd[1592]: 2025-07-10 00:22:46.526 [INFO][4276] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
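By this point the same IPAM exchange has repeated for four workloads, and the entry that matters for tracing connectivity is the one carrying "Calico CNI IPAM assigned addresses". A small script for pulling a workload-to-address table out of a journal dump like this one; the regex simply matches the literal fields visible in these entries, and the script name in the usage comment is arbitrary.

```python
import re
import sys

# Matches entries of the form seen above, e.g.
#   ... Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="..." HandleID="..." Workload="localhost-k8s-csi--node--driver--88rhw-eth0"
PATTERN = re.compile(
    r'Calico CNI IPAM assigned addresses IPv4=\[([^\]]*)\].*?Workload="([^"]+)"'
)

for line in sys.stdin:
    for ipv4, workload in PATTERN.findall(line):
        print(f"{workload}\t{ipv4}")

# Usage example:
#   journalctl -o short-precise | python3 calico_assigned_ips.py
#   localhost-k8s-csi--node--driver--88rhw-eth0    192.168.88.129/26
```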
Jul 10 00:22:46.620641 containerd[1592]: 2025-07-10 00:22:46.527 [INFO][4276] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="39c784068b4cb10ccbc2b7504d000fc479b8c654f27fff432bec13a503bf4e28" HandleID="k8s-pod-network.39c784068b4cb10ccbc2b7504d000fc479b8c654f27fff432bec13a503bf4e28" Workload="localhost-k8s-whisker--76cd4f54bc--mb5k5-eth0" Jul 10 00:22:46.620787 containerd[1592]: 2025-07-10 00:22:46.537 [INFO][4258] cni-plugin/k8s.go 418: Populated endpoint ContainerID="39c784068b4cb10ccbc2b7504d000fc479b8c654f27fff432bec13a503bf4e28" Namespace="calico-system" Pod="whisker-76cd4f54bc-mb5k5" WorkloadEndpoint="localhost-k8s-whisker--76cd4f54bc--mb5k5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--76cd4f54bc--mb5k5-eth0", GenerateName:"whisker-76cd4f54bc-", Namespace:"calico-system", SelfLink:"", UID:"8c700af8-d06f-4874-89d1-1e7ed2d81493", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 22, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76cd4f54bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-76cd4f54bc-mb5k5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia4d6d8f8955", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:22:46.620787 containerd[1592]: 2025-07-10 00:22:46.537 [INFO][4258] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="39c784068b4cb10ccbc2b7504d000fc479b8c654f27fff432bec13a503bf4e28" Namespace="calico-system" Pod="whisker-76cd4f54bc-mb5k5" WorkloadEndpoint="localhost-k8s-whisker--76cd4f54bc--mb5k5-eth0" Jul 10 00:22:46.620870 containerd[1592]: 2025-07-10 00:22:46.537 [INFO][4258] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia4d6d8f8955 ContainerID="39c784068b4cb10ccbc2b7504d000fc479b8c654f27fff432bec13a503bf4e28" Namespace="calico-system" Pod="whisker-76cd4f54bc-mb5k5" WorkloadEndpoint="localhost-k8s-whisker--76cd4f54bc--mb5k5-eth0" Jul 10 00:22:46.620870 containerd[1592]: 2025-07-10 00:22:46.574 [INFO][4258] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="39c784068b4cb10ccbc2b7504d000fc479b8c654f27fff432bec13a503bf4e28" Namespace="calico-system" Pod="whisker-76cd4f54bc-mb5k5" WorkloadEndpoint="localhost-k8s-whisker--76cd4f54bc--mb5k5-eth0" Jul 10 00:22:46.620918 containerd[1592]: 2025-07-10 00:22:46.576 [INFO][4258] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="39c784068b4cb10ccbc2b7504d000fc479b8c654f27fff432bec13a503bf4e28" Namespace="calico-system" Pod="whisker-76cd4f54bc-mb5k5" WorkloadEndpoint="localhost-k8s-whisker--76cd4f54bc--mb5k5-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--76cd4f54bc--mb5k5-eth0", GenerateName:"whisker-76cd4f54bc-", Namespace:"calico-system", SelfLink:"", UID:"8c700af8-d06f-4874-89d1-1e7ed2d81493", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 22, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76cd4f54bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"39c784068b4cb10ccbc2b7504d000fc479b8c654f27fff432bec13a503bf4e28", Pod:"whisker-76cd4f54bc-mb5k5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia4d6d8f8955", MAC:"6a:b3:dd:a6:64:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:22:46.620979 containerd[1592]: 2025-07-10 00:22:46.605 [INFO][4258] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="39c784068b4cb10ccbc2b7504d000fc479b8c654f27fff432bec13a503bf4e28" Namespace="calico-system" Pod="whisker-76cd4f54bc-mb5k5" WorkloadEndpoint="localhost-k8s-whisker--76cd4f54bc--mb5k5-eth0" Jul 10 00:22:46.623383 systemd[1]: Started cri-containerd-fb3b5d0dbab50b96d9d555edecfab2a38ad7b7cd11a3ec62859f00c647b2c21c.scope - libcontainer container fb3b5d0dbab50b96d9d555edecfab2a38ad7b7cd11a3ec62859f00c647b2c21c. Jul 10 00:22:46.628834 systemd[1]: Started cri-containerd-8ecc5365fc95bcbddbe33e7bccedc739358f0ba4b5ba483c2af80c39bbd5beb1.scope - libcontainer container 8ecc5365fc95bcbddbe33e7bccedc739358f0ba4b5ba483c2af80c39bbd5beb1. Jul 10 00:22:46.647511 systemd[1]: Started cri-containerd-ecebe68e8a6ddaf4f2ba2c67f16c4b4ad26c80ca5d4def8ee9ac75bdb337b7c5.scope - libcontainer container ecebe68e8a6ddaf4f2ba2c67f16c4b4ad26c80ca5d4def8ee9ac75bdb337b7c5. Jul 10 00:22:46.673305 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:22:46.681165 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:22:46.691484 containerd[1592]: time="2025-07-10T00:22:46.691209169Z" level=info msg="connecting to shim 39c784068b4cb10ccbc2b7504d000fc479b8c654f27fff432bec13a503bf4e28" address="unix:///run/containerd/s/40bf94580479b6668ed4752a4ecb90fe03e0d00e5cdd75a3f0bece10478c7f16" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:22:46.697679 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:22:46.734658 systemd[1]: Started cri-containerd-39c784068b4cb10ccbc2b7504d000fc479b8c654f27fff432bec13a503bf4e28.scope - libcontainer container 39c784068b4cb10ccbc2b7504d000fc479b8c654f27fff432bec13a503bf4e28. 
Jul 10 00:22:46.740785 containerd[1592]: time="2025-07-10T00:22:46.740701657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-88rhw,Uid:a12e8b81-dd74-48fe-8fc0-6670947a4f3a,Namespace:calico-system,Attempt:0,} returns sandbox id \"ecebe68e8a6ddaf4f2ba2c67f16c4b4ad26c80ca5d4def8ee9ac75bdb337b7c5\"" Jul 10 00:22:46.761639 containerd[1592]: time="2025-07-10T00:22:46.760867673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 10 00:22:46.791288 containerd[1592]: time="2025-07-10T00:22:46.791174445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d6ff7666-2gc22,Uid:045d1273-99fe-41bc-8bca-0658c69cb399,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"fb3b5d0dbab50b96d9d555edecfab2a38ad7b7cd11a3ec62859f00c647b2c21c\"" Jul 10 00:22:46.795386 containerd[1592]: time="2025-07-10T00:22:46.795162450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-789d9b6b96-xbrbx,Uid:7d21800c-52b7-4ce7-9d68-1834b0d30eb2,Namespace:calico-system,Attempt:0,} returns sandbox id \"8ecc5365fc95bcbddbe33e7bccedc739358f0ba4b5ba483c2af80c39bbd5beb1\"" Jul 10 00:22:46.807500 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:22:46.858870 containerd[1592]: time="2025-07-10T00:22:46.858311586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76cd4f54bc-mb5k5,Uid:8c700af8-d06f-4874-89d1-1e7ed2d81493,Namespace:calico-system,Attempt:0,} returns sandbox id \"39c784068b4cb10ccbc2b7504d000fc479b8c654f27fff432bec13a503bf4e28\"" Jul 10 00:22:46.858870 containerd[1592]: time="2025-07-10T00:22:46.858871136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-pvk8n,Uid:b69491d4-9d8c-4fcc-b139-4b3aa1e145d5,Namespace:calico-system,Attempt:0,}" Jul 10 00:22:46.864830 containerd[1592]: time="2025-07-10T00:22:46.864698990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d6ff7666-t5pt2,Uid:07328e55-66df-496e-896a-830734260aeb,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:22:46.905701 systemd-networkd[1493]: vxlan.calico: Link UP Jul 10 00:22:46.905715 systemd-networkd[1493]: vxlan.calico: Gained carrier Jul 10 00:22:47.083697 systemd-networkd[1493]: cali9bc72413086: Link UP Jul 10 00:22:47.084849 systemd-networkd[1493]: cali9bc72413086: Gained carrier Jul 10 00:22:47.162163 containerd[1592]: 2025-07-10 00:22:46.950 [INFO][4486] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--pvk8n-eth0 goldmane-58fd7646b9- calico-system b69491d4-9d8c-4fcc-b139-4b3aa1e145d5 809 0 2025-07-10 00:22:17 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-pvk8n eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9bc72413086 [] [] }} ContainerID="4aea71ec8c2282cd64b3f483fb9fe549955d13379936d8ed55dd5f24be93aef4" Namespace="calico-system" Pod="goldmane-58fd7646b9-pvk8n" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--pvk8n-" Jul 10 00:22:47.162163 containerd[1592]: 2025-07-10 00:22:46.951 [INFO][4486] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4aea71ec8c2282cd64b3f483fb9fe549955d13379936d8ed55dd5f24be93aef4" Namespace="calico-system" 
Pod="goldmane-58fd7646b9-pvk8n" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--pvk8n-eth0" Jul 10 00:22:47.162163 containerd[1592]: 2025-07-10 00:22:47.009 [INFO][4534] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4aea71ec8c2282cd64b3f483fb9fe549955d13379936d8ed55dd5f24be93aef4" HandleID="k8s-pod-network.4aea71ec8c2282cd64b3f483fb9fe549955d13379936d8ed55dd5f24be93aef4" Workload="localhost-k8s-goldmane--58fd7646b9--pvk8n-eth0" Jul 10 00:22:47.162681 containerd[1592]: 2025-07-10 00:22:47.009 [INFO][4534] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4aea71ec8c2282cd64b3f483fb9fe549955d13379936d8ed55dd5f24be93aef4" HandleID="k8s-pod-network.4aea71ec8c2282cd64b3f483fb9fe549955d13379936d8ed55dd5f24be93aef4" Workload="localhost-k8s-goldmane--58fd7646b9--pvk8n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000134510), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-pvk8n", "timestamp":"2025-07-10 00:22:47.009421505 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:22:47.162681 containerd[1592]: 2025-07-10 00:22:47.009 [INFO][4534] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:22:47.162681 containerd[1592]: 2025-07-10 00:22:47.009 [INFO][4534] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:22:47.162681 containerd[1592]: 2025-07-10 00:22:47.009 [INFO][4534] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:22:47.162681 containerd[1592]: 2025-07-10 00:22:47.023 [INFO][4534] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4aea71ec8c2282cd64b3f483fb9fe549955d13379936d8ed55dd5f24be93aef4" host="localhost" Jul 10 00:22:47.162681 containerd[1592]: 2025-07-10 00:22:47.030 [INFO][4534] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:22:47.162681 containerd[1592]: 2025-07-10 00:22:47.036 [INFO][4534] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:22:47.162681 containerd[1592]: 2025-07-10 00:22:47.038 [INFO][4534] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:22:47.162681 containerd[1592]: 2025-07-10 00:22:47.041 [INFO][4534] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:22:47.162681 containerd[1592]: 2025-07-10 00:22:47.041 [INFO][4534] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4aea71ec8c2282cd64b3f483fb9fe549955d13379936d8ed55dd5f24be93aef4" host="localhost" Jul 10 00:22:47.162906 containerd[1592]: 2025-07-10 00:22:47.043 [INFO][4534] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4aea71ec8c2282cd64b3f483fb9fe549955d13379936d8ed55dd5f24be93aef4 Jul 10 00:22:47.162906 containerd[1592]: 2025-07-10 00:22:47.052 [INFO][4534] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4aea71ec8c2282cd64b3f483fb9fe549955d13379936d8ed55dd5f24be93aef4" host="localhost" Jul 10 00:22:47.162906 containerd[1592]: 2025-07-10 00:22:47.073 [INFO][4534] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.4aea71ec8c2282cd64b3f483fb9fe549955d13379936d8ed55dd5f24be93aef4" host="localhost" Jul 10 00:22:47.162906 containerd[1592]: 2025-07-10 00:22:47.074 [INFO][4534] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.4aea71ec8c2282cd64b3f483fb9fe549955d13379936d8ed55dd5f24be93aef4" host="localhost" Jul 10 00:22:47.162906 containerd[1592]: 2025-07-10 00:22:47.074 [INFO][4534] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:22:47.162906 containerd[1592]: 2025-07-10 00:22:47.074 [INFO][4534] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="4aea71ec8c2282cd64b3f483fb9fe549955d13379936d8ed55dd5f24be93aef4" HandleID="k8s-pod-network.4aea71ec8c2282cd64b3f483fb9fe549955d13379936d8ed55dd5f24be93aef4" Workload="localhost-k8s-goldmane--58fd7646b9--pvk8n-eth0" Jul 10 00:22:47.163061 containerd[1592]: 2025-07-10 00:22:47.079 [INFO][4486] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4aea71ec8c2282cd64b3f483fb9fe549955d13379936d8ed55dd5f24be93aef4" Namespace="calico-system" Pod="goldmane-58fd7646b9-pvk8n" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--pvk8n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--pvk8n-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"b69491d4-9d8c-4fcc-b139-4b3aa1e145d5", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 22, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-pvk8n", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9bc72413086", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:22:47.163061 containerd[1592]: 2025-07-10 00:22:47.079 [INFO][4486] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="4aea71ec8c2282cd64b3f483fb9fe549955d13379936d8ed55dd5f24be93aef4" Namespace="calico-system" Pod="goldmane-58fd7646b9-pvk8n" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--pvk8n-eth0" Jul 10 00:22:47.163149 containerd[1592]: 2025-07-10 00:22:47.079 [INFO][4486] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9bc72413086 ContainerID="4aea71ec8c2282cd64b3f483fb9fe549955d13379936d8ed55dd5f24be93aef4" Namespace="calico-system" Pod="goldmane-58fd7646b9-pvk8n" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--pvk8n-eth0" Jul 10 00:22:47.163149 containerd[1592]: 2025-07-10 00:22:47.088 [INFO][4486] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4aea71ec8c2282cd64b3f483fb9fe549955d13379936d8ed55dd5f24be93aef4" 
Namespace="calico-system" Pod="goldmane-58fd7646b9-pvk8n" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--pvk8n-eth0" Jul 10 00:22:47.163209 containerd[1592]: 2025-07-10 00:22:47.090 [INFO][4486] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4aea71ec8c2282cd64b3f483fb9fe549955d13379936d8ed55dd5f24be93aef4" Namespace="calico-system" Pod="goldmane-58fd7646b9-pvk8n" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--pvk8n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--pvk8n-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"b69491d4-9d8c-4fcc-b139-4b3aa1e145d5", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 22, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4aea71ec8c2282cd64b3f483fb9fe549955d13379936d8ed55dd5f24be93aef4", Pod:"goldmane-58fd7646b9-pvk8n", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9bc72413086", MAC:"12:99:25:ee:98:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:22:47.163292 containerd[1592]: 2025-07-10 00:22:47.152 [INFO][4486] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4aea71ec8c2282cd64b3f483fb9fe549955d13379936d8ed55dd5f24be93aef4" Namespace="calico-system" Pod="goldmane-58fd7646b9-pvk8n" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--pvk8n-eth0" Jul 10 00:22:47.191643 systemd-networkd[1493]: cali3629b2caff5: Link UP Jul 10 00:22:47.191876 systemd-networkd[1493]: cali3629b2caff5: Gained carrier Jul 10 00:22:47.216165 containerd[1592]: 2025-07-10 00:22:46.953 [INFO][4499] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--55d6ff7666--t5pt2-eth0 calico-apiserver-55d6ff7666- calico-apiserver 07328e55-66df-496e-896a-830734260aeb 806 0 2025-07-10 00:22:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55d6ff7666 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-55d6ff7666-t5pt2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3629b2caff5 [] [] }} ContainerID="420e860ac743b8412957668b5282d8dc1878be7857d9a526e5885d975112ceae" Namespace="calico-apiserver" Pod="calico-apiserver-55d6ff7666-t5pt2" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d6ff7666--t5pt2-" Jul 10 00:22:47.216165 containerd[1592]: 2025-07-10 00:22:46.953 [INFO][4499] cni-plugin/k8s.go 
74: Extracted identifiers for CmdAddK8s ContainerID="420e860ac743b8412957668b5282d8dc1878be7857d9a526e5885d975112ceae" Namespace="calico-apiserver" Pod="calico-apiserver-55d6ff7666-t5pt2" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d6ff7666--t5pt2-eth0" Jul 10 00:22:47.216165 containerd[1592]: 2025-07-10 00:22:47.017 [INFO][4537] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="420e860ac743b8412957668b5282d8dc1878be7857d9a526e5885d975112ceae" HandleID="k8s-pod-network.420e860ac743b8412957668b5282d8dc1878be7857d9a526e5885d975112ceae" Workload="localhost-k8s-calico--apiserver--55d6ff7666--t5pt2-eth0" Jul 10 00:22:47.216525 containerd[1592]: 2025-07-10 00:22:47.017 [INFO][4537] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="420e860ac743b8412957668b5282d8dc1878be7857d9a526e5885d975112ceae" HandleID="k8s-pod-network.420e860ac743b8412957668b5282d8dc1878be7857d9a526e5885d975112ceae" Workload="localhost-k8s-calico--apiserver--55d6ff7666--t5pt2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f430), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-55d6ff7666-t5pt2", "timestamp":"2025-07-10 00:22:47.017123704 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:22:47.216525 containerd[1592]: 2025-07-10 00:22:47.017 [INFO][4537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:22:47.216525 containerd[1592]: 2025-07-10 00:22:47.074 [INFO][4537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:22:47.216525 containerd[1592]: 2025-07-10 00:22:47.074 [INFO][4537] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:22:47.216525 containerd[1592]: 2025-07-10 00:22:47.124 [INFO][4537] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.420e860ac743b8412957668b5282d8dc1878be7857d9a526e5885d975112ceae" host="localhost" Jul 10 00:22:47.216525 containerd[1592]: 2025-07-10 00:22:47.156 [INFO][4537] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:22:47.216525 containerd[1592]: 2025-07-10 00:22:47.163 [INFO][4537] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:22:47.216525 containerd[1592]: 2025-07-10 00:22:47.165 [INFO][4537] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:22:47.216525 containerd[1592]: 2025-07-10 00:22:47.168 [INFO][4537] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:22:47.216525 containerd[1592]: 2025-07-10 00:22:47.168 [INFO][4537] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.420e860ac743b8412957668b5282d8dc1878be7857d9a526e5885d975112ceae" host="localhost" Jul 10 00:22:47.216840 containerd[1592]: 2025-07-10 00:22:47.169 [INFO][4537] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.420e860ac743b8412957668b5282d8dc1878be7857d9a526e5885d975112ceae Jul 10 00:22:47.216840 containerd[1592]: 2025-07-10 00:22:47.174 [INFO][4537] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.420e860ac743b8412957668b5282d8dc1878be7857d9a526e5885d975112ceae" host="localhost" Jul 10 
00:22:47.216840 containerd[1592]: 2025-07-10 00:22:47.184 [INFO][4537] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.420e860ac743b8412957668b5282d8dc1878be7857d9a526e5885d975112ceae" host="localhost" Jul 10 00:22:47.216840 containerd[1592]: 2025-07-10 00:22:47.184 [INFO][4537] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.420e860ac743b8412957668b5282d8dc1878be7857d9a526e5885d975112ceae" host="localhost" Jul 10 00:22:47.216840 containerd[1592]: 2025-07-10 00:22:47.184 [INFO][4537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:22:47.216840 containerd[1592]: 2025-07-10 00:22:47.184 [INFO][4537] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="420e860ac743b8412957668b5282d8dc1878be7857d9a526e5885d975112ceae" HandleID="k8s-pod-network.420e860ac743b8412957668b5282d8dc1878be7857d9a526e5885d975112ceae" Workload="localhost-k8s-calico--apiserver--55d6ff7666--t5pt2-eth0" Jul 10 00:22:47.217041 containerd[1592]: 2025-07-10 00:22:47.187 [INFO][4499] cni-plugin/k8s.go 418: Populated endpoint ContainerID="420e860ac743b8412957668b5282d8dc1878be7857d9a526e5885d975112ceae" Namespace="calico-apiserver" Pod="calico-apiserver-55d6ff7666-t5pt2" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d6ff7666--t5pt2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55d6ff7666--t5pt2-eth0", GenerateName:"calico-apiserver-55d6ff7666-", Namespace:"calico-apiserver", SelfLink:"", UID:"07328e55-66df-496e-896a-830734260aeb", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 22, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d6ff7666", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-55d6ff7666-t5pt2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3629b2caff5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:22:47.217130 containerd[1592]: 2025-07-10 00:22:47.187 [INFO][4499] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="420e860ac743b8412957668b5282d8dc1878be7857d9a526e5885d975112ceae" Namespace="calico-apiserver" Pod="calico-apiserver-55d6ff7666-t5pt2" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d6ff7666--t5pt2-eth0" Jul 10 00:22:47.217130 containerd[1592]: 2025-07-10 00:22:47.187 [INFO][4499] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3629b2caff5 ContainerID="420e860ac743b8412957668b5282d8dc1878be7857d9a526e5885d975112ceae" Namespace="calico-apiserver" 
Pod="calico-apiserver-55d6ff7666-t5pt2" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d6ff7666--t5pt2-eth0" Jul 10 00:22:47.217130 containerd[1592]: 2025-07-10 00:22:47.191 [INFO][4499] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="420e860ac743b8412957668b5282d8dc1878be7857d9a526e5885d975112ceae" Namespace="calico-apiserver" Pod="calico-apiserver-55d6ff7666-t5pt2" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d6ff7666--t5pt2-eth0" Jul 10 00:22:47.217228 containerd[1592]: 2025-07-10 00:22:47.194 [INFO][4499] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="420e860ac743b8412957668b5282d8dc1878be7857d9a526e5885d975112ceae" Namespace="calico-apiserver" Pod="calico-apiserver-55d6ff7666-t5pt2" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d6ff7666--t5pt2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55d6ff7666--t5pt2-eth0", GenerateName:"calico-apiserver-55d6ff7666-", Namespace:"calico-apiserver", SelfLink:"", UID:"07328e55-66df-496e-896a-830734260aeb", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 22, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55d6ff7666", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"420e860ac743b8412957668b5282d8dc1878be7857d9a526e5885d975112ceae", Pod:"calico-apiserver-55d6ff7666-t5pt2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3629b2caff5", MAC:"3a:4a:f7:a5:c5:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:22:47.217318 containerd[1592]: 2025-07-10 00:22:47.207 [INFO][4499] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="420e860ac743b8412957668b5282d8dc1878be7857d9a526e5885d975112ceae" Namespace="calico-apiserver" Pod="calico-apiserver-55d6ff7666-t5pt2" WorkloadEndpoint="localhost-k8s-calico--apiserver--55d6ff7666--t5pt2-eth0" Jul 10 00:22:47.227744 containerd[1592]: time="2025-07-10T00:22:47.227677461Z" level=info msg="connecting to shim 4aea71ec8c2282cd64b3f483fb9fe549955d13379936d8ed55dd5f24be93aef4" address="unix:///run/containerd/s/8e842335713ccf777828a275bedb13bb39dc2ba5694420b5bfee3b764456ecac" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:22:47.257979 systemd-networkd[1493]: cali8908ada053f: Gained IPv6LL Jul 10 00:22:47.265337 systemd[1]: Started cri-containerd-4aea71ec8c2282cd64b3f483fb9fe549955d13379936d8ed55dd5f24be93aef4.scope - libcontainer container 4aea71ec8c2282cd64b3f483fb9fe549955d13379936d8ed55dd5f24be93aef4. 
Jul 10 00:22:47.283130 containerd[1592]: time="2025-07-10T00:22:47.283021275Z" level=info msg="connecting to shim 420e860ac743b8412957668b5282d8dc1878be7857d9a526e5885d975112ceae" address="unix:///run/containerd/s/c9f6fa5cfe41a9b56b63df79ff10658ce2347d68ae9d6c0a17d34f38df62feea" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:22:47.293509 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:22:47.328316 systemd[1]: Started cri-containerd-420e860ac743b8412957668b5282d8dc1878be7857d9a526e5885d975112ceae.scope - libcontainer container 420e860ac743b8412957668b5282d8dc1878be7857d9a526e5885d975112ceae. Jul 10 00:22:47.347772 containerd[1592]: time="2025-07-10T00:22:47.347626494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-pvk8n,Uid:b69491d4-9d8c-4fcc-b139-4b3aa1e145d5,Namespace:calico-system,Attempt:0,} returns sandbox id \"4aea71ec8c2282cd64b3f483fb9fe549955d13379936d8ed55dd5f24be93aef4\"" Jul 10 00:22:47.353289 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:22:47.385280 systemd-networkd[1493]: calib9349f8781d: Gained IPv6LL Jul 10 00:22:47.409904 containerd[1592]: time="2025-07-10T00:22:47.409675405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55d6ff7666-t5pt2,Uid:07328e55-66df-496e-896a-830734260aeb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"420e860ac743b8412957668b5282d8dc1878be7857d9a526e5885d975112ceae\"" Jul 10 00:22:47.858761 containerd[1592]: time="2025-07-10T00:22:47.858700127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4jmdk,Uid:cf84b0f9-0fcd-430e-b845-752449dc2741,Namespace:kube-system,Attempt:0,}" Jul 10 00:22:47.962814 systemd-networkd[1493]: cali96709789df6: Link UP Jul 10 00:22:47.963133 systemd-networkd[1493]: cali96709789df6: Gained carrier Jul 10 00:22:47.988981 containerd[1592]: 2025-07-10 00:22:47.896 [INFO][4704] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--4jmdk-eth0 coredns-7c65d6cfc9- kube-system cf84b0f9-0fcd-430e-b845-752449dc2741 798 0 2025-07-10 00:22:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-4jmdk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali96709789df6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="81703af5f61d8367fee33ca20084018be8afe26ca9d716db07c911ffa83a1cec" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4jmdk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4jmdk-" Jul 10 00:22:47.988981 containerd[1592]: 2025-07-10 00:22:47.896 [INFO][4704] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="81703af5f61d8367fee33ca20084018be8afe26ca9d716db07c911ffa83a1cec" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4jmdk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4jmdk-eth0" Jul 10 00:22:47.988981 containerd[1592]: 2025-07-10 00:22:47.923 [INFO][4718] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="81703af5f61d8367fee33ca20084018be8afe26ca9d716db07c911ffa83a1cec" HandleID="k8s-pod-network.81703af5f61d8367fee33ca20084018be8afe26ca9d716db07c911ffa83a1cec" 
Workload="localhost-k8s-coredns--7c65d6cfc9--4jmdk-eth0" Jul 10 00:22:47.989594 containerd[1592]: 2025-07-10 00:22:47.924 [INFO][4718] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="81703af5f61d8367fee33ca20084018be8afe26ca9d716db07c911ffa83a1cec" HandleID="k8s-pod-network.81703af5f61d8367fee33ca20084018be8afe26ca9d716db07c911ffa83a1cec" Workload="localhost-k8s-coredns--7c65d6cfc9--4jmdk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7070), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-4jmdk", "timestamp":"2025-07-10 00:22:47.923951993 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:22:47.989594 containerd[1592]: 2025-07-10 00:22:47.924 [INFO][4718] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:22:47.989594 containerd[1592]: 2025-07-10 00:22:47.924 [INFO][4718] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:22:47.989594 containerd[1592]: 2025-07-10 00:22:47.924 [INFO][4718] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:22:47.989594 containerd[1592]: 2025-07-10 00:22:47.931 [INFO][4718] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.81703af5f61d8367fee33ca20084018be8afe26ca9d716db07c911ffa83a1cec" host="localhost" Jul 10 00:22:47.989594 containerd[1592]: 2025-07-10 00:22:47.935 [INFO][4718] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:22:47.989594 containerd[1592]: 2025-07-10 00:22:47.940 [INFO][4718] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:22:47.989594 containerd[1592]: 2025-07-10 00:22:47.941 [INFO][4718] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:22:47.989594 containerd[1592]: 2025-07-10 00:22:47.943 [INFO][4718] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:22:47.989594 containerd[1592]: 2025-07-10 00:22:47.943 [INFO][4718] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.81703af5f61d8367fee33ca20084018be8afe26ca9d716db07c911ffa83a1cec" host="localhost" Jul 10 00:22:47.989808 containerd[1592]: 2025-07-10 00:22:47.945 [INFO][4718] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.81703af5f61d8367fee33ca20084018be8afe26ca9d716db07c911ffa83a1cec Jul 10 00:22:47.989808 containerd[1592]: 2025-07-10 00:22:47.948 [INFO][4718] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.81703af5f61d8367fee33ca20084018be8afe26ca9d716db07c911ffa83a1cec" host="localhost" Jul 10 00:22:47.989808 containerd[1592]: 2025-07-10 00:22:47.955 [INFO][4718] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.81703af5f61d8367fee33ca20084018be8afe26ca9d716db07c911ffa83a1cec" host="localhost" Jul 10 00:22:47.989808 containerd[1592]: 2025-07-10 00:22:47.956 [INFO][4718] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.81703af5f61d8367fee33ca20084018be8afe26ca9d716db07c911ffa83a1cec" host="localhost" Jul 10 00:22:47.989808 containerd[1592]: 2025-07-10 00:22:47.956 [INFO][4718] ipam/ipam_plugin.go 374: 
Released host-wide IPAM lock. Jul 10 00:22:47.989808 containerd[1592]: 2025-07-10 00:22:47.956 [INFO][4718] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="81703af5f61d8367fee33ca20084018be8afe26ca9d716db07c911ffa83a1cec" HandleID="k8s-pod-network.81703af5f61d8367fee33ca20084018be8afe26ca9d716db07c911ffa83a1cec" Workload="localhost-k8s-coredns--7c65d6cfc9--4jmdk-eth0" Jul 10 00:22:47.989949 containerd[1592]: 2025-07-10 00:22:47.959 [INFO][4704] cni-plugin/k8s.go 418: Populated endpoint ContainerID="81703af5f61d8367fee33ca20084018be8afe26ca9d716db07c911ffa83a1cec" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4jmdk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4jmdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--4jmdk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cf84b0f9-0fcd-430e-b845-752449dc2741", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 22, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-4jmdk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96709789df6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:22:47.990478 containerd[1592]: 2025-07-10 00:22:47.960 [INFO][4704] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="81703af5f61d8367fee33ca20084018be8afe26ca9d716db07c911ffa83a1cec" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4jmdk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4jmdk-eth0" Jul 10 00:22:47.990478 containerd[1592]: 2025-07-10 00:22:47.960 [INFO][4704] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96709789df6 ContainerID="81703af5f61d8367fee33ca20084018be8afe26ca9d716db07c911ffa83a1cec" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4jmdk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4jmdk-eth0" Jul 10 00:22:47.990478 containerd[1592]: 2025-07-10 00:22:47.963 [INFO][4704] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="81703af5f61d8367fee33ca20084018be8afe26ca9d716db07c911ffa83a1cec" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4jmdk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4jmdk-eth0" Jul 
10 00:22:47.990552 containerd[1592]: 2025-07-10 00:22:47.963 [INFO][4704] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="81703af5f61d8367fee33ca20084018be8afe26ca9d716db07c911ffa83a1cec" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4jmdk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4jmdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--4jmdk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cf84b0f9-0fcd-430e-b845-752449dc2741", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 22, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"81703af5f61d8367fee33ca20084018be8afe26ca9d716db07c911ffa83a1cec", Pod:"coredns-7c65d6cfc9-4jmdk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96709789df6", MAC:"3e:31:46:cb:81:d5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:22:47.990552 containerd[1592]: 2025-07-10 00:22:47.986 [INFO][4704] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="81703af5f61d8367fee33ca20084018be8afe26ca9d716db07c911ffa83a1cec" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4jmdk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4jmdk-eth0" Jul 10 00:22:48.025254 systemd-networkd[1493]: calia4d6d8f8955: Gained IPv6LL Jul 10 00:22:48.088400 systemd-networkd[1493]: vxlan.calico: Gained IPv6LL Jul 10 00:22:48.341766 containerd[1592]: time="2025-07-10T00:22:48.341682032Z" level=info msg="connecting to shim 81703af5f61d8367fee33ca20084018be8afe26ca9d716db07c911ffa83a1cec" address="unix:///run/containerd/s/8a2ea839634c1196deac4328a0185ce083356c14c7376939d72febb0ceef799a" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:22:48.373181 systemd[1]: Started cri-containerd-81703af5f61d8367fee33ca20084018be8afe26ca9d716db07c911ffa83a1cec.scope - libcontainer container 81703af5f61d8367fee33ca20084018be8afe26ca9d716db07c911ffa83a1cec. 
Jul 10 00:22:48.392697 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:22:48.472319 systemd-networkd[1493]: cali3629b2caff5: Gained IPv6LL Jul 10 00:22:48.691339 containerd[1592]: time="2025-07-10T00:22:48.691190529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4jmdk,Uid:cf84b0f9-0fcd-430e-b845-752449dc2741,Namespace:kube-system,Attempt:0,} returns sandbox id \"81703af5f61d8367fee33ca20084018be8afe26ca9d716db07c911ffa83a1cec\"" Jul 10 00:22:48.698578 containerd[1592]: time="2025-07-10T00:22:48.698522543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:48.700139 containerd[1592]: time="2025-07-10T00:22:48.700038834Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 10 00:22:48.701729 containerd[1592]: time="2025-07-10T00:22:48.701672351Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:48.704114 containerd[1592]: time="2025-07-10T00:22:48.704084537Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:48.705237 containerd[1592]: time="2025-07-10T00:22:48.704741644Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.943818262s" Jul 10 00:22:48.705237 containerd[1592]: time="2025-07-10T00:22:48.704791419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 10 00:22:48.705936 containerd[1592]: time="2025-07-10T00:22:48.705908150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 00:22:48.706550 containerd[1592]: time="2025-07-10T00:22:48.706513697Z" level=info msg="CreateContainer within sandbox \"81703af5f61d8367fee33ca20084018be8afe26ca9d716db07c911ffa83a1cec\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:22:48.706858 containerd[1592]: time="2025-07-10T00:22:48.706826820Z" level=info msg="CreateContainer within sandbox \"ecebe68e8a6ddaf4f2ba2c67f16c4b4ad26c80ca5d4def8ee9ac75bdb337b7c5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 10 00:22:48.729039 containerd[1592]: time="2025-07-10T00:22:48.728973183Z" level=info msg="Container a124ad08bddeb0cce1cbec434c164e4bfe351c25660d76d1c1be9c24c6b4f11d: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:48.729924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1587810480.mount: Deactivated successfully. 
Jul 10 00:22:48.732758 containerd[1592]: time="2025-07-10T00:22:48.732708088Z" level=info msg="Container 79cbfe0cc038e1c64bb62faa012b51a6983463de02c79fc5f09eb1d6497a6364: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:48.739536 containerd[1592]: time="2025-07-10T00:22:48.739489462Z" level=info msg="CreateContainer within sandbox \"81703af5f61d8367fee33ca20084018be8afe26ca9d716db07c911ffa83a1cec\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a124ad08bddeb0cce1cbec434c164e4bfe351c25660d76d1c1be9c24c6b4f11d\"" Jul 10 00:22:48.740282 containerd[1592]: time="2025-07-10T00:22:48.740136528Z" level=info msg="StartContainer for \"a124ad08bddeb0cce1cbec434c164e4bfe351c25660d76d1c1be9c24c6b4f11d\"" Jul 10 00:22:48.741437 containerd[1592]: time="2025-07-10T00:22:48.741393700Z" level=info msg="connecting to shim a124ad08bddeb0cce1cbec434c164e4bfe351c25660d76d1c1be9c24c6b4f11d" address="unix:///run/containerd/s/8a2ea839634c1196deac4328a0185ce083356c14c7376939d72febb0ceef799a" protocol=ttrpc version=3 Jul 10 00:22:48.756683 containerd[1592]: time="2025-07-10T00:22:48.756620263Z" level=info msg="CreateContainer within sandbox \"ecebe68e8a6ddaf4f2ba2c67f16c4b4ad26c80ca5d4def8ee9ac75bdb337b7c5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"79cbfe0cc038e1c64bb62faa012b51a6983463de02c79fc5f09eb1d6497a6364\"" Jul 10 00:22:48.757300 containerd[1592]: time="2025-07-10T00:22:48.757244886Z" level=info msg="StartContainer for \"79cbfe0cc038e1c64bb62faa012b51a6983463de02c79fc5f09eb1d6497a6364\"" Jul 10 00:22:48.759123 containerd[1592]: time="2025-07-10T00:22:48.759062988Z" level=info msg="connecting to shim 79cbfe0cc038e1c64bb62faa012b51a6983463de02c79fc5f09eb1d6497a6364" address="unix:///run/containerd/s/5b5e4dcf9a8179b7ccab315d93d0d7394b57d79d11901a3d4bdd073904b23143" protocol=ttrpc version=3 Jul 10 00:22:48.766295 systemd[1]: Started cri-containerd-a124ad08bddeb0cce1cbec434c164e4bfe351c25660d76d1c1be9c24c6b4f11d.scope - libcontainer container a124ad08bddeb0cce1cbec434c164e4bfe351c25660d76d1c1be9c24c6b4f11d. Jul 10 00:22:48.782200 systemd[1]: Started cri-containerd-79cbfe0cc038e1c64bb62faa012b51a6983463de02c79fc5f09eb1d6497a6364.scope - libcontainer container 79cbfe0cc038e1c64bb62faa012b51a6983463de02c79fc5f09eb1d6497a6364. 
Jul 10 00:22:48.839278 containerd[1592]: time="2025-07-10T00:22:48.839218020Z" level=info msg="StartContainer for \"a124ad08bddeb0cce1cbec434c164e4bfe351c25660d76d1c1be9c24c6b4f11d\" returns successfully" Jul 10 00:22:48.852902 containerd[1592]: time="2025-07-10T00:22:48.852825473Z" level=info msg="StartContainer for \"79cbfe0cc038e1c64bb62faa012b51a6983463de02c79fc5f09eb1d6497a6364\" returns successfully" Jul 10 00:22:48.984255 systemd-networkd[1493]: cali9bc72413086: Gained IPv6LL Jul 10 00:22:49.099265 kubelet[2778]: I0710 00:22:49.098938 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-4jmdk" podStartSLOduration=47.098915636 podStartE2EDuration="47.098915636s" podCreationTimestamp="2025-07-10 00:22:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:22:49.098425112 +0000 UTC m=+53.352318952" watchObservedRunningTime="2025-07-10 00:22:49.098915636 +0000 UTC m=+53.352809476" Jul 10 00:22:49.496290 systemd-networkd[1493]: cali96709789df6: Gained IPv6LL Jul 10 00:22:51.376700 systemd[1]: Started sshd@9-10.0.0.84:22-10.0.0.1:48822.service - OpenSSH per-connection server daemon (10.0.0.1:48822). Jul 10 00:22:51.510912 sshd[4859]: Accepted publickey for core from 10.0.0.1 port 48822 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:22:51.513257 sshd-session[4859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:22:51.519541 systemd-logind[1573]: New session 9 of user core. Jul 10 00:22:51.530242 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 10 00:22:51.671089 sshd[4861]: Connection closed by 10.0.0.1 port 48822 Jul 10 00:22:51.674834 sshd-session[4859]: pam_unix(sshd:session): session closed for user core Jul 10 00:22:51.681135 systemd[1]: sshd@9-10.0.0.84:22-10.0.0.1:48822.service: Deactivated successfully. Jul 10 00:22:51.683557 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 00:22:51.684596 systemd-logind[1573]: Session 9 logged out. Waiting for processes to exit. Jul 10 00:22:51.685974 systemd-logind[1573]: Removed session 9. 
Jul 10 00:22:53.102680 containerd[1592]: time="2025-07-10T00:22:53.102601295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:53.104247 containerd[1592]: time="2025-07-10T00:22:53.104131382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 10 00:22:53.106541 containerd[1592]: time="2025-07-10T00:22:53.106429265Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:53.109151 containerd[1592]: time="2025-07-10T00:22:53.109093719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:53.109926 containerd[1592]: time="2025-07-10T00:22:53.109881062Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 4.403934176s" Jul 10 00:22:53.109926 containerd[1592]: time="2025-07-10T00:22:53.109920567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 10 00:22:53.111031 containerd[1592]: time="2025-07-10T00:22:53.110982997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 10 00:22:53.112442 containerd[1592]: time="2025-07-10T00:22:53.112406870Z" level=info msg="CreateContainer within sandbox \"fb3b5d0dbab50b96d9d555edecfab2a38ad7b7cd11a3ec62859f00c647b2c21c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 00:22:53.125454 containerd[1592]: time="2025-07-10T00:22:53.123644044Z" level=info msg="Container 05196aad6663d3b662e4785f2c5e641b32235f51ab1143e474a228cbda129a7e: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:53.271536 containerd[1592]: time="2025-07-10T00:22:53.271471089Z" level=info msg="CreateContainer within sandbox \"fb3b5d0dbab50b96d9d555edecfab2a38ad7b7cd11a3ec62859f00c647b2c21c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"05196aad6663d3b662e4785f2c5e641b32235f51ab1143e474a228cbda129a7e\"" Jul 10 00:22:53.272146 containerd[1592]: time="2025-07-10T00:22:53.272107621Z" level=info msg="StartContainer for \"05196aad6663d3b662e4785f2c5e641b32235f51ab1143e474a228cbda129a7e\"" Jul 10 00:22:53.273341 containerd[1592]: time="2025-07-10T00:22:53.273303387Z" level=info msg="connecting to shim 05196aad6663d3b662e4785f2c5e641b32235f51ab1143e474a228cbda129a7e" address="unix:///run/containerd/s/bb96ce77091dbe4b8f9acedfda5dfa56f00db399f1e9256ca4c94806451260ff" protocol=ttrpc version=3 Jul 10 00:22:53.304215 systemd[1]: Started cri-containerd-05196aad6663d3b662e4785f2c5e641b32235f51ab1143e474a228cbda129a7e.scope - libcontainer container 05196aad6663d3b662e4785f2c5e641b32235f51ab1143e474a228cbda129a7e. 
Jul 10 00:22:53.372251 containerd[1592]: time="2025-07-10T00:22:53.371984629Z" level=info msg="StartContainer for \"05196aad6663d3b662e4785f2c5e641b32235f51ab1143e474a228cbda129a7e\" returns successfully" Jul 10 00:22:54.158689 kubelet[2778]: I0710 00:22:54.158596 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55d6ff7666-2gc22" podStartSLOduration=32.841166694 podStartE2EDuration="39.158531131s" podCreationTimestamp="2025-07-10 00:22:15 +0000 UTC" firstStartedPulling="2025-07-10 00:22:46.79343622 +0000 UTC m=+51.047330060" lastFinishedPulling="2025-07-10 00:22:53.110800647 +0000 UTC m=+57.364694497" observedRunningTime="2025-07-10 00:22:54.157331258 +0000 UTC m=+58.411225109" watchObservedRunningTime="2025-07-10 00:22:54.158531131 +0000 UTC m=+58.412424971" Jul 10 00:22:55.084194 kubelet[2778]: I0710 00:22:55.084152 2778 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:22:55.882650 containerd[1592]: time="2025-07-10T00:22:55.882575993Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:55.884922 containerd[1592]: time="2025-07-10T00:22:55.884864300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 10 00:22:55.886605 containerd[1592]: time="2025-07-10T00:22:55.886548720Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:55.889363 containerd[1592]: time="2025-07-10T00:22:55.889307480Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:55.889900 containerd[1592]: time="2025-07-10T00:22:55.889837636Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 2.778822628s" Jul 10 00:22:55.889900 containerd[1592]: time="2025-07-10T00:22:55.889888915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 10 00:22:55.891532 containerd[1592]: time="2025-07-10T00:22:55.891486348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 10 00:22:55.906341 containerd[1592]: time="2025-07-10T00:22:55.905949742Z" level=info msg="CreateContainer within sandbox \"8ecc5365fc95bcbddbe33e7bccedc739358f0ba4b5ba483c2af80c39bbd5beb1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 10 00:22:55.918193 containerd[1592]: time="2025-07-10T00:22:55.918145688Z" level=info msg="Container f958cd7e34ed2204458d4b33de3ff3f67651b8059e055ef7ebb92fea93850875: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:55.931457 containerd[1592]: time="2025-07-10T00:22:55.931408239Z" level=info msg="CreateContainer within sandbox \"8ecc5365fc95bcbddbe33e7bccedc739358f0ba4b5ba483c2af80c39bbd5beb1\" for 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f958cd7e34ed2204458d4b33de3ff3f67651b8059e055ef7ebb92fea93850875\"" Jul 10 00:22:55.933429 containerd[1592]: time="2025-07-10T00:22:55.931848713Z" level=info msg="StartContainer for \"f958cd7e34ed2204458d4b33de3ff3f67651b8059e055ef7ebb92fea93850875\"" Jul 10 00:22:55.933429 containerd[1592]: time="2025-07-10T00:22:55.932994890Z" level=info msg="connecting to shim f958cd7e34ed2204458d4b33de3ff3f67651b8059e055ef7ebb92fea93850875" address="unix:///run/containerd/s/2b0ff685cdc2dd670f5ca4675bb465ac6cea8160230c1f326786436a73c41912" protocol=ttrpc version=3 Jul 10 00:22:55.986287 systemd[1]: Started cri-containerd-f958cd7e34ed2204458d4b33de3ff3f67651b8059e055ef7ebb92fea93850875.scope - libcontainer container f958cd7e34ed2204458d4b33de3ff3f67651b8059e055ef7ebb92fea93850875. Jul 10 00:22:56.043153 containerd[1592]: time="2025-07-10T00:22:56.043087206Z" level=info msg="StartContainer for \"f958cd7e34ed2204458d4b33de3ff3f67651b8059e055ef7ebb92fea93850875\" returns successfully" Jul 10 00:22:56.110053 kubelet[2778]: I0710 00:22:56.109921 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-789d9b6b96-xbrbx" podStartSLOduration=29.015822314 podStartE2EDuration="38.109854718s" podCreationTimestamp="2025-07-10 00:22:18 +0000 UTC" firstStartedPulling="2025-07-10 00:22:46.796817433 +0000 UTC m=+51.050711263" lastFinishedPulling="2025-07-10 00:22:55.890849827 +0000 UTC m=+60.144743667" observedRunningTime="2025-07-10 00:22:56.10876036 +0000 UTC m=+60.362654200" watchObservedRunningTime="2025-07-10 00:22:56.109854718 +0000 UTC m=+60.363748558" Jul 10 00:22:56.685594 systemd[1]: Started sshd@10-10.0.0.84:22-10.0.0.1:48830.service - OpenSSH per-connection server daemon (10.0.0.1:48830). Jul 10 00:22:56.751607 sshd[4981]: Accepted publickey for core from 10.0.0.1 port 48830 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:22:56.753472 sshd-session[4981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:22:56.758964 systemd-logind[1573]: New session 10 of user core. Jul 10 00:22:56.774183 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 10 00:22:56.907214 sshd[4983]: Connection closed by 10.0.0.1 port 48830 Jul 10 00:22:56.907570 sshd-session[4981]: pam_unix(sshd:session): session closed for user core Jul 10 00:22:56.911812 systemd[1]: sshd@10-10.0.0.84:22-10.0.0.1:48830.service: Deactivated successfully. Jul 10 00:22:56.913913 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 00:22:56.914792 systemd-logind[1573]: Session 10 logged out. Waiting for processes to exit. Jul 10 00:22:56.916764 systemd-logind[1573]: Removed session 10. 
Jul 10 00:22:57.132248 containerd[1592]: time="2025-07-10T00:22:57.132197625Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f958cd7e34ed2204458d4b33de3ff3f67651b8059e055ef7ebb92fea93850875\" id:\"b2a14ece52929b91fe9ecd401b6784c9f3fedceed77c2b0ceb324ef9035618c5\" pid:5013 exited_at:{seconds:1752106977 nanos:131884656}" Jul 10 00:22:57.858974 containerd[1592]: time="2025-07-10T00:22:57.858919893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tz9jl,Uid:f29acab4-bf4d-4677-9779-63d312b51c4b,Namespace:kube-system,Attempt:0,}" Jul 10 00:22:58.033573 systemd-networkd[1493]: cali282154e9931: Link UP Jul 10 00:22:58.034449 systemd-networkd[1493]: cali282154e9931: Gained carrier Jul 10 00:22:58.084526 containerd[1592]: 2025-07-10 00:22:57.938 [INFO][5029] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--tz9jl-eth0 coredns-7c65d6cfc9- kube-system f29acab4-bf4d-4677-9779-63d312b51c4b 802 0 2025-07-10 00:22:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-tz9jl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali282154e9931 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0a3d0d941576c39e67bf3463e75d2e7999b9cd7c353b69c070ffb0c81e516734" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tz9jl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tz9jl-" Jul 10 00:22:58.084526 containerd[1592]: 2025-07-10 00:22:57.939 [INFO][5029] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0a3d0d941576c39e67bf3463e75d2e7999b9cd7c353b69c070ffb0c81e516734" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tz9jl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tz9jl-eth0" Jul 10 00:22:58.084526 containerd[1592]: 2025-07-10 00:22:57.964 [INFO][5044] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a3d0d941576c39e67bf3463e75d2e7999b9cd7c353b69c070ffb0c81e516734" HandleID="k8s-pod-network.0a3d0d941576c39e67bf3463e75d2e7999b9cd7c353b69c070ffb0c81e516734" Workload="localhost-k8s-coredns--7c65d6cfc9--tz9jl-eth0" Jul 10 00:22:58.084526 containerd[1592]: 2025-07-10 00:22:57.964 [INFO][5044] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0a3d0d941576c39e67bf3463e75d2e7999b9cd7c353b69c070ffb0c81e516734" HandleID="k8s-pod-network.0a3d0d941576c39e67bf3463e75d2e7999b9cd7c353b69c070ffb0c81e516734" Workload="localhost-k8s-coredns--7c65d6cfc9--tz9jl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000369740), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-tz9jl", "timestamp":"2025-07-10 00:22:57.964270571 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:22:58.084526 containerd[1592]: 2025-07-10 00:22:57.964 [INFO][5044] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:22:58.084526 containerd[1592]: 2025-07-10 00:22:57.964 [INFO][5044] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:22:58.084526 containerd[1592]: 2025-07-10 00:22:57.964 [INFO][5044] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:22:58.084526 containerd[1592]: 2025-07-10 00:22:57.971 [INFO][5044] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0a3d0d941576c39e67bf3463e75d2e7999b9cd7c353b69c070ffb0c81e516734" host="localhost" Jul 10 00:22:58.084526 containerd[1592]: 2025-07-10 00:22:57.976 [INFO][5044] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:22:58.084526 containerd[1592]: 2025-07-10 00:22:57.979 [INFO][5044] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:22:58.084526 containerd[1592]: 2025-07-10 00:22:57.981 [INFO][5044] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:22:58.084526 containerd[1592]: 2025-07-10 00:22:57.983 [INFO][5044] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:22:58.084526 containerd[1592]: 2025-07-10 00:22:57.983 [INFO][5044] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0a3d0d941576c39e67bf3463e75d2e7999b9cd7c353b69c070ffb0c81e516734" host="localhost" Jul 10 00:22:58.084526 containerd[1592]: 2025-07-10 00:22:57.984 [INFO][5044] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0a3d0d941576c39e67bf3463e75d2e7999b9cd7c353b69c070ffb0c81e516734 Jul 10 00:22:58.084526 containerd[1592]: 2025-07-10 00:22:57.992 [INFO][5044] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0a3d0d941576c39e67bf3463e75d2e7999b9cd7c353b69c070ffb0c81e516734" host="localhost" Jul 10 00:22:58.084526 containerd[1592]: 2025-07-10 00:22:58.026 [INFO][5044] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.0a3d0d941576c39e67bf3463e75d2e7999b9cd7c353b69c070ffb0c81e516734" host="localhost" Jul 10 00:22:58.084526 containerd[1592]: 2025-07-10 00:22:58.026 [INFO][5044] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.0a3d0d941576c39e67bf3463e75d2e7999b9cd7c353b69c070ffb0c81e516734" host="localhost" Jul 10 00:22:58.084526 containerd[1592]: 2025-07-10 00:22:58.026 [INFO][5044] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:22:58.084526 containerd[1592]: 2025-07-10 00:22:58.026 [INFO][5044] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="0a3d0d941576c39e67bf3463e75d2e7999b9cd7c353b69c070ffb0c81e516734" HandleID="k8s-pod-network.0a3d0d941576c39e67bf3463e75d2e7999b9cd7c353b69c070ffb0c81e516734" Workload="localhost-k8s-coredns--7c65d6cfc9--tz9jl-eth0" Jul 10 00:22:58.085367 containerd[1592]: 2025-07-10 00:22:58.029 [INFO][5029] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0a3d0d941576c39e67bf3463e75d2e7999b9cd7c353b69c070ffb0c81e516734" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tz9jl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tz9jl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--tz9jl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f29acab4-bf4d-4677-9779-63d312b51c4b", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 22, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-tz9jl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali282154e9931", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:22:58.085367 containerd[1592]: 2025-07-10 00:22:58.030 [INFO][5029] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="0a3d0d941576c39e67bf3463e75d2e7999b9cd7c353b69c070ffb0c81e516734" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tz9jl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tz9jl-eth0" Jul 10 00:22:58.085367 containerd[1592]: 2025-07-10 00:22:58.030 [INFO][5029] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali282154e9931 ContainerID="0a3d0d941576c39e67bf3463e75d2e7999b9cd7c353b69c070ffb0c81e516734" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tz9jl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tz9jl-eth0" Jul 10 00:22:58.085367 containerd[1592]: 2025-07-10 00:22:58.034 [INFO][5029] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a3d0d941576c39e67bf3463e75d2e7999b9cd7c353b69c070ffb0c81e516734" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tz9jl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tz9jl-eth0" Jul 10 00:22:58.085367 
containerd[1592]: 2025-07-10 00:22:58.035 [INFO][5029] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0a3d0d941576c39e67bf3463e75d2e7999b9cd7c353b69c070ffb0c81e516734" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tz9jl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tz9jl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--tz9jl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f29acab4-bf4d-4677-9779-63d312b51c4b", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 22, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a3d0d941576c39e67bf3463e75d2e7999b9cd7c353b69c070ffb0c81e516734", Pod:"coredns-7c65d6cfc9-tz9jl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali282154e9931", MAC:"e2:2a:8f:f9:01:62", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:22:58.085367 containerd[1592]: 2025-07-10 00:22:58.080 [INFO][5029] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0a3d0d941576c39e67bf3463e75d2e7999b9cd7c353b69c070ffb0c81e516734" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tz9jl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--tz9jl-eth0" Jul 10 00:22:58.165600 containerd[1592]: time="2025-07-10T00:22:58.165420190Z" level=info msg="connecting to shim 0a3d0d941576c39e67bf3463e75d2e7999b9cd7c353b69c070ffb0c81e516734" address="unix:///run/containerd/s/eb498872e34bb11ae5b3b1421669374e8c5ad749b475da1eb8558ec8451e9f7a" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:22:58.199166 systemd[1]: Started cri-containerd-0a3d0d941576c39e67bf3463e75d2e7999b9cd7c353b69c070ffb0c81e516734.scope - libcontainer container 0a3d0d941576c39e67bf3463e75d2e7999b9cd7c353b69c070ffb0c81e516734. 
Jul 10 00:22:58.215386 containerd[1592]: time="2025-07-10T00:22:58.215325552Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:58.216381 containerd[1592]: time="2025-07-10T00:22:58.216319984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 10 00:22:58.216692 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:22:58.218507 containerd[1592]: time="2025-07-10T00:22:58.218456123Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:58.221108 containerd[1592]: time="2025-07-10T00:22:58.221080889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:22:58.221704 containerd[1592]: time="2025-07-10T00:22:58.221681538Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 2.330145795s" Jul 10 00:22:58.221782 containerd[1592]: time="2025-07-10T00:22:58.221768966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 10 00:22:58.224529 containerd[1592]: time="2025-07-10T00:22:58.224502618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 10 00:22:58.224735 containerd[1592]: time="2025-07-10T00:22:58.224715225Z" level=info msg="CreateContainer within sandbox \"39c784068b4cb10ccbc2b7504d000fc479b8c654f27fff432bec13a503bf4e28\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 10 00:22:58.236384 containerd[1592]: time="2025-07-10T00:22:58.236339577Z" level=info msg="Container 6f99589ac6c93dd29b0ab98d64648fb42ccf50dd0e31bbd03702ff84dcdd2800: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:58.253173 containerd[1592]: time="2025-07-10T00:22:58.252767644Z" level=info msg="CreateContainer within sandbox \"39c784068b4cb10ccbc2b7504d000fc479b8c654f27fff432bec13a503bf4e28\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"6f99589ac6c93dd29b0ab98d64648fb42ccf50dd0e31bbd03702ff84dcdd2800\"" Jul 10 00:22:58.256098 containerd[1592]: time="2025-07-10T00:22:58.256034328Z" level=info msg="StartContainer for \"6f99589ac6c93dd29b0ab98d64648fb42ccf50dd0e31bbd03702ff84dcdd2800\"" Jul 10 00:22:58.260698 containerd[1592]: time="2025-07-10T00:22:58.260592904Z" level=info msg="connecting to shim 6f99589ac6c93dd29b0ab98d64648fb42ccf50dd0e31bbd03702ff84dcdd2800" address="unix:///run/containerd/s/40bf94580479b6668ed4752a4ecb90fe03e0d00e5cdd75a3f0bece10478c7f16" protocol=ttrpc version=3 Jul 10 00:22:58.272604 containerd[1592]: time="2025-07-10T00:22:58.272544432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tz9jl,Uid:f29acab4-bf4d-4677-9779-63d312b51c4b,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"0a3d0d941576c39e67bf3463e75d2e7999b9cd7c353b69c070ffb0c81e516734\"" Jul 10 00:22:58.277158 containerd[1592]: time="2025-07-10T00:22:58.277120492Z" level=info msg="CreateContainer within sandbox \"0a3d0d941576c39e67bf3463e75d2e7999b9cd7c353b69c070ffb0c81e516734\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:22:58.293168 systemd[1]: Started cri-containerd-6f99589ac6c93dd29b0ab98d64648fb42ccf50dd0e31bbd03702ff84dcdd2800.scope - libcontainer container 6f99589ac6c93dd29b0ab98d64648fb42ccf50dd0e31bbd03702ff84dcdd2800. Jul 10 00:22:58.497672 containerd[1592]: time="2025-07-10T00:22:58.497630653Z" level=info msg="StartContainer for \"6f99589ac6c93dd29b0ab98d64648fb42ccf50dd0e31bbd03702ff84dcdd2800\" returns successfully" Jul 10 00:22:58.649514 containerd[1592]: time="2025-07-10T00:22:58.649453960Z" level=info msg="Container d89e53aab95db08d19db96418b67b293739501bc48ace32efafd48c43349e134: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:22:58.739950 containerd[1592]: time="2025-07-10T00:22:58.739867223Z" level=info msg="CreateContainer within sandbox \"0a3d0d941576c39e67bf3463e75d2e7999b9cd7c353b69c070ffb0c81e516734\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d89e53aab95db08d19db96418b67b293739501bc48ace32efafd48c43349e134\"" Jul 10 00:22:58.745599 containerd[1592]: time="2025-07-10T00:22:58.745504605Z" level=info msg="StartContainer for \"d89e53aab95db08d19db96418b67b293739501bc48ace32efafd48c43349e134\"" Jul 10 00:22:58.746646 containerd[1592]: time="2025-07-10T00:22:58.746580645Z" level=info msg="connecting to shim d89e53aab95db08d19db96418b67b293739501bc48ace32efafd48c43349e134" address="unix:///run/containerd/s/eb498872e34bb11ae5b3b1421669374e8c5ad749b475da1eb8558ec8451e9f7a" protocol=ttrpc version=3 Jul 10 00:22:58.780371 systemd[1]: Started cri-containerd-d89e53aab95db08d19db96418b67b293739501bc48ace32efafd48c43349e134.scope - libcontainer container d89e53aab95db08d19db96418b67b293739501bc48ace32efafd48c43349e134. Jul 10 00:22:58.816867 containerd[1592]: time="2025-07-10T00:22:58.816812696Z" level=info msg="StartContainer for \"d89e53aab95db08d19db96418b67b293739501bc48ace32efafd48c43349e134\" returns successfully" Jul 10 00:22:59.123158 kubelet[2778]: I0710 00:22:59.122877 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-tz9jl" podStartSLOduration=57.122848811 podStartE2EDuration="57.122848811s" podCreationTimestamp="2025-07-10 00:22:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:22:59.109666821 +0000 UTC m=+63.363560661" watchObservedRunningTime="2025-07-10 00:22:59.122848811 +0000 UTC m=+63.376742651" Jul 10 00:22:59.288281 systemd-networkd[1493]: cali282154e9931: Gained IPv6LL Jul 10 00:23:00.510947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2922019192.mount: Deactivated successfully. 
Jul 10 00:23:01.502469 containerd[1592]: time="2025-07-10T00:23:01.502412698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:23:01.503539 containerd[1592]: time="2025-07-10T00:23:01.503483363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 10 00:23:01.504993 containerd[1592]: time="2025-07-10T00:23:01.504959665Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:23:01.507867 containerd[1592]: time="2025-07-10T00:23:01.507797410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:23:01.508646 containerd[1592]: time="2025-07-10T00:23:01.508595254Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 3.284062438s" Jul 10 00:23:01.508646 containerd[1592]: time="2025-07-10T00:23:01.508648075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 10 00:23:01.509846 containerd[1592]: time="2025-07-10T00:23:01.509813592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 00:23:01.511038 containerd[1592]: time="2025-07-10T00:23:01.510914246Z" level=info msg="CreateContainer within sandbox \"4aea71ec8c2282cd64b3f483fb9fe549955d13379936d8ed55dd5f24be93aef4\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 10 00:23:01.536416 containerd[1592]: time="2025-07-10T00:23:01.536346544Z" level=info msg="Container 933d22474b64410ca11b5388fd497d02f3b43d5d51d3540749cce97ff1ec8295: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:23:01.547850 containerd[1592]: time="2025-07-10T00:23:01.547786129Z" level=info msg="CreateContainer within sandbox \"4aea71ec8c2282cd64b3f483fb9fe549955d13379936d8ed55dd5f24be93aef4\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"933d22474b64410ca11b5388fd497d02f3b43d5d51d3540749cce97ff1ec8295\"" Jul 10 00:23:01.548496 containerd[1592]: time="2025-07-10T00:23:01.548444758Z" level=info msg="StartContainer for \"933d22474b64410ca11b5388fd497d02f3b43d5d51d3540749cce97ff1ec8295\"" Jul 10 00:23:01.549701 containerd[1592]: time="2025-07-10T00:23:01.549666653Z" level=info msg="connecting to shim 933d22474b64410ca11b5388fd497d02f3b43d5d51d3540749cce97ff1ec8295" address="unix:///run/containerd/s/8e842335713ccf777828a275bedb13bb39dc2ba5694420b5bfee3b764456ecac" protocol=ttrpc version=3 Jul 10 00:23:01.591266 systemd[1]: Started cri-containerd-933d22474b64410ca11b5388fd497d02f3b43d5d51d3540749cce97ff1ec8295.scope - libcontainer container 933d22474b64410ca11b5388fd497d02f3b43d5d51d3540749cce97ff1ec8295. 
Jul 10 00:23:01.658767 containerd[1592]: time="2025-07-10T00:23:01.658687194Z" level=info msg="StartContainer for \"933d22474b64410ca11b5388fd497d02f3b43d5d51d3540749cce97ff1ec8295\" returns successfully" Jul 10 00:23:01.684828 containerd[1592]: time="2025-07-10T00:23:01.684769605Z" level=info msg="TaskExit event in podsandbox handler container_id:\"690753c84b64d36fa69fb38084cf2b2ef4069fc2887d68630735d70bd78449bd\" id:\"7226b226ec3d5d14a44c1fe9a8a9a6a10fc13578539252904e12f419a2331191\" pid:5226 exited_at:{seconds:1752106981 nanos:684397433}" Jul 10 00:23:01.929882 systemd[1]: Started sshd@11-10.0.0.84:22-10.0.0.1:55120.service - OpenSSH per-connection server daemon (10.0.0.1:55120). Jul 10 00:23:02.021132 sshd[5263]: Accepted publickey for core from 10.0.0.1 port 55120 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:23:02.111400 sshd-session[5263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:02.117158 systemd-logind[1573]: New session 11 of user core. Jul 10 00:23:02.123235 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 10 00:23:02.207538 containerd[1592]: time="2025-07-10T00:23:02.207313467Z" level=info msg="TaskExit event in podsandbox handler container_id:\"933d22474b64410ca11b5388fd497d02f3b43d5d51d3540749cce97ff1ec8295\" id:\"1cfafb6ad4e572b60aacd07b0514195646f247bb291df87bf74ec15e9f0ffef3\" pid:5278 exit_status:1 exited_at:{seconds:1752106982 nanos:206781130}" Jul 10 00:23:02.288188 containerd[1592]: time="2025-07-10T00:23:02.288124565Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f958cd7e34ed2204458d4b33de3ff3f67651b8059e055ef7ebb92fea93850875\" id:\"72ab211a76feb5725bcf917b145cd379966fa67c30c1d50ea9234c5c49983bba\" pid:5305 exited_at:{seconds:1752106982 nanos:287666711}" Jul 10 00:23:02.298291 sshd[5279]: Connection closed by 10.0.0.1 port 55120 Jul 10 00:23:02.300048 sshd-session[5263]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:02.310561 systemd[1]: sshd@11-10.0.0.84:22-10.0.0.1:55120.service: Deactivated successfully. Jul 10 00:23:02.313474 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 00:23:02.315396 systemd-logind[1573]: Session 11 logged out. Waiting for processes to exit. Jul 10 00:23:02.321282 systemd[1]: Started sshd@12-10.0.0.84:22-10.0.0.1:55124.service - OpenSSH per-connection server daemon (10.0.0.1:55124). Jul 10 00:23:02.322157 systemd-logind[1573]: Removed session 11. Jul 10 00:23:02.378136 containerd[1592]: time="2025-07-10T00:23:02.378073300Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:23:02.395301 sshd[5331]: Accepted publickey for core from 10.0.0.1 port 55124 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:23:02.397487 sshd-session[5331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:02.403042 systemd-logind[1573]: New session 12 of user core. Jul 10 00:23:02.413197 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jul 10 00:23:02.424976 containerd[1592]: time="2025-07-10T00:23:02.424897020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 10 00:23:02.427205 containerd[1592]: time="2025-07-10T00:23:02.427158161Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 917.308279ms" Jul 10 00:23:02.427236 containerd[1592]: time="2025-07-10T00:23:02.427203357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 10 00:23:02.428117 containerd[1592]: time="2025-07-10T00:23:02.428079511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 10 00:23:02.429594 containerd[1592]: time="2025-07-10T00:23:02.429534460Z" level=info msg="CreateContainer within sandbox \"420e860ac743b8412957668b5282d8dc1878be7857d9a526e5885d975112ceae\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 00:23:02.725818 sshd[5333]: Connection closed by 10.0.0.1 port 55124 Jul 10 00:23:02.727241 sshd-session[5331]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:02.737394 systemd[1]: sshd@12-10.0.0.84:22-10.0.0.1:55124.service: Deactivated successfully. Jul 10 00:23:02.740094 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 00:23:02.741354 systemd-logind[1573]: Session 12 logged out. Waiting for processes to exit. Jul 10 00:23:02.746701 systemd[1]: Started sshd@13-10.0.0.84:22-10.0.0.1:55136.service - OpenSSH per-connection server daemon (10.0.0.1:55136). Jul 10 00:23:02.748161 systemd-logind[1573]: Removed session 12. Jul 10 00:23:02.806315 sshd[5346]: Accepted publickey for core from 10.0.0.1 port 55136 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:23:02.808489 sshd-session[5346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:02.813963 systemd-logind[1573]: New session 13 of user core. Jul 10 00:23:02.825192 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 10 00:23:02.887079 containerd[1592]: time="2025-07-10T00:23:02.886396463Z" level=info msg="Container c12efce179828ef1507698ffaa85da5bf64ff538540d5f71421a90da0b962c3d: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:23:03.071177 sshd[5348]: Connection closed by 10.0.0.1 port 55136 Jul 10 00:23:03.073346 sshd-session[5346]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:03.077823 systemd[1]: sshd@13-10.0.0.84:22-10.0.0.1:55136.service: Deactivated successfully. Jul 10 00:23:03.080158 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 00:23:03.081022 systemd-logind[1573]: Session 13 logged out. Waiting for processes to exit. Jul 10 00:23:03.083270 systemd-logind[1573]: Removed session 13. 
Jul 10 00:23:03.105707 kubelet[2778]: I0710 00:23:03.105661 2778 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:23:03.194953 containerd[1592]: time="2025-07-10T00:23:03.194896438Z" level=info msg="TaskExit event in podsandbox handler container_id:\"933d22474b64410ca11b5388fd497d02f3b43d5d51d3540749cce97ff1ec8295\" id:\"35485a6ddc564e640e1d11a26a2af4dd576ae155b7fe370fea4e7d7a0dfaba23\" pid:5373 exit_status:1 exited_at:{seconds:1752106983 nanos:194574323}" Jul 10 00:23:03.345841 containerd[1592]: time="2025-07-10T00:23:03.345659515Z" level=info msg="CreateContainer within sandbox \"420e860ac743b8412957668b5282d8dc1878be7857d9a526e5885d975112ceae\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c12efce179828ef1507698ffaa85da5bf64ff538540d5f71421a90da0b962c3d\"" Jul 10 00:23:03.346630 containerd[1592]: time="2025-07-10T00:23:03.346580113Z" level=info msg="StartContainer for \"c12efce179828ef1507698ffaa85da5bf64ff538540d5f71421a90da0b962c3d\"" Jul 10 00:23:03.347986 containerd[1592]: time="2025-07-10T00:23:03.347951622Z" level=info msg="connecting to shim c12efce179828ef1507698ffaa85da5bf64ff538540d5f71421a90da0b962c3d" address="unix:///run/containerd/s/c9f6fa5cfe41a9b56b63df79ff10658ce2347d68ae9d6c0a17d34f38df62feea" protocol=ttrpc version=3 Jul 10 00:23:03.378178 systemd[1]: Started cri-containerd-c12efce179828ef1507698ffaa85da5bf64ff538540d5f71421a90da0b962c3d.scope - libcontainer container c12efce179828ef1507698ffaa85da5bf64ff538540d5f71421a90da0b962c3d. Jul 10 00:23:03.549320 containerd[1592]: time="2025-07-10T00:23:03.549268530Z" level=info msg="StartContainer for \"c12efce179828ef1507698ffaa85da5bf64ff538540d5f71421a90da0b962c3d\" returns successfully" Jul 10 00:23:03.714333 kubelet[2778]: I0710 00:23:03.712865 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-pvk8n" podStartSLOduration=32.554112082 podStartE2EDuration="46.712842551s" podCreationTimestamp="2025-07-10 00:22:17 +0000 UTC" firstStartedPulling="2025-07-10 00:22:47.350797247 +0000 UTC m=+51.604691087" lastFinishedPulling="2025-07-10 00:23:01.509527706 +0000 UTC m=+65.763421556" observedRunningTime="2025-07-10 00:23:02.182498881 +0000 UTC m=+66.436392731" watchObservedRunningTime="2025-07-10 00:23:03.712842551 +0000 UTC m=+67.966736391" Jul 10 00:23:05.118805 kubelet[2778]: I0710 00:23:05.118722 2778 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:23:06.929509 containerd[1592]: time="2025-07-10T00:23:06.929418889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:23:06.931580 containerd[1592]: time="2025-07-10T00:23:06.931522390Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 10 00:23:06.933927 containerd[1592]: time="2025-07-10T00:23:06.933862033Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:23:06.939935 containerd[1592]: time="2025-07-10T00:23:06.939801017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:23:06.940717 containerd[1592]: 
time="2025-07-10T00:23:06.940672409Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 4.512565626s" Jul 10 00:23:06.940717 containerd[1592]: time="2025-07-10T00:23:06.940711554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 10 00:23:06.942840 containerd[1592]: time="2025-07-10T00:23:06.942614964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 10 00:23:06.944157 containerd[1592]: time="2025-07-10T00:23:06.944113110Z" level=info msg="CreateContainer within sandbox \"ecebe68e8a6ddaf4f2ba2c67f16c4b4ad26c80ca5d4def8ee9ac75bdb337b7c5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 10 00:23:06.961051 containerd[1592]: time="2025-07-10T00:23:06.959275394Z" level=info msg="Container 435570630bd354bac67541e8431a98c617cf5e150f9573df6a9da927f7001d6a: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:23:06.985327 containerd[1592]: time="2025-07-10T00:23:06.985230359Z" level=info msg="CreateContainer within sandbox \"ecebe68e8a6ddaf4f2ba2c67f16c4b4ad26c80ca5d4def8ee9ac75bdb337b7c5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"435570630bd354bac67541e8431a98c617cf5e150f9573df6a9da927f7001d6a\"" Jul 10 00:23:06.985997 containerd[1592]: time="2025-07-10T00:23:06.985946945Z" level=info msg="StartContainer for \"435570630bd354bac67541e8431a98c617cf5e150f9573df6a9da927f7001d6a\"" Jul 10 00:23:06.988906 containerd[1592]: time="2025-07-10T00:23:06.988852075Z" level=info msg="connecting to shim 435570630bd354bac67541e8431a98c617cf5e150f9573df6a9da927f7001d6a" address="unix:///run/containerd/s/5b5e4dcf9a8179b7ccab315d93d0d7394b57d79d11901a3d4bdd073904b23143" protocol=ttrpc version=3 Jul 10 00:23:07.011309 systemd[1]: Started cri-containerd-435570630bd354bac67541e8431a98c617cf5e150f9573df6a9da927f7001d6a.scope - libcontainer container 435570630bd354bac67541e8431a98c617cf5e150f9573df6a9da927f7001d6a. 
Jul 10 00:23:07.067647 containerd[1592]: time="2025-07-10T00:23:07.067598849Z" level=info msg="StartContainer for \"435570630bd354bac67541e8431a98c617cf5e150f9573df6a9da927f7001d6a\" returns successfully" Jul 10 00:23:07.138341 kubelet[2778]: I0710 00:23:07.138232 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55d6ff7666-t5pt2" podStartSLOduration=37.121896717 podStartE2EDuration="52.138198543s" podCreationTimestamp="2025-07-10 00:22:15 +0000 UTC" firstStartedPulling="2025-07-10 00:22:47.411623531 +0000 UTC m=+51.665517371" lastFinishedPulling="2025-07-10 00:23:02.427925346 +0000 UTC m=+66.681819197" observedRunningTime="2025-07-10 00:23:04.238832666 +0000 UTC m=+68.492726507" watchObservedRunningTime="2025-07-10 00:23:07.138198543 +0000 UTC m=+71.392092383" Jul 10 00:23:07.943653 kubelet[2778]: I0710 00:23:07.943610 2778 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 10 00:23:07.943653 kubelet[2778]: I0710 00:23:07.943662 2778 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 10 00:23:08.088495 systemd[1]: Started sshd@14-10.0.0.84:22-10.0.0.1:55142.service - OpenSSH per-connection server daemon (10.0.0.1:55142). Jul 10 00:23:08.169526 sshd[5476]: Accepted publickey for core from 10.0.0.1 port 55142 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:23:08.172515 sshd-session[5476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:08.179450 systemd-logind[1573]: New session 14 of user core. Jul 10 00:23:08.187262 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 10 00:23:08.364728 sshd[5480]: Connection closed by 10.0.0.1 port 55142 Jul 10 00:23:08.365162 sshd-session[5476]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:08.371393 systemd[1]: sshd@14-10.0.0.84:22-10.0.0.1:55142.service: Deactivated successfully. Jul 10 00:23:08.373567 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 00:23:08.374502 systemd-logind[1573]: Session 14 logged out. Waiting for processes to exit. Jul 10 00:23:08.376371 systemd-logind[1573]: Removed session 14. Jul 10 00:23:11.067923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount284345425.mount: Deactivated successfully. 
Jul 10 00:23:11.736691 containerd[1592]: time="2025-07-10T00:23:11.736622085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:23:11.737699 containerd[1592]: time="2025-07-10T00:23:11.737658056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 10 00:23:11.738923 containerd[1592]: time="2025-07-10T00:23:11.738867869Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:23:11.741078 containerd[1592]: time="2025-07-10T00:23:11.741037809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:23:11.741653 containerd[1592]: time="2025-07-10T00:23:11.741609067Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 4.798958415s" Jul 10 00:23:11.741653 containerd[1592]: time="2025-07-10T00:23:11.741642841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 10 00:23:11.743861 containerd[1592]: time="2025-07-10T00:23:11.743819453Z" level=info msg="CreateContainer within sandbox \"39c784068b4cb10ccbc2b7504d000fc479b8c654f27fff432bec13a503bf4e28\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 10 00:23:11.750792 containerd[1592]: time="2025-07-10T00:23:11.750736438Z" level=info msg="Container 3b2d15404bab866637506470c166269cfb31f208012c68df2737e84230a8d19e: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:23:11.761297 containerd[1592]: time="2025-07-10T00:23:11.761241201Z" level=info msg="CreateContainer within sandbox \"39c784068b4cb10ccbc2b7504d000fc479b8c654f27fff432bec13a503bf4e28\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"3b2d15404bab866637506470c166269cfb31f208012c68df2737e84230a8d19e\"" Jul 10 00:23:11.762938 containerd[1592]: time="2025-07-10T00:23:11.761762904Z" level=info msg="StartContainer for \"3b2d15404bab866637506470c166269cfb31f208012c68df2737e84230a8d19e\"" Jul 10 00:23:11.762938 containerd[1592]: time="2025-07-10T00:23:11.762865362Z" level=info msg="connecting to shim 3b2d15404bab866637506470c166269cfb31f208012c68df2737e84230a8d19e" address="unix:///run/containerd/s/40bf94580479b6668ed4752a4ecb90fe03e0d00e5cdd75a3f0bece10478c7f16" protocol=ttrpc version=3 Jul 10 00:23:11.796265 systemd[1]: Started cri-containerd-3b2d15404bab866637506470c166269cfb31f208012c68df2737e84230a8d19e.scope - libcontainer container 3b2d15404bab866637506470c166269cfb31f208012c68df2737e84230a8d19e. 
Jul 10 00:23:12.463037 containerd[1592]: time="2025-07-10T00:23:12.462943229Z" level=info msg="StartContainer for \"3b2d15404bab866637506470c166269cfb31f208012c68df2737e84230a8d19e\" returns successfully" Jul 10 00:23:13.379246 systemd[1]: Started sshd@15-10.0.0.84:22-10.0.0.1:50596.service - OpenSSH per-connection server daemon (10.0.0.1:50596). Jul 10 00:23:13.460377 sshd[5536]: Accepted publickey for core from 10.0.0.1 port 50596 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:23:13.462192 sshd-session[5536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:13.473599 systemd-logind[1573]: New session 15 of user core. Jul 10 00:23:13.480308 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 10 00:23:13.485040 kubelet[2778]: I0710 00:23:13.484658 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-76cd4f54bc-mb5k5" podStartSLOduration=3.60252311 podStartE2EDuration="28.48462832s" podCreationTimestamp="2025-07-10 00:22:45 +0000 UTC" firstStartedPulling="2025-07-10 00:22:46.860258783 +0000 UTC m=+51.114152623" lastFinishedPulling="2025-07-10 00:23:11.742363993 +0000 UTC m=+75.996257833" observedRunningTime="2025-07-10 00:23:13.484382002 +0000 UTC m=+77.738275872" watchObservedRunningTime="2025-07-10 00:23:13.48462832 +0000 UTC m=+77.738522160" Jul 10 00:23:13.486571 kubelet[2778]: I0710 00:23:13.486501 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-88rhw" podStartSLOduration=35.304487545 podStartE2EDuration="55.486488088s" podCreationTimestamp="2025-07-10 00:22:18 +0000 UTC" firstStartedPulling="2025-07-10 00:22:46.760461159 +0000 UTC m=+51.014354999" lastFinishedPulling="2025-07-10 00:23:06.942461692 +0000 UTC m=+71.196355542" observedRunningTime="2025-07-10 00:23:07.138096067 +0000 UTC m=+71.391989908" watchObservedRunningTime="2025-07-10 00:23:13.486488088 +0000 UTC m=+77.740381928" Jul 10 00:23:13.680630 sshd[5538]: Connection closed by 10.0.0.1 port 50596 Jul 10 00:23:13.680938 sshd-session[5536]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:13.684698 systemd[1]: sshd@15-10.0.0.84:22-10.0.0.1:50596.service: Deactivated successfully. Jul 10 00:23:13.687365 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 00:23:13.690068 systemd-logind[1573]: Session 15 logged out. Waiting for processes to exit. Jul 10 00:23:13.691571 systemd-logind[1573]: Removed session 15. Jul 10 00:23:14.742551 containerd[1592]: time="2025-07-10T00:23:14.742497143Z" level=info msg="TaskExit event in podsandbox handler container_id:\"933d22474b64410ca11b5388fd497d02f3b43d5d51d3540749cce97ff1ec8295\" id:\"7d73e3fe5ccf479214b35ffa6e37f30e811d3946f02915172ac1c716418eecd9\" pid:5566 exited_at:{seconds:1752106994 nanos:742065663}" Jul 10 00:23:18.701066 systemd[1]: Started sshd@16-10.0.0.84:22-10.0.0.1:50606.service - OpenSSH per-connection server daemon (10.0.0.1:50606). Jul 10 00:23:18.769097 sshd[5585]: Accepted publickey for core from 10.0.0.1 port 50606 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:23:18.771449 sshd-session[5585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:18.777821 systemd-logind[1573]: New session 16 of user core. Jul 10 00:23:18.785360 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jul 10 00:23:18.964487 sshd[5587]: Connection closed by 10.0.0.1 port 50606 Jul 10 00:23:18.966595 sshd-session[5585]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:18.972447 systemd-logind[1573]: Session 16 logged out. Waiting for processes to exit. Jul 10 00:23:18.973677 systemd[1]: sshd@16-10.0.0.84:22-10.0.0.1:50606.service: Deactivated successfully. Jul 10 00:23:18.978763 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 00:23:18.984834 systemd-logind[1573]: Removed session 16. Jul 10 00:23:23.992545 systemd[1]: Started sshd@17-10.0.0.84:22-10.0.0.1:42422.service - OpenSSH per-connection server daemon (10.0.0.1:42422). Jul 10 00:23:24.078050 sshd[5601]: Accepted publickey for core from 10.0.0.1 port 42422 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:23:24.080344 sshd-session[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:24.086641 systemd-logind[1573]: New session 17 of user core. Jul 10 00:23:24.098320 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 10 00:23:24.302844 sshd[5603]: Connection closed by 10.0.0.1 port 42422 Jul 10 00:23:24.303173 sshd-session[5601]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:24.307520 systemd[1]: sshd@17-10.0.0.84:22-10.0.0.1:42422.service: Deactivated successfully. Jul 10 00:23:24.310064 systemd[1]: session-17.scope: Deactivated successfully. Jul 10 00:23:24.312968 systemd-logind[1573]: Session 17 logged out. Waiting for processes to exit. Jul 10 00:23:24.314207 systemd-logind[1573]: Removed session 17. Jul 10 00:23:29.328890 systemd[1]: Started sshd@18-10.0.0.84:22-10.0.0.1:42438.service - OpenSSH per-connection server daemon (10.0.0.1:42438). Jul 10 00:23:29.441946 sshd[5622]: Accepted publickey for core from 10.0.0.1 port 42438 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:23:29.444043 sshd-session[5622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:29.449304 systemd-logind[1573]: New session 18 of user core. Jul 10 00:23:29.464300 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 10 00:23:29.705332 sshd[5624]: Connection closed by 10.0.0.1 port 42438 Jul 10 00:23:29.705727 sshd-session[5622]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:29.716168 systemd[1]: sshd@18-10.0.0.84:22-10.0.0.1:42438.service: Deactivated successfully. Jul 10 00:23:29.718187 systemd[1]: session-18.scope: Deactivated successfully. Jul 10 00:23:29.719192 systemd-logind[1573]: Session 18 logged out. Waiting for processes to exit. Jul 10 00:23:29.723277 systemd[1]: Started sshd@19-10.0.0.84:22-10.0.0.1:48152.service - OpenSSH per-connection server daemon (10.0.0.1:48152). Jul 10 00:23:29.723983 systemd-logind[1573]: Removed session 18. Jul 10 00:23:29.785207 sshd[5637]: Accepted publickey for core from 10.0.0.1 port 48152 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:23:29.787135 sshd-session[5637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:29.792595 systemd-logind[1573]: New session 19 of user core. Jul 10 00:23:29.804239 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jul 10 00:23:30.860455 sshd[5639]: Connection closed by 10.0.0.1 port 48152 Jul 10 00:23:30.863861 sshd-session[5637]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:30.878689 systemd[1]: Started sshd@20-10.0.0.84:22-10.0.0.1:48168.service - OpenSSH per-connection server daemon (10.0.0.1:48168). Jul 10 00:23:30.880974 systemd[1]: sshd@19-10.0.0.84:22-10.0.0.1:48152.service: Deactivated successfully. Jul 10 00:23:30.886962 systemd[1]: session-19.scope: Deactivated successfully. Jul 10 00:23:30.890217 systemd-logind[1573]: Session 19 logged out. Waiting for processes to exit. Jul 10 00:23:30.896334 systemd-logind[1573]: Removed session 19. Jul 10 00:23:30.975485 sshd[5647]: Accepted publickey for core from 10.0.0.1 port 48168 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:23:30.977600 sshd-session[5647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:30.984154 systemd-logind[1573]: New session 20 of user core. Jul 10 00:23:31.004388 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 10 00:23:31.352068 kubelet[2778]: I0710 00:23:31.351785 2778 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:23:31.998000 containerd[1592]: time="2025-07-10T00:23:31.997950653Z" level=info msg="TaskExit event in podsandbox handler container_id:\"690753c84b64d36fa69fb38084cf2b2ef4069fc2887d68630735d70bd78449bd\" id:\"b0a41aef89ea71fb2117f5cffa03f2f2578487fa0758d6bc4c80ca1999484e02\" pid:5679 exited_at:{seconds:1752107011 nanos:997504749}" Jul 10 00:23:32.152600 containerd[1592]: time="2025-07-10T00:23:32.152531299Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f958cd7e34ed2204458d4b33de3ff3f67651b8059e055ef7ebb92fea93850875\" id:\"1a06e8e5539af39854c57cc47afd1bb437dc2252167b248a4014ea02bdb21b87\" pid:5714 exited_at:{seconds:1752107012 nanos:149856606}" Jul 10 00:23:32.209301 containerd[1592]: time="2025-07-10T00:23:32.209217071Z" level=info msg="TaskExit event in podsandbox handler container_id:\"933d22474b64410ca11b5388fd497d02f3b43d5d51d3540749cce97ff1ec8295\" id:\"41c395a2def73a91353b2fb093ad7155ef158959a971c4fc67e021b1258ab27e\" pid:5716 exited_at:{seconds:1752107012 nanos:208715301}" Jul 10 00:23:33.202901 sshd[5652]: Connection closed by 10.0.0.1 port 48168 Jul 10 00:23:33.204147 sshd-session[5647]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:33.217514 systemd[1]: sshd@20-10.0.0.84:22-10.0.0.1:48168.service: Deactivated successfully. Jul 10 00:23:33.220534 systemd[1]: session-20.scope: Deactivated successfully. Jul 10 00:23:33.221625 systemd[1]: session-20.scope: Consumed 711ms CPU time, 74M memory peak. Jul 10 00:23:33.223942 systemd-logind[1573]: Session 20 logged out. Waiting for processes to exit. Jul 10 00:23:33.231914 systemd-logind[1573]: Removed session 20. Jul 10 00:23:33.232705 systemd[1]: Started sshd@21-10.0.0.84:22-10.0.0.1:48182.service - OpenSSH per-connection server daemon (10.0.0.1:48182). Jul 10 00:23:33.313726 sshd[5746]: Accepted publickey for core from 10.0.0.1 port 48182 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:23:33.315739 sshd-session[5746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:33.322073 systemd-logind[1573]: New session 21 of user core. Jul 10 00:23:33.329310 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jul 10 00:23:34.039600 sshd[5749]: Connection closed by 10.0.0.1 port 48182 Jul 10 00:23:34.040148 sshd-session[5746]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:34.052819 systemd[1]: sshd@21-10.0.0.84:22-10.0.0.1:48182.service: Deactivated successfully. Jul 10 00:23:34.055444 systemd[1]: session-21.scope: Deactivated successfully. Jul 10 00:23:34.056610 systemd-logind[1573]: Session 21 logged out. Waiting for processes to exit. Jul 10 00:23:34.061398 systemd[1]: Started sshd@22-10.0.0.84:22-10.0.0.1:48192.service - OpenSSH per-connection server daemon (10.0.0.1:48192). Jul 10 00:23:34.062607 systemd-logind[1573]: Removed session 21. Jul 10 00:23:34.124055 sshd[5760]: Accepted publickey for core from 10.0.0.1 port 48192 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:23:34.126179 sshd-session[5760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:34.142221 systemd-logind[1573]: New session 22 of user core. Jul 10 00:23:34.147142 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 10 00:23:34.301159 sshd[5762]: Connection closed by 10.0.0.1 port 48192 Jul 10 00:23:34.301059 sshd-session[5760]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:34.306785 systemd[1]: sshd@22-10.0.0.84:22-10.0.0.1:48192.service: Deactivated successfully. Jul 10 00:23:34.310389 systemd[1]: session-22.scope: Deactivated successfully. Jul 10 00:23:34.312293 systemd-logind[1573]: Session 22 logged out. Waiting for processes to exit. Jul 10 00:23:34.314265 systemd-logind[1573]: Removed session 22. Jul 10 00:23:39.317759 systemd[1]: Started sshd@23-10.0.0.84:22-10.0.0.1:48204.service - OpenSSH per-connection server daemon (10.0.0.1:48204). Jul 10 00:23:39.379309 sshd[5780]: Accepted publickey for core from 10.0.0.1 port 48204 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:23:39.381339 sshd-session[5780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:39.386954 systemd-logind[1573]: New session 23 of user core. Jul 10 00:23:39.395210 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 10 00:23:39.524775 sshd[5782]: Connection closed by 10.0.0.1 port 48204 Jul 10 00:23:39.525209 sshd-session[5780]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:39.530987 systemd[1]: sshd@23-10.0.0.84:22-10.0.0.1:48204.service: Deactivated successfully. Jul 10 00:23:39.534261 systemd[1]: session-23.scope: Deactivated successfully. Jul 10 00:23:39.535697 systemd-logind[1573]: Session 23 logged out. Waiting for processes to exit. Jul 10 00:23:39.537982 systemd-logind[1573]: Removed session 23. Jul 10 00:23:44.541144 systemd[1]: Started sshd@24-10.0.0.84:22-10.0.0.1:33954.service - OpenSSH per-connection server daemon (10.0.0.1:33954). Jul 10 00:23:44.617343 sshd[5798]: Accepted publickey for core from 10.0.0.1 port 33954 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:23:44.619855 sshd-session[5798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:44.627113 systemd-logind[1573]: New session 24 of user core. Jul 10 00:23:44.632305 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 10 00:23:44.820063 sshd[5800]: Connection closed by 10.0.0.1 port 33954 Jul 10 00:23:44.821518 sshd-session[5798]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:44.830051 systemd-logind[1573]: Session 24 logged out. 
Waiting for processes to exit. Jul 10 00:23:44.831892 systemd[1]: sshd@24-10.0.0.84:22-10.0.0.1:33954.service: Deactivated successfully. Jul 10 00:23:44.836039 systemd[1]: session-24.scope: Deactivated successfully. Jul 10 00:23:44.843478 systemd-logind[1573]: Removed session 24. Jul 10 00:23:49.838788 systemd[1]: Started sshd@25-10.0.0.84:22-10.0.0.1:59546.service - OpenSSH per-connection server daemon (10.0.0.1:59546). Jul 10 00:23:49.913364 sshd[5815]: Accepted publickey for core from 10.0.0.1 port 59546 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:23:49.917631 sshd-session[5815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:49.932438 systemd-logind[1573]: New session 25 of user core. Jul 10 00:23:49.937303 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 10 00:23:50.059767 sshd[5817]: Connection closed by 10.0.0.1 port 59546 Jul 10 00:23:50.060164 sshd-session[5815]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:50.064526 systemd[1]: sshd@25-10.0.0.84:22-10.0.0.1:59546.service: Deactivated successfully. Jul 10 00:23:50.067145 systemd[1]: session-25.scope: Deactivated successfully. Jul 10 00:23:50.069340 systemd-logind[1573]: Session 25 logged out. Waiting for processes to exit. Jul 10 00:23:50.072837 systemd-logind[1573]: Removed session 25. Jul 10 00:23:52.790792 containerd[1592]: time="2025-07-10T00:23:52.790718411Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f958cd7e34ed2204458d4b33de3ff3f67651b8059e055ef7ebb92fea93850875\" id:\"5d357dfaa8d4a85ba67db682ccc18a78a827b8ec1959dacbe83fb997a58d8bc3\" pid:5842 exited_at:{seconds:1752107032 nanos:790129811}" Jul 10 00:23:55.077782 systemd[1]: Started sshd@26-10.0.0.84:22-10.0.0.1:59562.service - OpenSSH per-connection server daemon (10.0.0.1:59562). Jul 10 00:23:55.142660 sshd[5853]: Accepted publickey for core from 10.0.0.1 port 59562 ssh2: RSA SHA256:a/WzkVKs173+YSebQY64/4LigDpieaPOYRH6W2gWTe4 Jul 10 00:23:55.144776 sshd-session[5853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:23:55.151049 systemd-logind[1573]: New session 26 of user core. Jul 10 00:23:55.162344 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 10 00:23:55.305030 sshd[5855]: Connection closed by 10.0.0.1 port 59562 Jul 10 00:23:55.305439 sshd-session[5853]: pam_unix(sshd:session): session closed for user core Jul 10 00:23:55.309609 systemd[1]: sshd@26-10.0.0.84:22-10.0.0.1:59562.service: Deactivated successfully. Jul 10 00:23:55.311836 systemd[1]: session-26.scope: Deactivated successfully. Jul 10 00:23:55.312745 systemd-logind[1573]: Session 26 logged out. Waiting for processes to exit. Jul 10 00:23:55.314644 systemd-logind[1573]: Removed session 26.