Jul 7 06:00:27.852790 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:56:00 -00 2025
Jul 7 06:00:27.852814 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:00:27.852826 kernel: BIOS-provided physical RAM map:
Jul 7 06:00:27.852833 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Jul 7 06:00:27.852839 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Jul 7 06:00:27.852846 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Jul 7 06:00:27.852853 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Jul 7 06:00:27.852860 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Jul 7 06:00:27.852870 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Jul 7 06:00:27.852876 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Jul 7 06:00:27.852883 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Jul 7 06:00:27.852892 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Jul 7 06:00:27.852898 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Jul 7 06:00:27.852905 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Jul 7 06:00:27.852913 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Jul 7 06:00:27.852920 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Jul 7 06:00:27.852932 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 7 06:00:27.852939 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 7 06:00:27.852946 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 7 06:00:27.852953 kernel: NX (Execute Disable) protection: active
Jul 7 06:00:27.852960 kernel: APIC: Static calls initialized
Jul 7 06:00:27.852967 kernel: e820: update [mem 0x9a13f018-0x9a148c57] usable ==> usable
Jul 7 06:00:27.852974 kernel: e820: update [mem 0x9a102018-0x9a13ee57] usable ==> usable
Jul 7 06:00:27.852980 kernel: extended physical RAM map:
Jul 7 06:00:27.852987 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Jul 7 06:00:27.852994 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Jul 7 06:00:27.853001 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Jul 7 06:00:27.853011 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Jul 7 06:00:27.853018 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a102017] usable
Jul 7 06:00:27.853025 kernel: reserve setup_data: [mem 0x000000009a102018-0x000000009a13ee57] usable
Jul 7 06:00:27.853031 kernel: reserve setup_data: [mem 0x000000009a13ee58-0x000000009a13f017] usable
Jul 7 06:00:27.853038 kernel: reserve setup_data: [mem 0x000000009a13f018-0x000000009a148c57] usable
Jul 7 06:00:27.853045 kernel: reserve setup_data: [mem 0x000000009a148c58-0x000000009b8ecfff] usable
Jul 7 06:00:27.853052 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Jul 7 06:00:27.853059 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Jul 7 06:00:27.853066 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Jul 7 06:00:27.853073 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Jul 7 06:00:27.853080 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Jul 7 06:00:27.853089 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Jul 7 06:00:27.853096 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Jul 7 06:00:27.853106 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Jul 7 06:00:27.853114 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 7 06:00:27.853121 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 7 06:00:27.853128 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 7 06:00:27.853138 kernel: efi: EFI v2.7 by EDK II
Jul 7 06:00:27.853145 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
Jul 7 06:00:27.853164 kernel: random: crng init done
Jul 7 06:00:27.853172 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Jul 7 06:00:27.853179 kernel: secureboot: Secure boot enabled
Jul 7 06:00:27.853186 kernel: SMBIOS 2.8 present.
Jul 7 06:00:27.853202 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jul 7 06:00:27.853218 kernel: DMI: Memory slots populated: 1/1
Jul 7 06:00:27.853226 kernel: Hypervisor detected: KVM
Jul 7 06:00:27.853233 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 7 06:00:27.853248 kernel: kvm-clock: using sched offset of 6738425744 cycles
Jul 7 06:00:27.853260 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 7 06:00:27.853267 kernel: tsc: Detected 2794.748 MHz processor
Jul 7 06:00:27.853284 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 7 06:00:27.853300 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 7 06:00:27.853317 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Jul 7 06:00:27.853325 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 7 06:00:27.853336 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 7 06:00:27.853344 kernel: Using GB pages for direct mapping
Jul 7 06:00:27.853353 kernel: ACPI: Early table checksum verification disabled
Jul 7 06:00:27.853364 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Jul 7 06:00:27.853372 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jul 7 06:00:27.853380 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:00:27.853387 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:00:27.853394 kernel: ACPI: FACS 0x000000009BBDD000 000040
Jul 7 06:00:27.853402 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:00:27.853410 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:00:27.853417 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:00:27.853424 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:00:27.853434 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 7 06:00:27.853442 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Jul 7 06:00:27.853449 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Jul 7 06:00:27.853457 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Jul 7 06:00:27.853464 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Jul 7 06:00:27.853471 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Jul 7 06:00:27.853479 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Jul 7 06:00:27.853486 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Jul 7 06:00:27.853493 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Jul 7 06:00:27.853518 kernel: No NUMA configuration found
Jul 7 06:00:27.853526 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Jul 7 06:00:27.853533 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Jul 7 06:00:27.853540 kernel: Zone ranges:
Jul 7 06:00:27.853548 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 7 06:00:27.853555 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Jul 7 06:00:27.853563 kernel: Normal empty
Jul 7 06:00:27.853570 kernel: Device empty
Jul 7 06:00:27.853577 kernel: Movable zone start for each node
Jul 7 06:00:27.853588 kernel: Early memory node ranges
Jul 7 06:00:27.853595 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Jul 7 06:00:27.853602 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Jul 7 06:00:27.853610 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Jul 7 06:00:27.853617 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Jul 7 06:00:27.853624 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Jul 7 06:00:27.853632 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Jul 7 06:00:27.853639 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 7 06:00:27.853646 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Jul 7 06:00:27.853656 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 7 06:00:27.853664 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jul 7 06:00:27.853671 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jul 7 06:00:27.853679 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Jul 7 06:00:27.853686 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 7 06:00:27.853693 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 7 06:00:27.853701 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 7 06:00:27.853708 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 7 06:00:27.853715 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 7 06:00:27.853728 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 7 06:00:27.853736 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 7 06:00:27.853743 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 7 06:00:27.853750 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 7 06:00:27.853758 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 7 06:00:27.853765 kernel: TSC deadline timer available
Jul 7 06:00:27.853772 kernel: CPU topo: Max. logical packages: 1
Jul 7 06:00:27.853780 kernel: CPU topo: Max. logical dies: 1
Jul 7 06:00:27.853787 kernel: CPU topo: Max. dies per package: 1
Jul 7 06:00:27.853803 kernel: CPU topo: Max. threads per core: 1
Jul 7 06:00:27.853811 kernel: CPU topo: Num. cores per package: 4
Jul 7 06:00:27.853819 kernel: CPU topo: Num. threads per package: 4
Jul 7 06:00:27.853828 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jul 7 06:00:27.853838 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 7 06:00:27.853846 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 7 06:00:27.853854 kernel: kvm-guest: setup PV sched yield
Jul 7 06:00:27.853861 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jul 7 06:00:27.853871 kernel: Booting paravirtualized kernel on KVM
Jul 7 06:00:27.853879 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 7 06:00:27.853887 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 7 06:00:27.853895 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jul 7 06:00:27.853903 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jul 7 06:00:27.853910 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 7 06:00:27.853918 kernel: kvm-guest: PV spinlocks enabled
Jul 7 06:00:27.853926 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 7 06:00:27.853934 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:00:27.853945 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 06:00:27.853953 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 7 06:00:27.853960 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 06:00:27.853968 kernel: Fallback order for Node 0: 0
Jul 7 06:00:27.853976 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Jul 7 06:00:27.853983 kernel: Policy zone: DMA32
Jul 7 06:00:27.853991 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 06:00:27.853999 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 7 06:00:27.854009 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 7 06:00:27.854017 kernel: ftrace: allocated 157 pages with 5 groups
Jul 7 06:00:27.854024 kernel: Dynamic Preempt: voluntary
Jul 7 06:00:27.854032 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 06:00:27.854040 kernel: rcu: RCU event tracing is enabled.
Jul 7 06:00:27.854048 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 7 06:00:27.854056 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 06:00:27.854064 kernel: Rude variant of Tasks RCU enabled.
Jul 7 06:00:27.854072 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 06:00:27.854082 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 06:00:27.854090 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 7 06:00:27.854098 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 06:00:27.854106 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 06:00:27.854116 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 06:00:27.854124 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 7 06:00:27.854132 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 06:00:27.854140 kernel: Console: colour dummy device 80x25
Jul 7 06:00:27.854148 kernel: printk: legacy console [ttyS0] enabled
Jul 7 06:00:27.854157 kernel: ACPI: Core revision 20240827
Jul 7 06:00:27.854165 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 7 06:00:27.854173 kernel: APIC: Switch to symmetric I/O mode setup
Jul 7 06:00:27.854181 kernel: x2apic enabled
Jul 7 06:00:27.854188 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 7 06:00:27.854196 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 7 06:00:27.854204 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 7 06:00:27.854212 kernel: kvm-guest: setup PV IPIs
Jul 7 06:00:27.854219 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 7 06:00:27.854230 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Jul 7 06:00:27.854237 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jul 7 06:00:27.854252 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 7 06:00:27.854260 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 7 06:00:27.854267 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 7 06:00:27.854277 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 7 06:00:27.854285 kernel: Spectre V2 : Mitigation: Retpolines
Jul 7 06:00:27.854293 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 7 06:00:27.854304 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 7 06:00:27.854317 kernel: RETBleed: Mitigation: untrained return thunk
Jul 7 06:00:27.854334 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 7 06:00:27.854350 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 7 06:00:27.854361 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 7 06:00:27.854369 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 7 06:00:27.854377 kernel: x86/bugs: return thunk changed
Jul 7 06:00:27.854384 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 7 06:00:27.854392 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 7 06:00:27.854402 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 7 06:00:27.854410 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 7 06:00:27.854418 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 7 06:00:27.854426 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 7 06:00:27.854433 kernel: Freeing SMP alternatives memory: 32K
Jul 7 06:00:27.854441 kernel: pid_max: default: 32768 minimum: 301
Jul 7 06:00:27.854449 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 7 06:00:27.854456 kernel: landlock: Up and running.
Jul 7 06:00:27.854464 kernel: SELinux: Initializing.
Jul 7 06:00:27.854474 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 06:00:27.854482 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 06:00:27.854490 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 7 06:00:27.854497 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 7 06:00:27.854518 kernel: ... version: 0
Jul 7 06:00:27.854525 kernel: ... bit width: 48
Jul 7 06:00:27.854535 kernel: ... generic registers: 6
Jul 7 06:00:27.854543 kernel: ... value mask: 0000ffffffffffff
Jul 7 06:00:27.854551 kernel: ... max period: 00007fffffffffff
Jul 7 06:00:27.854561 kernel: ... fixed-purpose events: 0
Jul 7 06:00:27.854569 kernel: ... event mask: 000000000000003f
Jul 7 06:00:27.854576 kernel: signal: max sigframe size: 1776
Jul 7 06:00:27.854584 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 06:00:27.854592 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 06:00:27.854600 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 7 06:00:27.854617 kernel: smp: Bringing up secondary CPUs ...
Jul 7 06:00:27.854626 kernel: smpboot: x86: Booting SMP configuration:
Jul 7 06:00:27.854642 kernel: .... node #0, CPUs: #1 #2 #3
Jul 7 06:00:27.854650 kernel: smp: Brought up 1 node, 4 CPUs
Jul 7 06:00:27.854661 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jul 7 06:00:27.854669 kernel: Memory: 2409216K/2552216K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 137064K reserved, 0K cma-reserved)
Jul 7 06:00:27.854677 kernel: devtmpfs: initialized
Jul 7 06:00:27.854685 kernel: x86/mm: Memory block size: 128MB
Jul 7 06:00:27.854693 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Jul 7 06:00:27.854700 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Jul 7 06:00:27.854708 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 06:00:27.854716 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 7 06:00:27.854726 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 06:00:27.854734 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 06:00:27.854742 kernel: audit: initializing netlink subsys (disabled)
Jul 7 06:00:27.854750 kernel: audit: type=2000 audit(1751868024.873:1): state=initialized audit_enabled=0 res=1
Jul 7 06:00:27.854757 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 06:00:27.854765 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 7 06:00:27.854773 kernel: cpuidle: using governor menu
Jul 7 06:00:27.854781 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 06:00:27.854788 kernel: dca service started, version 1.12.1
Jul 7 06:00:27.854799 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jul 7 06:00:27.854806 kernel: PCI: Using configuration type 1 for base access
Jul 7 06:00:27.854814 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 7 06:00:27.854822 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 06:00:27.854830 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 06:00:27.854838 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 06:00:27.854846 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 06:00:27.854853 kernel: ACPI: Added _OSI(Module Device)
Jul 7 06:00:27.854861 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 06:00:27.854871 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 06:00:27.854879 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 06:00:27.854886 kernel: ACPI: Interpreter enabled
Jul 7 06:00:27.854894 kernel: ACPI: PM: (supports S0 S5)
Jul 7 06:00:27.854902 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 7 06:00:27.854909 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 7 06:00:27.854917 kernel: PCI: Using E820 reservations for host bridge windows
Jul 7 06:00:27.854925 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 7 06:00:27.854933 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 7 06:00:27.855144 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 7 06:00:27.855281 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 7 06:00:27.855404 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 7 06:00:27.855414 kernel: PCI host bridge to bus 0000:00
Jul 7 06:00:27.855566 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 7 06:00:27.855681 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 7 06:00:27.855846 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 7 06:00:27.855969 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jul 7 06:00:27.856085 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jul 7 06:00:27.856194 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jul 7 06:00:27.856316 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 7 06:00:27.856533 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 7 06:00:27.856676 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 7 06:00:27.856838 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jul 7 06:00:27.856962 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jul 7 06:00:27.857081 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jul 7 06:00:27.857202 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 7 06:00:27.857349 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 7 06:00:27.857474 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jul 7 06:00:27.857615 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jul 7 06:00:27.857744 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jul 7 06:00:27.857882 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 7 06:00:27.858006 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jul 7 06:00:27.858127 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jul 7 06:00:27.858257 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jul 7 06:00:27.858395 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 7 06:00:27.858538 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jul 7 06:00:27.858663 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jul 7 06:00:27.858783 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jul 7 06:00:27.858904 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jul 7 06:00:27.859040 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 7 06:00:27.859166 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 7 06:00:27.859314 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 7 06:00:27.859441 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jul 7 06:00:27.859579 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jul 7 06:00:27.859714 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 7 06:00:27.859837 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jul 7 06:00:27.859847 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 7 06:00:27.859855 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 7 06:00:27.859863 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 7 06:00:27.859871 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 7 06:00:27.859883 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 7 06:00:27.859891 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 7 06:00:27.859899 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 7 06:00:27.859907 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 7 06:00:27.859915 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 7 06:00:27.859922 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 7 06:00:27.859930 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 7 06:00:27.859938 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 7 06:00:27.859946 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 7 06:00:27.859956 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 7 06:00:27.859964 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 7 06:00:27.859971 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 7 06:00:27.859979 kernel: iommu: Default domain type: Translated
Jul 7 06:00:27.859987 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 7 06:00:27.859995 kernel: efivars: Registered efivars operations
Jul 7 06:00:27.860002 kernel: PCI: Using ACPI for IRQ routing
Jul 7 06:00:27.860010 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 7 06:00:27.860018 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Jul 7 06:00:27.860028 kernel: e820: reserve RAM buffer [mem 0x9a102018-0x9bffffff]
Jul 7 06:00:27.860036 kernel: e820: reserve RAM buffer [mem 0x9a13f018-0x9bffffff]
Jul 7 06:00:27.860043 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Jul 7 06:00:27.860051 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Jul 7 06:00:27.860173 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 7 06:00:27.860304 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 7 06:00:27.860428 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 7 06:00:27.860438 kernel: vgaarb: loaded
Jul 7 06:00:27.860450 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 7 06:00:27.860458 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 7 06:00:27.860466 kernel: clocksource: Switched to clocksource kvm-clock
Jul 7 06:00:27.860473 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 06:00:27.860481 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 06:00:27.860489 kernel: pnp: PnP ACPI init
Jul 7 06:00:27.860704 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jul 7 06:00:27.860717 kernel: pnp: PnP ACPI: found 6 devices
Jul 7 06:00:27.860729 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 7 06:00:27.860737 kernel: NET: Registered PF_INET protocol family
Jul 7 06:00:27.860745 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 06:00:27.860753 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 7 06:00:27.860761 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 06:00:27.860769 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 06:00:27.860776 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 7 06:00:27.860784 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 7 06:00:27.860792 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 06:00:27.860802 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 06:00:27.860810 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 06:00:27.860818 kernel: NET: Registered PF_XDP protocol family
Jul 7 06:00:27.860942 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jul 7 06:00:27.861064 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jul 7 06:00:27.861177 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 7 06:00:27.861309 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 7 06:00:27.861422 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 7 06:00:27.861558 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jul 7 06:00:27.861670 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jul 7 06:00:27.861781 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jul 7 06:00:27.861791 kernel: PCI: CLS 0 bytes, default 64
Jul 7 06:00:27.861799 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Jul 7 06:00:27.861807 kernel: Initialise system trusted keyrings
Jul 7 06:00:27.861815 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 7 06:00:27.861823 kernel: Key type asymmetric registered
Jul 7 06:00:27.861831 kernel: Asymmetric key parser 'x509' registered
Jul 7 06:00:27.861843 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 7 06:00:27.861868 kernel: io scheduler mq-deadline registered
Jul 7 06:00:27.861878 kernel: io scheduler kyber registered
Jul 7 06:00:27.861886 kernel: io scheduler bfq registered
Jul 7 06:00:27.861894 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 7 06:00:27.861903 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 7 06:00:27.861911 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 7 06:00:27.861919 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 7 06:00:27.861927 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 06:00:27.861937 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 7 06:00:27.861946 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 7 06:00:27.861954 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 7 06:00:27.861962 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 7 06:00:27.862095 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 7 06:00:27.862108 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 7 06:00:27.862225 kernel: rtc_cmos 00:04: registered as rtc0
Jul 7 06:00:27.862350 kernel: rtc_cmos 00:04: setting system clock to 2025-07-07T06:00:27 UTC (1751868027)
Jul 7 06:00:27.862470 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 7 06:00:27.862480 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 7 06:00:27.862492 kernel: efifb: probing for efifb
Jul 7 06:00:27.862519 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jul 7 06:00:27.862539 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jul 7 06:00:27.862548 kernel: efifb: scrolling: redraw
Jul 7 06:00:27.862556 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 7 06:00:27.862564 kernel: Console: switching to colour frame buffer device 160x50
Jul 7 06:00:27.862572 kernel: fb0: EFI VGA frame buffer device
Jul 7 06:00:27.862584 kernel: pstore: Using crash dump compression: deflate
Jul 7 06:00:27.862592 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 7 06:00:27.862603 kernel: NET: Registered PF_INET6 protocol family
Jul 7 06:00:27.862611 kernel: Segment Routing with IPv6
Jul 7 06:00:27.862619 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 06:00:27.862630 kernel: NET: Registered PF_PACKET protocol family
Jul 7 06:00:27.862638 kernel: Key type dns_resolver registered
Jul 7 06:00:27.862646 kernel: IPI shorthand broadcast: enabled
Jul 7 06:00:27.862655 kernel: sched_clock: Marking stable (3643002446, 139606174)->(3818251044, -35642424)
Jul 7 06:00:27.862663 kernel: registered taskstats version 1
Jul 7 06:00:27.862671 kernel: Loading compiled-in X.509 certificates
Jul 7 06:00:27.862679 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: b8e96f4c6a9e663230fc9c12b186cf91fcc7a64e'
Jul 7 06:00:27.862687 kernel: Demotion targets for Node 0: null
Jul 7 06:00:27.862695 kernel: Key type .fscrypt registered
Jul 7 06:00:27.862706 kernel: Key type fscrypt-provisioning registered
Jul 7 06:00:27.862714 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 06:00:27.862722 kernel: ima: Allocated hash algorithm: sha1
Jul 7 06:00:27.862730 kernel: ima: No architecture policies found
Jul 7 06:00:27.862738 kernel: clk: Disabling unused clocks
Jul 7 06:00:27.862746 kernel: Warning: unable to open an initial console.
Jul 7 06:00:27.862755 kernel: Freeing unused kernel image (initmem) memory: 54432K Jul 7 06:00:27.862763 kernel: Write protecting the kernel read-only data: 24576k Jul 7 06:00:27.862771 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 7 06:00:27.862781 kernel: Run /init as init process Jul 7 06:00:27.862790 kernel: with arguments: Jul 7 06:00:27.862798 kernel: /init Jul 7 06:00:27.862806 kernel: with environment: Jul 7 06:00:27.862814 kernel: HOME=/ Jul 7 06:00:27.862822 kernel: TERM=linux Jul 7 06:00:27.862830 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 7 06:00:27.862839 systemd[1]: Successfully made /usr/ read-only. Jul 7 06:00:27.862853 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 06:00:27.862862 systemd[1]: Detected virtualization kvm. Jul 7 06:00:27.862871 systemd[1]: Detected architecture x86-64. Jul 7 06:00:27.862879 systemd[1]: Running in initrd. Jul 7 06:00:27.862888 systemd[1]: No hostname configured, using default hostname. Jul 7 06:00:27.862897 systemd[1]: Hostname set to . Jul 7 06:00:27.862905 systemd[1]: Initializing machine ID from VM UUID. Jul 7 06:00:27.862914 systemd[1]: Queued start job for default target initrd.target. Jul 7 06:00:27.862925 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 06:00:27.862934 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 06:00:27.862944 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 7 06:00:27.862953 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jul 7 06:00:27.862961 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 7 06:00:27.862971 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 7 06:00:27.862983 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 7 06:00:27.862992 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 7 06:00:27.863001 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 06:00:27.863010 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 06:00:27.863018 systemd[1]: Reached target paths.target - Path Units. Jul 7 06:00:27.863027 systemd[1]: Reached target slices.target - Slice Units. Jul 7 06:00:27.863036 systemd[1]: Reached target swap.target - Swaps. Jul 7 06:00:27.863047 systemd[1]: Reached target timers.target - Timer Units. Jul 7 06:00:27.863055 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 06:00:27.863066 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 06:00:27.863075 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 06:00:27.863084 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 7 06:00:27.863093 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 06:00:27.863101 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 06:00:27.863110 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 06:00:27.863119 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 06:00:27.863127 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 7 06:00:27.863139 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Jul 7 06:00:27.863147 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 7 06:00:27.863156 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 7 06:00:27.863165 systemd[1]: Starting systemd-fsck-usr.service... Jul 7 06:00:27.863173 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 06:00:27.863182 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 06:00:27.863191 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:00:27.863200 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 7 06:00:27.863212 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 06:00:27.863220 systemd[1]: Finished systemd-fsck-usr.service. Jul 7 06:00:27.863229 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 06:00:27.863272 systemd-journald[220]: Collecting audit messages is disabled. Jul 7 06:00:27.863296 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:00:27.863305 systemd-journald[220]: Journal started Jul 7 06:00:27.863325 systemd-journald[220]: Runtime Journal (/run/log/journal/82e54b7e4f6948ae8d072d301a095494) is 6M, max 48.2M, 42.2M free. Jul 7 06:00:27.854662 systemd-modules-load[223]: Inserted module 'overlay' Jul 7 06:00:27.865556 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 06:00:27.871384 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 06:00:27.875482 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 06:00:27.884524 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jul 7 06:00:27.885753 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 06:00:27.888856 systemd-modules-load[223]: Inserted module 'br_netfilter' Jul 7 06:00:27.889798 kernel: Bridge firewalling registered Jul 7 06:00:27.890143 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 06:00:27.890866 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 06:00:27.893217 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 06:00:27.893898 systemd-tmpfiles[239]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 7 06:00:27.898912 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 06:00:27.909729 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 06:00:27.910302 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 06:00:27.912997 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 06:00:27.916662 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 06:00:27.919373 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 7 06:00:27.940516 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50 Jul 7 06:00:27.958038 systemd-resolved[260]: Positive Trust Anchors: Jul 7 06:00:27.958053 systemd-resolved[260]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 06:00:27.958085 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 06:00:27.960758 systemd-resolved[260]: Defaulting to hostname 'linux'. Jul 7 06:00:27.962020 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 06:00:27.971379 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 06:00:28.071580 kernel: SCSI subsystem initialized Jul 7 06:00:28.080525 kernel: Loading iSCSI transport class v2.0-870. Jul 7 06:00:28.096527 kernel: iscsi: registered transport (tcp) Jul 7 06:00:28.123543 kernel: iscsi: registered transport (qla4xxx) Jul 7 06:00:28.123576 kernel: QLogic iSCSI HBA Driver Jul 7 06:00:28.144149 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 06:00:28.193268 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 06:00:28.209130 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 06:00:28.273872 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 7 06:00:28.279319 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jul 7 06:00:28.341546 kernel: raid6: avx2x4 gen() 28773 MB/s Jul 7 06:00:28.358543 kernel: raid6: avx2x2 gen() 29884 MB/s Jul 7 06:00:28.375845 kernel: raid6: avx2x1 gen() 22073 MB/s Jul 7 06:00:28.375886 kernel: raid6: using algorithm avx2x2 gen() 29884 MB/s Jul 7 06:00:28.393731 kernel: raid6: .... xor() 17714 MB/s, rmw enabled Jul 7 06:00:28.393775 kernel: raid6: using avx2x2 recovery algorithm Jul 7 06:00:28.415546 kernel: xor: automatically using best checksumming function avx Jul 7 06:00:28.604568 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 7 06:00:28.614764 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 7 06:00:28.617558 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:00:28.657775 systemd-udevd[472]: Using default interface naming scheme 'v255'. Jul 7 06:00:28.666100 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:00:28.667727 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 7 06:00:28.694464 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation Jul 7 06:00:28.729199 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 06:00:28.733921 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 06:00:28.835071 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 06:00:28.840589 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 7 06:00:28.878138 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 7 06:00:28.881732 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 7 06:00:28.892632 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 7 06:00:28.892675 kernel: GPT:9289727 != 19775487 Jul 7 06:00:28.892687 kernel: GPT:Alternate GPT header not at the end of the disk. 
Jul 7 06:00:28.892698 kernel: GPT:9289727 != 19775487 Jul 7 06:00:28.892708 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 7 06:00:28.892718 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 06:00:28.902526 kernel: libata version 3.00 loaded. Jul 7 06:00:28.907531 kernel: cryptd: max_cpu_qlen set to 1000 Jul 7 06:00:28.909777 kernel: ahci 0000:00:1f.2: version 3.0 Jul 7 06:00:28.910015 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 7 06:00:28.909342 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 06:00:28.916800 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jul 7 06:00:28.916970 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jul 7 06:00:28.917120 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 7 06:00:28.909482 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:00:28.914704 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:00:28.916472 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:00:28.923014 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 7 06:00:28.926884 kernel: scsi host0: ahci Jul 7 06:00:28.927285 kernel: scsi host1: ahci Jul 7 06:00:28.927498 kernel: scsi host2: ahci Jul 7 06:00:28.928838 kernel: scsi host3: ahci Jul 7 06:00:28.932726 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 7 06:00:28.933642 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 06:00:28.934063 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 7 06:00:28.937757 kernel: scsi host4: ahci Jul 7 06:00:28.938000 kernel: AES CTR mode by8 optimization enabled Jul 7 06:00:28.944535 kernel: scsi host5: ahci Jul 7 06:00:28.952131 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 31 lpm-pol 0 Jul 7 06:00:28.952201 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 31 lpm-pol 0 Jul 7 06:00:28.952223 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 31 lpm-pol 0 Jul 7 06:00:28.952234 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 31 lpm-pol 0 Jul 7 06:00:28.952245 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 31 lpm-pol 0 Jul 7 06:00:28.952256 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 31 lpm-pol 0 Jul 7 06:00:28.951372 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:00:28.976293 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 7 06:00:28.993246 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 7 06:00:28.994795 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:00:29.005193 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 7 06:00:29.018313 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 7 06:00:29.018780 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 7 06:00:29.020010 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 7 06:00:29.051045 disk-uuid[628]: Primary Header is updated. Jul 7 06:00:29.051045 disk-uuid[628]: Secondary Entries is updated. Jul 7 06:00:29.051045 disk-uuid[628]: Secondary Header is updated. 
Jul 7 06:00:29.055540 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 06:00:29.061550 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 06:00:29.263851 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 7 06:00:29.263947 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 7 06:00:29.263959 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 7 06:00:29.265547 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 7 06:00:29.265631 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 7 06:00:29.266533 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 7 06:00:29.267540 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 7 06:00:29.267558 kernel: ata3.00: applying bridge limits Jul 7 06:00:29.268542 kernel: ata3.00: configured for UDMA/100 Jul 7 06:00:29.270525 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 7 06:00:29.330536 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 7 06:00:29.330775 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 7 06:00:29.356772 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 7 06:00:29.730353 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 7 06:00:29.733758 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 06:00:29.736590 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 06:00:29.739229 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 06:00:29.742774 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 7 06:00:29.772245 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 7 06:00:30.061543 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 06:00:30.062355 disk-uuid[629]: The operation has completed successfully. Jul 7 06:00:30.097388 systemd[1]: disk-uuid.service: Deactivated successfully. 
Jul 7 06:00:30.097541 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 7 06:00:30.133247 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 7 06:00:30.159790 sh[669]: Success Jul 7 06:00:30.180690 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 7 06:00:30.180731 kernel: device-mapper: uevent: version 1.0.3 Jul 7 06:00:30.181937 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 7 06:00:30.191535 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 7 06:00:30.226720 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 7 06:00:30.230938 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 7 06:00:30.246092 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 7 06:00:30.252765 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 7 06:00:30.252801 kernel: BTRFS: device fsid 9d124217-7448-4fc6-a329-8a233bb5a0ac devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (681) Jul 7 06:00:30.253532 kernel: BTRFS info (device dm-0): first mount of filesystem 9d124217-7448-4fc6-a329-8a233bb5a0ac Jul 7 06:00:30.255120 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:00:30.255136 kernel: BTRFS info (device dm-0): using free-space-tree Jul 7 06:00:30.262220 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 7 06:00:30.265266 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 7 06:00:30.268142 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 7 06:00:30.269337 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jul 7 06:00:30.272745 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 7 06:00:30.305546 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (711) Jul 7 06:00:30.305618 kernel: BTRFS info (device vda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:00:30.307591 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:00:30.307628 kernel: BTRFS info (device vda6): using free-space-tree Jul 7 06:00:30.316528 kernel: BTRFS info (device vda6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:00:30.316979 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 7 06:00:30.319190 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 7 06:00:30.487781 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 06:00:30.493644 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 7 06:00:30.537442 ignition[752]: Ignition 2.21.0 Jul 7 06:00:30.537455 ignition[752]: Stage: fetch-offline Jul 7 06:00:30.537486 ignition[752]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:00:30.537495 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 06:00:30.537598 ignition[752]: parsed url from cmdline: "" Jul 7 06:00:30.537602 ignition[752]: no config URL provided Jul 7 06:00:30.537607 ignition[752]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 06:00:30.537615 ignition[752]: no config at "/usr/lib/ignition/user.ign" Jul 7 06:00:30.537644 ignition[752]: op(1): [started] loading QEMU firmware config module Jul 7 06:00:30.537649 ignition[752]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 7 06:00:30.549043 ignition[752]: op(1): [finished] loading QEMU firmware config module Jul 7 06:00:30.565286 systemd-networkd[856]: lo: Link UP Jul 7 06:00:30.565298 systemd-networkd[856]: lo: Gained carrier Jul 7 06:00:30.567223 systemd-networkd[856]: Enumeration completed Jul 7 06:00:30.567618 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 06:00:30.569145 systemd-networkd[856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:00:30.569151 systemd-networkd[856]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 06:00:30.569489 systemd[1]: Reached target network.target - Network. Jul 7 06:00:30.569868 systemd-networkd[856]: eth0: Link UP Jul 7 06:00:30.569874 systemd-networkd[856]: eth0: Gained carrier Jul 7 06:00:30.569884 systemd-networkd[856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 7 06:00:30.586595 systemd-networkd[856]: eth0: DHCPv4 address 10.0.0.17/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 7 06:00:30.607585 ignition[752]: parsing config with SHA512: 568d1ff29f458cc55941b8786841e6b7f306f265f38af7620fd28ad8d8b9b4556f6355e9b22858fd64dcd50ac32fec8116e3ac8e3350b7f86786f1854145d756 Jul 7 06:00:30.614711 unknown[752]: fetched base config from "system" Jul 7 06:00:30.615208 unknown[752]: fetched user config from "qemu" Jul 7 06:00:30.615700 ignition[752]: fetch-offline: fetch-offline passed Jul 7 06:00:30.615794 ignition[752]: Ignition finished successfully Jul 7 06:00:30.619278 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 06:00:30.621815 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 7 06:00:30.622940 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 7 06:00:30.680995 ignition[864]: Ignition 2.21.0 Jul 7 06:00:30.681014 ignition[864]: Stage: kargs Jul 7 06:00:30.682873 ignition[864]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:00:30.682895 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 06:00:30.686122 ignition[864]: kargs: kargs passed Jul 7 06:00:30.686213 ignition[864]: Ignition finished successfully Jul 7 06:00:30.694064 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 7 06:00:30.695862 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jul 7 06:00:30.744757 ignition[872]: Ignition 2.21.0 Jul 7 06:00:30.744772 ignition[872]: Stage: disks Jul 7 06:00:30.744953 ignition[872]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:00:30.744966 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 06:00:30.750957 ignition[872]: disks: disks passed Jul 7 06:00:30.751038 ignition[872]: Ignition finished successfully Jul 7 06:00:30.755523 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 7 06:00:30.757748 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 7 06:00:30.758247 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 06:00:30.758737 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 06:00:30.759055 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 06:00:30.759389 systemd[1]: Reached target basic.target - Basic System. Jul 7 06:00:30.761126 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 7 06:00:30.801863 systemd-fsck[882]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 7 06:00:30.837801 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 7 06:00:30.840757 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 7 06:00:30.957534 kernel: EXT4-fs (vda9): mounted filesystem df0fa228-af1b-4496-9a54-2d4ccccd27d9 r/w with ordered data mode. Quota mode: none. Jul 7 06:00:30.957789 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 7 06:00:30.959990 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 7 06:00:30.963413 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 06:00:30.965272 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 7 06:00:30.965934 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Jul 7 06:00:30.965973 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 7 06:00:30.965996 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 06:00:30.981636 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 7 06:00:30.983618 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 7 06:00:30.990142 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (891) Jul 7 06:00:30.990189 kernel: BTRFS info (device vda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:00:30.990207 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:00:30.991177 kernel: BTRFS info (device vda6): using free-space-tree Jul 7 06:00:30.996009 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 06:00:31.023526 initrd-setup-root[916]: cut: /sysroot/etc/passwd: No such file or directory Jul 7 06:00:31.028923 initrd-setup-root[923]: cut: /sysroot/etc/group: No such file or directory Jul 7 06:00:31.033029 initrd-setup-root[930]: cut: /sysroot/etc/shadow: No such file or directory Jul 7 06:00:31.037985 initrd-setup-root[937]: cut: /sysroot/etc/gshadow: No such file or directory Jul 7 06:00:31.138744 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 7 06:00:31.141058 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 7 06:00:31.142731 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 7 06:00:31.163531 kernel: BTRFS info (device vda6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:00:31.176212 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 7 06:00:31.194371 ignition[1006]: INFO : Ignition 2.21.0 Jul 7 06:00:31.194371 ignition[1006]: INFO : Stage: mount Jul 7 06:00:31.196075 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:00:31.196075 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 06:00:31.198672 ignition[1006]: INFO : mount: mount passed Jul 7 06:00:31.199464 ignition[1006]: INFO : Ignition finished successfully Jul 7 06:00:31.202735 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 7 06:00:31.204096 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 7 06:00:31.252140 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 7 06:00:31.254055 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 06:00:31.286536 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1018) Jul 7 06:00:31.288598 kernel: BTRFS info (device vda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:00:31.288621 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:00:31.288632 kernel: BTRFS info (device vda6): using free-space-tree Jul 7 06:00:31.292730 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 7 06:00:31.329945 ignition[1035]: INFO : Ignition 2.21.0
Jul 7 06:00:31.329945 ignition[1035]: INFO : Stage: files
Jul 7 06:00:31.331672 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:00:31.331672 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:00:31.335246 ignition[1035]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 06:00:31.337562 ignition[1035]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 06:00:31.337562 ignition[1035]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 06:00:31.340652 ignition[1035]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 06:00:31.342342 ignition[1035]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 06:00:31.342342 ignition[1035]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 06:00:31.341190 unknown[1035]: wrote ssh authorized keys file for user: core
Jul 7 06:00:31.346474 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 7 06:00:31.346474 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 7 06:00:31.389669 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 7 06:00:31.594893 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 7 06:00:31.597212 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 06:00:31.597212 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 06:00:31.597212 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 06:00:31.597212 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 06:00:31.597212 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 06:00:31.597212 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 06:00:31.597212 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 06:00:31.597212 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 06:00:31.612409 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 06:00:31.612409 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 06:00:31.612409 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 06:00:31.612409 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 06:00:31.612409 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 06:00:31.612409 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 7 06:00:31.688722 systemd-networkd[856]: eth0: Gained IPv6LL
Jul 7 06:00:32.383009 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 7 06:00:33.668161 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 06:00:33.668161 ignition[1035]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 7 06:00:33.672801 ignition[1035]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 06:00:33.674946 ignition[1035]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 06:00:33.674946 ignition[1035]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 7 06:00:33.674946 ignition[1035]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 7 06:00:33.674946 ignition[1035]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 7 06:00:33.674946 ignition[1035]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 7 06:00:33.674946 ignition[1035]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 7 06:00:33.674946 ignition[1035]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 7 06:00:33.708013 ignition[1035]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 7 06:00:33.712555 ignition[1035]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 7 06:00:33.714299 ignition[1035]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 7 06:00:33.714299 ignition[1035]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 06:00:33.714299 ignition[1035]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 06:00:33.714299 ignition[1035]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 06:00:33.714299 ignition[1035]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 06:00:33.714299 ignition[1035]: INFO : files: files passed
Jul 7 06:00:33.714299 ignition[1035]: INFO : Ignition finished successfully
Jul 7 06:00:33.721575 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 06:00:33.725989 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 06:00:33.728828 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 06:00:33.755061 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 06:00:33.755232 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 06:00:33.759333 initrd-setup-root-after-ignition[1063]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 7 06:00:33.762667 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:00:33.762667 initrd-setup-root-after-ignition[1066]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:00:33.767265 initrd-setup-root-after-ignition[1070]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:00:33.769444 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 06:00:33.770158 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 06:00:33.774371 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 06:00:33.840145 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 06:00:33.840308 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 06:00:33.841136 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 06:00:33.845577 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 06:00:33.845900 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 06:00:33.846958 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 06:00:33.875258 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:00:33.878303 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 06:00:33.902407 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:00:33.902802 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:00:33.903231 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 06:00:33.903924 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 06:00:33.904047 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:00:33.912192 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 06:00:33.914623 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 06:00:33.916842 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 06:00:33.917543 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 06:00:33.918121 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 06:00:33.918554 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 06:00:33.919123 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 06:00:33.919533 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 06:00:33.929496 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 06:00:33.930064 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 06:00:33.930402 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 06:00:33.930849 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 06:00:33.930960 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 06:00:33.938239 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:00:33.940290 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:00:33.941003 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 06:00:33.941164 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:00:33.941499 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 06:00:33.941682 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 06:00:33.949020 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 06:00:33.949162 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 06:00:33.949867 systemd[1]: Stopped target paths.target - Path Units.
Jul 7 06:00:33.950116 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 7 06:00:33.957637 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:00:33.958047 systemd[1]: Stopped target slices.target - Slice Units.
Jul 7 06:00:33.958377 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 7 06:00:33.958866 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 7 06:00:33.958973 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:00:33.963966 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 7 06:00:33.964053 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:00:33.965905 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 06:00:33.966116 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 06:00:33.967562 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 06:00:33.967683 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 06:00:33.972154 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 06:00:33.973736 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 06:00:33.975405 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 06:00:33.975565 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:00:33.975968 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 06:00:33.976083 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 06:00:33.988849 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 7 06:00:33.999729 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 7 06:00:34.016579 ignition[1091]: INFO : Ignition 2.21.0
Jul 7 06:00:34.016579 ignition[1091]: INFO : Stage: umount
Jul 7 06:00:34.019386 ignition[1091]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:00:34.019386 ignition[1091]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:00:34.023916 ignition[1091]: INFO : umount: umount passed
Jul 7 06:00:34.023916 ignition[1091]: INFO : Ignition finished successfully
Jul 7 06:00:34.024339 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 06:00:34.024569 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 06:00:34.026564 systemd[1]: Stopped target network.target - Network.
Jul 7 06:00:34.027702 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 06:00:34.027767 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 06:00:34.030304 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 06:00:34.030416 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 06:00:34.032069 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 7 06:00:34.032151 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 7 06:00:34.034691 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 7 06:00:34.034763 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 7 06:00:34.036171 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 7 06:00:34.036613 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 7 06:00:34.038279 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 06:00:34.045777 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 7 06:00:34.046015 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 7 06:00:34.051162 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 7 06:00:34.051473 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 7 06:00:34.051630 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 7 06:00:34.054737 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 7 06:00:34.055693 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 7 06:00:34.056879 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 7 06:00:34.056943 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:00:34.060084 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 7 06:00:34.061369 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 7 06:00:34.061424 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 06:00:34.061985 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 06:00:34.062033 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:00:34.068821 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 7 06:00:34.068913 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:00:34.069591 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 7 06:00:34.069652 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:00:34.111109 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:00:34.113054 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 7 06:00:34.113151 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 7 06:00:34.132482 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 7 06:00:34.132645 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 7 06:00:34.137363 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 7 06:00:34.137581 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:00:34.138238 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 7 06:00:34.138286 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:00:34.140863 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 7 06:00:34.140908 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:00:34.142824 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 7 06:00:34.142879 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 06:00:34.145279 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 7 06:00:34.145330 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 7 06:00:34.146065 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 06:00:34.146121 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:00:34.155262 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 7 06:00:34.156459 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 7 06:00:34.156581 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:00:34.160145 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 7 06:00:34.160209 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:00:34.163647 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 7 06:00:34.163702 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 06:00:34.166982 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 7 06:00:34.167039 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:00:34.167726 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:00:34.167772 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:00:34.173653 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 7 06:00:34.173712 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jul 7 06:00:34.173755 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 7 06:00:34.173812 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 7 06:00:34.185625 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 7 06:00:34.185747 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 7 06:00:34.204620 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 7 06:00:34.204748 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 7 06:00:34.205589 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 7 06:00:34.205927 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 7 06:00:34.205978 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 7 06:00:34.207184 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 7 06:00:34.227479 systemd[1]: Switching root.
Jul 7 06:00:34.260693 systemd-journald[220]: Journal stopped
Jul 7 06:00:35.851985 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Jul 7 06:00:35.852095 kernel: SELinux: policy capability network_peer_controls=1
Jul 7 06:00:35.852115 kernel: SELinux: policy capability open_perms=1
Jul 7 06:00:35.852139 kernel: SELinux: policy capability extended_socket_class=1
Jul 7 06:00:35.852154 kernel: SELinux: policy capability always_check_network=0
Jul 7 06:00:35.852169 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 7 06:00:35.852184 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 7 06:00:35.852198 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 7 06:00:35.852221 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 7 06:00:35.852236 kernel: SELinux: policy capability userspace_initial_context=0
Jul 7 06:00:35.852252 kernel: audit: type=1403 audit(1751868035.020:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 7 06:00:35.852269 systemd[1]: Successfully loaded SELinux policy in 50.389ms.
Jul 7 06:00:35.852298 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.284ms.
Jul 7 06:00:35.852316 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 06:00:35.852332 systemd[1]: Detected virtualization kvm.
Jul 7 06:00:35.852349 systemd[1]: Detected architecture x86-64.
Jul 7 06:00:35.852367 systemd[1]: Detected first boot.
Jul 7 06:00:35.852383 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 06:00:35.852398 zram_generator::config[1136]: No configuration found.
Jul 7 06:00:35.852415 kernel: Guest personality initialized and is inactive
Jul 7 06:00:35.852434 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 7 06:00:35.852448 kernel: Initialized host personality
Jul 7 06:00:35.852464 kernel: NET: Registered PF_VSOCK protocol family
Jul 7 06:00:35.852479 systemd[1]: Populated /etc with preset unit settings.
Jul 7 06:00:35.852496 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 7 06:00:35.852534 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 7 06:00:35.852551 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 7 06:00:35.852568 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 7 06:00:35.852591 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 7 06:00:35.852611 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 7 06:00:35.852627 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 7 06:00:35.852642 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 7 06:00:35.852658 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 7 06:00:35.852680 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 7 06:00:35.852692 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 7 06:00:35.852709 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 7 06:00:35.852721 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:00:35.852734 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:00:35.852748 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 7 06:00:35.852761 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 7 06:00:35.852774 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 7 06:00:35.852787 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 06:00:35.852799 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 7 06:00:35.852811 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:00:35.852823 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:00:35.852838 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 7 06:00:35.852851 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 7 06:00:35.852863 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 7 06:00:35.852875 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 7 06:00:35.852887 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:00:35.852899 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 06:00:35.852911 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:00:35.852923 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:00:35.852936 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 7 06:00:35.852950 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 7 06:00:35.852962 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 7 06:00:35.852975 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:00:35.852987 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:00:35.853000 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:00:35.853012 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 7 06:00:35.853024 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 7 06:00:35.853036 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 7 06:00:35.853057 systemd[1]: Mounting media.mount - External Media Directory...
Jul 7 06:00:35.853073 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:00:35.853086 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 7 06:00:35.853098 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 7 06:00:35.853110 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 7 06:00:35.853123 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 7 06:00:35.853135 systemd[1]: Reached target machines.target - Containers.
Jul 7 06:00:35.853147 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 7 06:00:35.853159 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:00:35.853172 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 06:00:35.853187 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 7 06:00:35.853199 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:00:35.853211 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 06:00:35.853223 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:00:35.853235 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 7 06:00:35.853247 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:00:35.853268 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 7 06:00:35.853282 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 7 06:00:35.853299 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 7 06:00:35.853316 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 7 06:00:35.853336 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 7 06:00:35.853353 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:00:35.853369 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 06:00:35.853384 kernel: fuse: init (API version 7.41)
Jul 7 06:00:35.853399 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 06:00:35.853411 kernel: loop: module loaded
Jul 7 06:00:35.853423 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 06:00:35.853438 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 7 06:00:35.853451 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 7 06:00:35.853466 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 06:00:35.853480 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 7 06:00:35.853494 systemd[1]: Stopped verity-setup.service.
Jul 7 06:00:35.853531 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:00:35.853544 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 7 06:00:35.853557 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 7 06:00:35.853569 systemd[1]: Mounted media.mount - External Media Directory.
Jul 7 06:00:35.853582 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 7 06:00:35.853597 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 7 06:00:35.853609 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 7 06:00:35.853621 kernel: ACPI: bus type drm_connector registered
Jul 7 06:00:35.853633 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 7 06:00:35.853645 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:00:35.853683 systemd-journald[1207]: Collecting audit messages is disabled.
Jul 7 06:00:35.853705 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 7 06:00:35.853721 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 7 06:00:35.853734 systemd-journald[1207]: Journal started
Jul 7 06:00:35.853757 systemd-journald[1207]: Runtime Journal (/run/log/journal/82e54b7e4f6948ae8d072d301a095494) is 6M, max 48.2M, 42.2M free.
Jul 7 06:00:35.567561 systemd[1]: Queued start job for default target multi-user.target.
Jul 7 06:00:35.595848 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 7 06:00:35.596520 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 7 06:00:35.857563 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 06:00:35.859314 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:00:35.859688 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:00:35.861150 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 06:00:35.861392 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 06:00:35.862949 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:00:35.863191 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:00:35.864728 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 7 06:00:35.864979 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 7 06:00:35.866643 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:00:35.866893 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:00:35.868492 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:00:35.869966 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:00:35.871673 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 7 06:00:35.873355 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 7 06:00:35.891285 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 06:00:35.894625 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 7 06:00:35.899609 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 7 06:00:35.900900 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 7 06:00:35.900935 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 06:00:35.903249 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 7 06:00:35.907741 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 7 06:00:35.909257 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:00:35.911418 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 7 06:00:35.917300 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 7 06:00:35.918737 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 06:00:35.919911 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 7 06:00:35.921313 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 06:00:35.922735 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:00:35.936797 systemd-journald[1207]: Time spent on flushing to /var/log/journal/82e54b7e4f6948ae8d072d301a095494 is 17.961ms for 1038 entries.
Jul 7 06:00:35.936797 systemd-journald[1207]: System Journal (/var/log/journal/82e54b7e4f6948ae8d072d301a095494) is 8M, max 195.6M, 187.6M free.
Jul 7 06:00:36.679015 systemd-journald[1207]: Received client request to flush runtime journal.
Jul 7 06:00:36.679164 kernel: loop0: detected capacity change from 0 to 146240
Jul 7 06:00:36.679250 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 7 06:00:36.679417 kernel: loop1: detected capacity change from 0 to 221472
Jul 7 06:00:36.679533 kernel: loop2: detected capacity change from 0 to 113872
Jul 7 06:00:36.679567 kernel: loop3: detected capacity change from 0 to 146240
Jul 7 06:00:35.927706 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 7 06:00:35.932597 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 06:00:35.935622 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 7 06:00:35.937957 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 7 06:00:35.968429 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:00:36.064135 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Jul 7 06:00:36.064153 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Jul 7 06:00:36.073281 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 06:00:36.232440 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 7 06:00:36.236008 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:00:36.238789 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 7 06:00:36.242054 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 7 06:00:36.244752 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 7 06:00:36.633803 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 7 06:00:36.637065 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 06:00:36.682807 systemd-tmpfiles[1272]: ACLs are not supported, ignoring.
Jul 7 06:00:36.682823 systemd-tmpfiles[1272]: ACLs are not supported, ignoring.
Jul 7 06:00:36.683356 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 7 06:00:36.690472 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:00:36.733544 kernel: loop4: detected capacity change from 0 to 221472
Jul 7 06:00:36.739556 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 7 06:00:36.740514 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 7 06:00:36.750641 kernel: loop5: detected capacity change from 0 to 113872
Jul 7 06:00:36.762303 (sd-merge)[1273]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 7 06:00:36.763083 (sd-merge)[1273]: Merged extensions into '/usr'.
Jul 7 06:00:36.768648 systemd[1]: Reload requested from client PID 1255 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 7 06:00:36.768669 systemd[1]: Reloading...
Jul 7 06:00:36.835561 zram_generator::config[1306]: No configuration found.
Jul 7 06:00:37.055339 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:00:37.154131 systemd[1]: Reloading finished in 384 ms.
Jul 7 06:00:37.182113 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 7 06:00:37.211354 systemd[1]: Starting ensure-sysext.service...
Jul 7 06:00:37.213617 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 06:00:37.240419 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 7 06:00:37.240467 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 7 06:00:37.240838 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 7 06:00:37.241170 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 7 06:00:37.242242 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 7 06:00:37.242711 systemd-tmpfiles[1343]: ACLs are not supported, ignoring.
Jul 7 06:00:37.242806 systemd-tmpfiles[1343]: ACLs are not supported, ignoring.
Jul 7 06:00:37.282591 systemd-tmpfiles[1343]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 06:00:37.282605 systemd-tmpfiles[1343]: Skipping /boot
Jul 7 06:00:37.284856 systemd[1]: Reload requested from client PID 1342 ('systemctl') (unit ensure-sysext.service)...
Jul 7 06:00:37.284879 systemd[1]: Reloading...
Jul 7 06:00:37.295560 ldconfig[1250]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 7 06:00:37.299601 systemd-tmpfiles[1343]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 06:00:37.299617 systemd-tmpfiles[1343]: Skipping /boot
Jul 7 06:00:37.337545 zram_generator::config[1374]: No configuration found.
Jul 7 06:00:37.457235 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:00:37.558384 systemd[1]: Reloading finished in 273 ms.
Jul 7 06:00:37.579731 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 7 06:00:37.581534 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 7 06:00:37.602872 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:00:37.615793 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 7 06:00:37.618700 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 7 06:00:37.621493 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 7 06:00:37.629704 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 06:00:37.633930 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:00:37.638703 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 7 06:00:37.642898 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:00:37.643101 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:00:37.645371 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:00:37.648528 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:00:37.651846 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:00:37.653203 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:00:37.653320 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:00:37.663862 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 7 06:00:37.664984 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:00:37.667157 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 7 06:00:37.669265 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:00:37.669535 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:00:37.676051 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:00:37.678830 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:00:37.679303 systemd-udevd[1415]: Using default interface naming scheme 'v255'.
Jul 7 06:00:37.681040 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:00:37.681284 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:00:37.688992 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:00:37.689281 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:00:37.693779 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:00:37.697202 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:00:37.702927 augenrules[1446]: No rules
Jul 7 06:00:37.704795 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:00:37.706153 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:00:37.706304 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:00:37.707861 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 7 06:00:37.709116 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:00:37.711291 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 06:00:37.711685 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 7 06:00:37.713948 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 7 06:00:37.716125 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:00:37.716375 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:00:37.717462 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:00:37.717759 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:00:37.719861 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:00:37.724709 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:00:37.726686 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:00:37.739752 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 7 06:00:37.748629 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 7 06:00:37.763711 systemd[1]: Finished ensure-sysext.service.
Jul 7 06:00:37.765197 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 7 06:00:37.773450 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:00:37.776800 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 7 06:00:37.778170 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:00:37.781662 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:00:37.784679 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 06:00:37.791734 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:00:37.799669 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:00:37.801084 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:00:37.801136 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:00:37.804404 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 06:00:37.808447 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 7 06:00:37.808931 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 7 06:00:37.808966 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:00:37.812230 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:00:37.813719 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:00:37.815436 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 06:00:37.815745 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 06:00:37.820249 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:00:37.821029 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:00:37.825489 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 06:00:37.831286 augenrules[1492]: /sbin/augenrules: No change
Jul 7 06:00:37.839596 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:00:37.841039 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:00:37.842718 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 06:00:37.850372 augenrules[1522]: No rules
Jul 7 06:00:37.852635 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 06:00:37.853611 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 7 06:00:37.901645 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 7 06:00:37.954172 systemd-resolved[1414]: Positive Trust Anchors:
Jul 7 06:00:37.954403 systemd-resolved[1414]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 06:00:37.954435 systemd-resolved[1414]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 06:00:37.958747 systemd-resolved[1414]: Defaulting to hostname 'linux'.
Jul 7 06:00:37.963209 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 06:00:37.964542 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:00:37.971535 kernel: mousedev: PS/2 mouse device common for all mice
Jul 7 06:00:37.983538 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jul 7 06:00:37.994539 kernel: ACPI: button: Power Button [PWRF]
Jul 7 06:00:37.999861 systemd-networkd[1504]: lo: Link UP
Jul 7 06:00:37.999877 systemd-networkd[1504]: lo: Gained carrier
Jul 7 06:00:38.000636 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 7 06:00:38.003490 systemd-networkd[1504]: Enumeration completed
Jul 7 06:00:38.003840 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 7 06:00:38.004141 systemd-networkd[1504]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:00:38.004218 systemd-networkd[1504]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 06:00:38.004915 systemd-networkd[1504]: eth0: Link UP
Jul 7 06:00:38.005357 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 06:00:38.005461 systemd-networkd[1504]: eth0: Gained carrier
Jul 7 06:00:38.005540 systemd-networkd[1504]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:00:38.007009 systemd[1]: Reached target network.target - Network.
Jul 7 06:00:38.009430 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 7 06:00:38.017237 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 7 06:00:38.029214 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jul 7 06:00:38.029577 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 7 06:00:38.029203 systemd-networkd[1504]: eth0: DHCPv4 address 10.0.0.17/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 7 06:00:38.032544 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 7 06:00:38.036925 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 7 06:00:38.037350 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 06:00:38.037854 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 7 06:00:38.038144 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 7 06:00:38.038456 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 7 06:00:38.038820 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 7 06:00:38.039141 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 7 06:00:38.039169 systemd[1]: Reached target paths.target - Path Units.
Jul 7 06:00:38.039489 systemd[1]: Reached target time-set.target - System Time Set.
Jul 7 06:00:38.040918 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 7 06:00:38.041454 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 7 06:00:38.041905 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 06:00:38.050625 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 7 06:00:38.056217 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 7 06:00:38.062549 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 7 06:00:38.066949 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 7 06:00:38.069643 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 7 06:00:38.989059 systemd-resolved[1414]: Clock change detected. Flushing caches.
Jul 7 06:00:38.989251 systemd-timesyncd[1505]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 7 06:00:38.989313 systemd-timesyncd[1505]: Initial clock synchronization to Mon 2025-07-07 06:00:38.988995 UTC.
Jul 7 06:00:38.994047 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 7 06:00:39.018283 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 7 06:00:39.022266 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 7 06:00:39.023989 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 7 06:00:39.025628 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 7 06:00:39.034132 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 06:00:39.035336 systemd[1]: Reached target basic.target - Basic System.
Jul 7 06:00:39.037067 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 7 06:00:39.037106 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 7 06:00:39.042108 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 7 06:00:39.046113 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 7 06:00:39.051290 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 7 06:00:39.054887 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 7 06:00:39.059087 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 7 06:00:39.064722 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 7 06:00:39.066317 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 7 06:00:39.079053 jq[1567]: false
Jul 7 06:00:39.098466 extend-filesystems[1568]: Found /dev/vda6
Jul 7 06:00:39.099613 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Refreshing passwd entry cache
Jul 7 06:00:39.099261 oslogin_cache_refresh[1569]: Refreshing passwd entry cache
Jul 7 06:00:39.103744 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 7 06:00:39.106499 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 7 06:00:39.108709 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 7 06:00:39.116711 extend-filesystems[1568]: Found /dev/vda9
Jul 7 06:00:39.115742 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 7 06:00:39.119006 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Failure getting users, quitting
Jul 7 06:00:39.116915 oslogin_cache_refresh[1569]: Failure getting users, quitting
Jul 7 06:00:39.122953 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 7 06:00:39.122953 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Refreshing group entry cache
Jul 7 06:00:39.120986 oslogin_cache_refresh[1569]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 7 06:00:39.121052 oslogin_cache_refresh[1569]: Refreshing group entry cache
Jul 7 06:00:39.129768 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Failure getting groups, quitting
Jul 7 06:00:39.129768 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 7 06:00:39.129849 extend-filesystems[1568]: Checking size of /dev/vda9
Jul 7 06:00:39.128377 oslogin_cache_refresh[1569]: Failure getting groups, quitting
Jul 7 06:00:39.125165 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 7 06:00:39.128388 oslogin_cache_refresh[1569]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 7 06:00:39.127706 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 7 06:00:39.131756 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 7 06:00:39.133414 systemd[1]: Starting update-engine.service - Update Engine...
Jul 7 06:00:39.136280 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 7 06:00:39.141478 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 7 06:00:39.143742 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 7 06:00:39.145273 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 7 06:00:39.145678 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 7 06:00:39.145992 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 7 06:00:39.148162 systemd[1]: motdgen.service: Deactivated successfully.
Jul 7 06:00:39.148451 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 7 06:00:39.159279 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 7 06:00:39.165259 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 7 06:00:39.178274 extend-filesystems[1568]: Resized partition /dev/vda9
Jul 7 06:00:39.184061 extend-filesystems[1602]: resize2fs 1.47.2 (1-Jan-2025)
Jul 7 06:00:39.190987 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 7 06:00:39.203597 jq[1587]: true
Jul 7 06:00:39.203375 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:00:39.205540 update_engine[1586]: I20250707 06:00:39.204532 1586 main.cc:92] Flatcar Update Engine starting
Jul 7 06:00:39.211427 (ntainerd)[1605]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 7 06:00:39.236981 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 7 06:00:39.242080 kernel: kvm_amd: TSC scaling supported
Jul 7 06:00:39.242122 kernel: kvm_amd: Nested Virtualization enabled
Jul 7 06:00:39.242136 kernel: kvm_amd: Nested Paging enabled
Jul 7 06:00:39.311079 kernel: kvm_amd: LBR virtualization supported
Jul 7 06:00:39.311157 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 7 06:00:39.311173 kernel: kvm_amd: Virtual GIF supported
Jul 7 06:00:39.311187 update_engine[1586]: I20250707 06:00:39.279284 1586 update_check_scheduler.cc:74] Next update check in 4m56s
Jul 7 06:00:39.275934 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 7 06:00:39.273529 dbus-daemon[1564]: [system] SELinux support is enabled
Jul 7 06:00:39.321319 tar[1591]: linux-amd64/helm
Jul 7 06:00:39.321600 extend-filesystems[1602]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 7 06:00:39.321600 extend-filesystems[1602]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 7 06:00:39.321600 extend-filesystems[1602]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 7 06:00:39.313230 systemd-logind[1584]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 7 06:00:39.323895 jq[1609]: true
Jul 7 06:00:39.324309 extend-filesystems[1568]: Resized filesystem in /dev/vda9
Jul 7 06:00:39.313257 systemd-logind[1584]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 7 06:00:39.313588 systemd-logind[1584]: New seat seat0.
Jul 7 06:00:39.326914 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 7 06:00:39.337422 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 7 06:00:39.357373 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 7 06:00:39.446059 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:00:39.450561 dbus-daemon[1564]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 7 06:00:39.462426 systemd[1]: Started update-engine.service - Update Engine.
Jul 7 06:00:39.467788 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 7 06:00:39.467993 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 7 06:00:39.469424 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 7 06:00:39.469555 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 7 06:00:39.473268 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 7 06:00:39.486641 kernel: EDAC MC: Ver: 3.0.0
Jul 7 06:00:39.490415 bash[1633]: Updated "/home/core/.ssh/authorized_keys"
Jul 7 06:00:39.491452 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 7 06:00:39.495651 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 7 06:00:39.544840 sshd_keygen[1606]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 7 06:00:39.549309 locksmithd[1641]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 7 06:00:39.572587 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 7 06:00:39.576179 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 7 06:00:39.601869 containerd[1605]: time="2025-07-07T06:00:39Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 7 06:00:39.603274 systemd[1]: issuegen.service: Deactivated successfully.
Jul 7 06:00:39.603639 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 7 06:00:39.604972 containerd[1605]: time="2025-07-07T06:00:39.604471530Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 7 06:00:39.611825 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 7 06:00:39.619487 containerd[1605]: time="2025-07-07T06:00:39.619408693Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="22.422µs"
Jul 7 06:00:39.619487 containerd[1605]: time="2025-07-07T06:00:39.619465580Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 7 06:00:39.619648 containerd[1605]: time="2025-07-07T06:00:39.619513229Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 7 06:00:39.619789 containerd[1605]: time="2025-07-07T06:00:39.619749011Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 7 06:00:39.619832 containerd[1605]: time="2025-07-07T06:00:39.619798444Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 7 06:00:39.619907 containerd[1605]: time="2025-07-07T06:00:39.619833690Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 7 06:00:39.620009 containerd[1605]: time="2025-07-07T06:00:39.619910734Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 7 06:00:39.620009 containerd[1605]: time="2025-07-07T06:00:39.619954286Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 7 06:00:39.620372 containerd[1605]: time="2025-07-07T06:00:39.620334309Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 7 06:00:39.620372 containerd[1605]: time="2025-07-07T06:00:39.620364115Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 7 06:00:39.620458 containerd[1605]: time="2025-07-07T06:00:39.620378081Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 7 06:00:39.620458 containerd[1605]: time="2025-07-07T06:00:39.620387268Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 7 06:00:39.620528 containerd[1605]: time="2025-07-07T06:00:39.620488057Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 7 06:00:39.620760 containerd[1605]: time="2025-07-07T06:00:39.620730402Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 7 06:00:39.620812 containerd[1605]: time="2025-07-07T06:00:39.620799712Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 7 06:00:39.620812 containerd[1605]: time="2025-07-07T06:00:39.620811223Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 7 06:00:39.620869 containerd[1605]: time="2025-07-07T06:00:39.620845798Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 7 06:00:39.621135 containerd[1605]: time="2025-07-07T06:00:39.621100506Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 7 06:00:39.621215 containerd[1605]: time="2025-07-07T06:00:39.621185024Z" level=info msg="metadata content store policy set" policy=shared
Jul 7 06:00:39.660133 containerd[1605]: time="2025-07-07T06:00:39.660049629Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 7 06:00:39.660485 containerd[1605]: time="2025-07-07T06:00:39.660309366Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 7 06:00:39.660485 containerd[1605]: time="2025-07-07T06:00:39.660337008Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 7 06:00:39.660485 containerd[1605]: time="2025-07-07T06:00:39.660355463Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 7 06:00:39.660485 containerd[1605]: time="2025-07-07T06:00:39.660373006Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 7 06:00:39.660485 containerd[1605]: time="2025-07-07T06:00:39.660386711Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 7 06:00:39.660485 containerd[1605]: time="2025-07-07T06:00:39.660413822Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 7 06:00:39.660485 containerd[1605]: time="2025-07-07T06:00:39.660427107Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 7 06:00:39.660485 containerd[1605]: time="2025-07-07T06:00:39.660440432Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 7 06:00:39.660485 containerd[1605]: time="2025-07-07T06:00:39.660453887Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 7 06:00:39.660821 containerd[1605]: time="2025-07-07T06:00:39.660465108Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 7 06:00:39.661145 containerd[1605]: time="2025-07-07T06:00:39.660791911Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 7 06:00:39.661299 containerd[1605]: time="2025-07-07T06:00:39.661275769Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 7 06:00:39.661397 containerd[1605]: time="2025-07-07T06:00:39.661369444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 7 06:00:39.661474 containerd[1605]: time="2025-07-07T06:00:39.661456257Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 7 06:00:39.661588 containerd[1605]: time="2025-07-07T06:00:39.661541647Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 7 06:00:39.661588 containerd[1605]: time="2025-07-07T06:00:39.661565492Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 7 06:00:39.661588 containerd[1605]: time="2025-07-07T06:00:39.661580410Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 7 06:00:39.661588 containerd[1605]: time="2025-07-07T06:00:39.661597392Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 7 06:00:39.661588 containerd[1605]: time="2025-07-07T06:00:39.661612360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 7 06:00:39.661862 containerd[1605]: time="2025-07-07T06:00:39.661628480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 7 06:00:39.661862 containerd[1605]: time="2025-07-07T06:00:39.661643909Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 7 06:00:39.661862 containerd[1605]: time="2025-07-07T06:00:39.661658897Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 7 06:00:39.661862 containerd[1605]: time="2025-07-07T06:00:39.661776347Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 7 06:00:39.661862 containerd[1605]: time="2025-07-07T06:00:39.661839666Z" level=info msg="Start snapshots syncer"
Jul 7 06:00:39.661975 containerd[1605]: time="2025-07-07T06:00:39.661903646Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 7 06:00:39.662327 containerd[1605]: time="2025-07-07T06:00:39.662272398Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 7 06:00:39.662494 containerd[1605]: time="2025-07-07T06:00:39.662343812Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 7 06:00:39.663361 containerd[1605]: time="2025-07-07T06:00:39.663334449Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 7 06:00:39.663531 containerd[1605]: time="2025-07-07T06:00:39.663493387Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 7 06:00:39.663531 containerd[1605]: time="2025-07-07T06:00:39.663525307Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 7 06:00:39.663597 containerd[1605]: time="2025-07-07T06:00:39.663538161Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 7 06:00:39.663597 containerd[1605]: time="2025-07-07T06:00:39.663549944Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 7 06:00:39.663597 containerd[1605]: time="2025-07-07T06:00:39.663563990Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 7 06:00:39.663597 containerd[1605]: time="2025-07-07T06:00:39.663576744Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 7 06:00:39.663597 containerd[1605]: time="2025-07-07T06:00:39.663590510Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 7 06:00:39.663699 containerd[1605]: time="2025-07-07T06:00:39.663616078Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 7 06:00:39.663699 containerd[1605]: time="2025-07-07T06:00:39.663629102Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 7 06:00:39.663699 containerd[1605]: time="2025-07-07T06:00:39.663642016Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 7 06:00:39.667957 containerd[1605]: time="2025-07-07T06:00:39.667047843Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 7 06:00:39.667957 containerd[1605]: time="2025-07-07T06:00:39.667119017Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 7 06:00:39.667957 containerd[1605]: time="2025-07-07T06:00:39.667132442Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 7 06:00:39.667957 containerd[1605]: time="2025-07-07T06:00:39.667142892Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 7 06:00:39.667957 containerd[1605]: time="2025-07-07T06:00:39.667150857Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 7 06:00:39.667957 containerd[1605]: time="2025-07-07T06:00:39.667161296Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 7 06:00:39.667957 containerd[1605]: time="2025-07-07T06:00:39.667175222Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 7 06:00:39.667957 containerd[1605]: time="2025-07-07T06:00:39.667202874Z" level=info msg="runtime interface created"
Jul 7 06:00:39.667957 containerd[1605]: time="2025-07-07T06:00:39.667209336Z" level=info msg="created NRI interface"
Jul 7 06:00:39.667957 containerd[1605]: time="2025-07-07T06:00:39.667218023Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 7 06:00:39.667957 containerd[1605]: time="2025-07-07T06:00:39.667237850Z" level=info msg="Connect containerd service"
Jul 7 06:00:39.667957 containerd[1605]: time="2025-07-07T06:00:39.667291941Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 7 06:00:39.669022 containerd[1605]: time="2025-07-07T06:00:39.668970729Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 7 06:00:39.674410 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 7 06:00:39.678900 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 7 06:00:39.684250 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 7 06:00:39.689120 systemd[1]: Reached target getty.target - Login Prompts.
Jul 7 06:00:39.809730 containerd[1605]: time="2025-07-07T06:00:39.809582857Z" level=info msg="Start subscribing containerd event"
Jul 7 06:00:39.809730 containerd[1605]: time="2025-07-07T06:00:39.809676403Z" level=info msg="Start recovering state"
Jul 7 06:00:39.812105 containerd[1605]: time="2025-07-07T06:00:39.809884273Z" level=info msg="Start event monitor"
Jul 7 06:00:39.812105 containerd[1605]: time="2025-07-07T06:00:39.809948243Z" level=info msg="Start cni network conf syncer for default"
Jul 7 06:00:39.812105 containerd[1605]: time="2025-07-07T06:00:39.809959995Z" level=info msg="Start streaming server"
Jul 7 06:00:39.812105 containerd[1605]: time="2025-07-07T06:00:39.809979251Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 7 06:00:39.812105 containerd[1605]: time="2025-07-07T06:00:39.809986685Z" level=info msg="runtime interface starting up..."
Jul 7 06:00:39.812105 containerd[1605]: time="2025-07-07T06:00:39.809992766Z" level=info msg="starting plugins..."
Jul 7 06:00:39.812105 containerd[1605]: time="2025-07-07T06:00:39.810027401Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 7 06:00:39.812361 containerd[1605]: time="2025-07-07T06:00:39.812280997Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 7 06:00:39.812507 containerd[1605]: time="2025-07-07T06:00:39.812412574Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 7 06:00:39.812709 containerd[1605]: time="2025-07-07T06:00:39.812581611Z" level=info msg="containerd successfully booted in 0.211309s"
Jul 7 06:00:39.812858 systemd[1]: Started containerd.service - containerd container runtime.
Jul 7 06:00:39.887452 tar[1591]: linux-amd64/LICENSE
Jul 7 06:00:39.887605 tar[1591]: linux-amd64/README.md
Jul 7 06:00:39.920619 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 7 06:00:40.860153 systemd-networkd[1504]: eth0: Gained IPv6LL
Jul 7 06:00:40.863756 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 7 06:00:40.865767 systemd[1]: Reached target network-online.target - Network is Online.
Jul 7 06:00:40.868908 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 7 06:00:40.871883 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:00:40.874415 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 7 06:00:40.903236 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 7 06:00:40.905783 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 7 06:00:40.906273 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 7 06:00:40.909529 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 7 06:00:42.205957 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 7 06:00:42.208981 systemd[1]: Started sshd@0-10.0.0.17:22-10.0.0.1:56446.service - OpenSSH per-connection server daemon (10.0.0.1:56446).
Jul 7 06:00:42.325246 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 56446 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:00:42.327579 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:00:42.336680 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 7 06:00:42.339661 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 7 06:00:42.375480 systemd-logind[1584]: New session 1 of user core.
Jul 7 06:00:42.464267 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 7 06:00:42.534611 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 7 06:00:42.592228 (systemd)[1707]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 7 06:00:42.595656 systemd-logind[1584]: New session c1 of user core.
Jul 7 06:00:42.781412 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:00:42.783227 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 7 06:00:42.798498 (kubelet)[1718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 06:00:42.834964 systemd[1707]: Queued start job for default target default.target.
Jul 7 06:00:42.852261 systemd[1707]: Created slice app.slice - User Application Slice.
Jul 7 06:00:42.852301 systemd[1707]: Reached target paths.target - Paths.
Jul 7 06:00:42.852368 systemd[1707]: Reached target timers.target - Timers.
Jul 7 06:00:42.854503 systemd[1707]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 7 06:00:42.867508 systemd[1707]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 7 06:00:42.867655 systemd[1707]: Reached target sockets.target - Sockets.
Jul 7 06:00:42.867708 systemd[1707]: Reached target basic.target - Basic System.
Jul 7 06:00:42.867751 systemd[1707]: Reached target default.target - Main User Target.
Jul 7 06:00:42.867784 systemd[1707]: Startup finished in 258ms.
Jul 7 06:00:42.868484 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 7 06:00:42.871453 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 7 06:00:42.872753 systemd[1]: Startup finished in 3.706s (kernel) + 7.382s (initrd) + 6.985s (userspace) = 18.074s.
Jul 7 06:00:43.144211 systemd[1]: Started sshd@1-10.0.0.17:22-10.0.0.1:56460.service - OpenSSH per-connection server daemon (10.0.0.1:56460).
Jul 7 06:00:43.216834 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 56460 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:00:43.219513 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:00:43.226163 systemd-logind[1584]: New session 2 of user core.
Jul 7 06:00:43.234068 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 7 06:00:43.297029 sshd[1735]: Connection closed by 10.0.0.1 port 56460
Jul 7 06:00:43.297479 sshd-session[1733]: pam_unix(sshd:session): session closed for user core
Jul 7 06:00:43.351189 systemd[1]: sshd@1-10.0.0.17:22-10.0.0.1:56460.service: Deactivated successfully.
Jul 7 06:00:43.353757 systemd[1]: session-2.scope: Deactivated successfully.
Jul 7 06:00:43.354702 systemd-logind[1584]: Session 2 logged out. Waiting for processes to exit.
Jul 7 06:00:43.359082 systemd[1]: Started sshd@2-10.0.0.17:22-10.0.0.1:56466.service - OpenSSH per-connection server daemon (10.0.0.1:56466).
Jul 7 06:00:43.359955 systemd-logind[1584]: Removed session 2.
Jul 7 06:00:43.417991 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 56466 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:00:43.420018 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:00:43.426123 systemd-logind[1584]: New session 3 of user core.
Jul 7 06:00:43.437136 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 7 06:00:43.512083 sshd[1743]: Connection closed by 10.0.0.1 port 56466
Jul 7 06:00:43.512551 sshd-session[1741]: pam_unix(sshd:session): session closed for user core
Jul 7 06:00:43.527342 systemd[1]: sshd@2-10.0.0.17:22-10.0.0.1:56466.service: Deactivated successfully.
Jul 7 06:00:43.530188 systemd[1]: session-3.scope: Deactivated successfully.
Jul 7 06:00:43.531204 systemd-logind[1584]: Session 3 logged out. Waiting for processes to exit.
Jul 7 06:00:43.535631 systemd[1]: Started sshd@3-10.0.0.17:22-10.0.0.1:56482.service - OpenSSH per-connection server daemon (10.0.0.1:56482).
Jul 7 06:00:43.536831 systemd-logind[1584]: Removed session 3.
Jul 7 06:00:43.588679 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 56482 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:00:43.591211 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:00:43.597705 systemd-logind[1584]: New session 4 of user core.
Jul 7 06:00:43.607089 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 7 06:00:43.665026 sshd[1753]: Connection closed by 10.0.0.1 port 56482
Jul 7 06:00:43.665798 sshd-session[1750]: pam_unix(sshd:session): session closed for user core
Jul 7 06:00:43.675805 systemd[1]: sshd@3-10.0.0.17:22-10.0.0.1:56482.service: Deactivated successfully.
Jul 7 06:00:43.678668 systemd[1]: session-4.scope: Deactivated successfully.
Jul 7 06:00:43.679647 systemd-logind[1584]: Session 4 logged out. Waiting for processes to exit.
Jul 7 06:00:43.683224 systemd[1]: Started sshd@4-10.0.0.17:22-10.0.0.1:56488.service - OpenSSH per-connection server daemon (10.0.0.1:56488).
Jul 7 06:00:43.684673 systemd-logind[1584]: Removed session 4.
Jul 7 06:00:43.742118 kubelet[1718]: E0707 06:00:43.742022 1718 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 06:00:43.742561 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 56488 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:00:43.744590 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:00:43.747830 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 06:00:43.748163 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 06:00:43.748667 systemd[1]: kubelet.service: Consumed 2.402s CPU time, 266M memory peak.
Jul 7 06:00:43.752844 systemd-logind[1584]: New session 5 of user core.
Jul 7 06:00:43.771199 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 7 06:00:43.833692 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 7 06:00:43.834092 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:00:43.854123 sudo[1763]: pam_unix(sudo:session): session closed for user root
Jul 7 06:00:43.856123 sshd[1762]: Connection closed by 10.0.0.1 port 56488
Jul 7 06:00:43.856575 sshd-session[1759]: pam_unix(sshd:session): session closed for user core
Jul 7 06:00:43.870723 systemd[1]: sshd@4-10.0.0.17:22-10.0.0.1:56488.service: Deactivated successfully.
Jul 7 06:00:43.873001 systemd[1]: session-5.scope: Deactivated successfully.
Jul 7 06:00:43.873887 systemd-logind[1584]: Session 5 logged out. Waiting for processes to exit.
Jul 7 06:00:43.877773 systemd[1]: Started sshd@5-10.0.0.17:22-10.0.0.1:56502.service - OpenSSH per-connection server daemon (10.0.0.1:56502).
Jul 7 06:00:43.878686 systemd-logind[1584]: Removed session 5.
Jul 7 06:00:43.959162 sshd[1769]: Accepted publickey for core from 10.0.0.1 port 56502 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:00:43.961301 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:00:43.966825 systemd-logind[1584]: New session 6 of user core.
Jul 7 06:00:43.976108 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 7 06:00:44.033729 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 7 06:00:44.034126 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:00:44.071088 sudo[1773]: pam_unix(sudo:session): session closed for user root
Jul 7 06:00:44.078394 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 7 06:00:44.078730 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:00:44.091223 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 7 06:00:44.140937 augenrules[1795]: No rules
Jul 7 06:00:44.142653 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 06:00:44.142993 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 7 06:00:44.144249 sudo[1772]: pam_unix(sudo:session): session closed for user root
Jul 7 06:00:44.146000 sshd[1771]: Connection closed by 10.0.0.1 port 56502
Jul 7 06:00:44.146342 sshd-session[1769]: pam_unix(sshd:session): session closed for user core
Jul 7 06:00:44.154348 systemd[1]: sshd@5-10.0.0.17:22-10.0.0.1:56502.service: Deactivated successfully.
Jul 7 06:00:44.156065 systemd[1]: session-6.scope: Deactivated successfully.
Jul 7 06:00:44.156813 systemd-logind[1584]: Session 6 logged out. Waiting for processes to exit.
Jul 7 06:00:44.159382 systemd[1]: Started sshd@6-10.0.0.17:22-10.0.0.1:56504.service - OpenSSH per-connection server daemon (10.0.0.1:56504).
Jul 7 06:00:44.160177 systemd-logind[1584]: Removed session 6.
Jul 7 06:00:44.215009 sshd[1804]: Accepted publickey for core from 10.0.0.1 port 56504 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:00:44.217035 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:00:44.222292 systemd-logind[1584]: New session 7 of user core.
Jul 7 06:00:44.233062 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 7 06:00:44.286687 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 7 06:00:44.287027 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 06:00:45.006841 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 7 06:00:45.032295 (dockerd)[1828]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 7 06:00:45.599652 dockerd[1828]: time="2025-07-07T06:00:45.599567125Z" level=info msg="Starting up"
Jul 7 06:00:45.601003 dockerd[1828]: time="2025-07-07T06:00:45.600975877Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 7 06:00:46.083943 dockerd[1828]: time="2025-07-07T06:00:46.083800956Z" level=info msg="Loading containers: start."
Jul 7 06:00:46.095944 kernel: Initializing XFRM netlink socket
Jul 7 06:00:46.358496 systemd-networkd[1504]: docker0: Link UP
Jul 7 06:00:46.363379 dockerd[1828]: time="2025-07-07T06:00:46.363335573Z" level=info msg="Loading containers: done."
Jul 7 06:00:46.383530 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2460161779-merged.mount: Deactivated successfully.
Jul 7 06:00:46.385621 dockerd[1828]: time="2025-07-07T06:00:46.385573778Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 7 06:00:46.385700 dockerd[1828]: time="2025-07-07T06:00:46.385682612Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 7 06:00:46.385837 dockerd[1828]: time="2025-07-07T06:00:46.385816813Z" level=info msg="Initializing buildkit"
Jul 7 06:00:46.416196 dockerd[1828]: time="2025-07-07T06:00:46.416154838Z" level=info msg="Completed buildkit initialization"
Jul 7 06:00:46.422190 dockerd[1828]: time="2025-07-07T06:00:46.422141926Z" level=info msg="Daemon has completed initialization"
Jul 7 06:00:46.422294 dockerd[1828]: time="2025-07-07T06:00:46.422237545Z" level=info msg="API listen on /run/docker.sock"
Jul 7 06:00:46.422461 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 7 06:00:47.555315 containerd[1605]: time="2025-07-07T06:00:47.555266188Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 7 06:00:48.124979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1285020392.mount: Deactivated successfully.
Jul 7 06:00:50.344342 containerd[1605]: time="2025-07-07T06:00:50.344270277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:00:50.345124 containerd[1605]: time="2025-07-07T06:00:50.345062242Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744"
Jul 7 06:00:50.346792 containerd[1605]: time="2025-07-07T06:00:50.346762531Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:00:50.349706 containerd[1605]: time="2025-07-07T06:00:50.349676696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:00:50.350874 containerd[1605]: time="2025-07-07T06:00:50.350818928Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 2.795506713s"
Jul 7 06:00:50.350956 containerd[1605]: time="2025-07-07T06:00:50.350884972Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\""
Jul 7 06:00:50.351848 containerd[1605]: time="2025-07-07T06:00:50.351828371Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 7 06:00:51.801296 containerd[1605]: time="2025-07-07T06:00:51.801190242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:00:51.802280 containerd[1605]: time="2025-07-07T06:00:51.802191129Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294"
Jul 7 06:00:51.803672 containerd[1605]: time="2025-07-07T06:00:51.803605621Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:00:51.806894 containerd[1605]: time="2025-07-07T06:00:51.806862058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:00:51.807867 containerd[1605]: time="2025-07-07T06:00:51.807831927Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.455973871s"
Jul 7 06:00:51.807867 containerd[1605]: time="2025-07-07T06:00:51.807862595Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\""
Jul 7 06:00:51.808536 containerd[1605]: time="2025-07-07T06:00:51.808477758Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 7 06:00:53.293134 containerd[1605]: time="2025-07-07T06:00:53.293033220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:00:53.293797 containerd[1605]: time="2025-07-07T06:00:53.293766405Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671"
Jul 7 06:00:53.295690 containerd[1605]: time="2025-07-07T06:00:53.295636201Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:00:53.298886 containerd[1605]: time="2025-07-07T06:00:53.298823859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:00:53.299742 containerd[1605]: time="2025-07-07T06:00:53.299688140Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.491167692s"
Jul 7 06:00:53.299742 containerd[1605]: time="2025-07-07T06:00:53.299728295Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\""
Jul 7 06:00:53.300392 containerd[1605]: time="2025-07-07T06:00:53.300354580Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 7 06:00:53.999369 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 7 06:00:54.003187 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:00:54.462473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:00:54.483156 (kubelet)[2113]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 06:00:54.718062 kubelet[2113]: E0707 06:00:54.717744 2113 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 06:00:54.726043 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 06:00:54.726546 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 06:00:54.727149 systemd[1]: kubelet.service: Consumed 612ms CPU time, 110.5M memory peak.
Jul 7 06:00:54.771767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1276107959.mount: Deactivated successfully.
Jul 7 06:00:55.811460 containerd[1605]: time="2025-07-07T06:00:55.811364850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:00:55.812713 containerd[1605]: time="2025-07-07T06:00:55.812631606Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943"
Jul 7 06:00:55.814042 containerd[1605]: time="2025-07-07T06:00:55.813967281Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:00:55.816273 containerd[1605]: time="2025-07-07T06:00:55.816204286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:00:55.817180 containerd[1605]: time="2025-07-07T06:00:55.817102881Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 2.516709168s"
Jul 7 06:00:55.817180 containerd[1605]: time="2025-07-07T06:00:55.817142766Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\""
Jul 7 06:00:55.818162 containerd[1605]: time="2025-07-07T06:00:55.818060457Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 7 06:00:56.313106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1522831959.mount: Deactivated successfully.
Jul 7 06:00:57.872659 containerd[1605]: time="2025-07-07T06:00:57.872589392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:00:57.874153 containerd[1605]: time="2025-07-07T06:00:57.874108351Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Jul 7 06:00:57.877104 containerd[1605]: time="2025-07-07T06:00:57.877053284Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:00:57.879936 containerd[1605]: time="2025-07-07T06:00:57.879892288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:00:57.881006 containerd[1605]: time="2025-07-07T06:00:57.880956113Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.062781662s"
Jul 7 06:00:57.881006 containerd[1605]: time="2025-07-07T06:00:57.880999484Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jul 7 06:00:57.881618 containerd[1605]: time="2025-07-07T06:00:57.881571347Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 7 06:00:58.316689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1661130662.mount: Deactivated successfully.
Jul 7 06:00:58.327567 containerd[1605]: time="2025-07-07T06:00:58.327451732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 06:00:58.328473 containerd[1605]: time="2025-07-07T06:00:58.328435957Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jul 7 06:00:58.329672 containerd[1605]: time="2025-07-07T06:00:58.329633593Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 06:00:58.331863 containerd[1605]: time="2025-07-07T06:00:58.331795498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 06:00:58.332502 containerd[1605]: time="2025-07-07T06:00:58.332447611Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 450.845707ms"
Jul 7 06:00:58.332502 containerd[1605]: time="2025-07-07T06:00:58.332481123Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 7 06:00:58.333225 containerd[1605]: time="2025-07-07T06:00:58.333148605Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 7 06:00:58.873693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2247621603.mount: Deactivated successfully.
Jul 7 06:01:00.561717 containerd[1605]: time="2025-07-07T06:01:00.561644230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:01:00.562886 containerd[1605]: time="2025-07-07T06:01:00.562833150Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013"
Jul 7 06:01:00.564783 containerd[1605]: time="2025-07-07T06:01:00.564743222Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:01:00.575075 containerd[1605]: time="2025-07-07T06:01:00.575019223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:01:00.578175 containerd[1605]: time="2025-07-07T06:01:00.576803830Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.243596053s"
Jul 7 06:01:00.578175 containerd[1605]: time="2025-07-07T06:01:00.576847021Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jul 7 06:01:03.384043 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:01:03.384270 systemd[1]: kubelet.service: Consumed 612ms CPU time, 110.5M memory peak.
Jul 7 06:01:03.387260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:01:03.415472 systemd[1]: Reload requested from client PID 2264 ('systemctl') (unit session-7.scope)...
Jul 7 06:01:03.415489 systemd[1]: Reloading...
Jul 7 06:01:03.503034 zram_generator::config[2303]: No configuration found.
Jul 7 06:01:03.718451 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:01:03.846533 systemd[1]: Reloading finished in 430 ms.
Jul 7 06:01:03.925657 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 7 06:01:03.925766 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 7 06:01:03.926154 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:01:03.926207 systemd[1]: kubelet.service: Consumed 189ms CPU time, 98.2M memory peak.
Jul 7 06:01:03.928053 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:01:04.150359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:01:04.157947 (kubelet)[2354]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 7 06:01:04.197393 kubelet[2354]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 06:01:04.197393 kubelet[2354]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 7 06:01:04.197393 kubelet[2354]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 06:01:04.197835 kubelet[2354]: I0707 06:01:04.197480 2354 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 7 06:01:04.483404 kubelet[2354]: I0707 06:01:04.483235 2354 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 7 06:01:04.483404 kubelet[2354]: I0707 06:01:04.483277 2354 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 7 06:01:04.483598 kubelet[2354]: I0707 06:01:04.483571 2354 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 7 06:01:04.515869 kubelet[2354]: E0707 06:01:04.515806 2354 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError"
Jul 7 06:01:04.517197 kubelet[2354]: I0707 06:01:04.517123 2354 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 7 06:01:04.528877 kubelet[2354]: I0707 06:01:04.528832 2354 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 7 06:01:04.535547 kubelet[2354]: I0707 06:01:04.535503 2354 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 7 06:01:04.535633 kubelet[2354]: I0707 06:01:04.535619 2354 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 7 06:01:04.535832 kubelet[2354]: I0707 06:01:04.535763 2354 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 7 06:01:04.536070 kubelet[2354]: I0707 06:01:04.535818 2354 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 7 06:01:04.536228 kubelet[2354]: I0707 06:01:04.536076 2354 topology_manager.go:138] "Creating topology manager with none policy"
Jul 7 06:01:04.536228 kubelet[2354]: I0707 06:01:04.536086 2354 container_manager_linux.go:300] "Creating device plugin manager"
Jul 7 06:01:04.536228 kubelet[2354]: I0707 06:01:04.536217 2354 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 06:01:04.539146 kubelet[2354]: I0707 06:01:04.539109 2354 kubelet.go:408] "Attempting to sync node with API server"
Jul 7 06:01:04.539146 kubelet[2354]: I0707 06:01:04.539146 2354 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 7 06:01:04.539221 kubelet[2354]: I0707 06:01:04.539194 2354 kubelet.go:314] "Adding apiserver pod source"
Jul 7 06:01:04.539246 kubelet[2354]: I0707 06:01:04.539237 2354 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 7 06:01:04.541699 kubelet[2354]: W0707 06:01:04.541534 2354 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused
Jul 7 06:01:04.541699 kubelet[2354]: E0707 06:01:04.541628 2354 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError"
Jul 7 06:01:04.542563 kubelet[2354]: I0707 06:01:04.542492 2354 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 7 06:01:04.543254 kubelet[2354]: I0707 06:01:04.543229 2354 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 7 06:01:04.543395 kubelet[2354]: W0707 06:01:04.543368 2354 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 7 06:01:04.543424 kubelet[2354]: W0707 06:01:04.543282 2354 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused
Jul 7 06:01:04.543514 kubelet[2354]: E0707 06:01:04.543452 2354 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError"
Jul 7 06:01:04.545974 kubelet[2354]: I0707 06:01:04.545940 2354 server.go:1274] "Started kubelet"
Jul 7 06:01:04.546276 kubelet[2354]: I0707 06:01:04.546253 2354 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 7 06:01:04.546515 kubelet[2354]: I0707 06:01:04.546253 2354 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 7 06:01:04.546855 kubelet[2354]: I0707 06:01:04.546830 2354 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 7 06:01:04.547490 kubelet[2354]: I0707 06:01:04.547471 2354 server.go:449] "Adding debug handlers to kubelet server"
Jul 7 06:01:04.548631 kubelet[2354]: I0707 06:01:04.548597 2354 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 7 06:01:04.549357 kubelet[2354]: I0707 06:01:04.549327 2354 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 7 06:01:04.558583 kubelet[2354]: I0707 06:01:04.557816 2354 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 7 06:01:04.558583 kubelet[2354]: I0707 06:01:04.558246 2354 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 7 06:01:04.558583 kubelet[2354]: I0707 06:01:04.558474 2354 reconciler.go:26] "Reconciler: start to sync state"
Jul 7 06:01:04.558818 kubelet[2354]: I0707 06:01:04.558727 2354 factory.go:221] Registration of the systemd container factory successfully
Jul 7 06:01:04.559115 kubelet[2354]: I0707 06:01:04.559016 2354 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 7 06:01:04.559115 kubelet[2354]: E0707 06:01:04.559072 2354 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 7 06:01:04.559530 kubelet[2354]: W0707 06:01:04.559474 2354 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused
Jul 7 06:01:04.559610 kubelet[2354]: E0707 06:01:04.559581 2354 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 06:01:04.561410 kubelet[2354]: E0707 06:01:04.559669 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.17:6443: connect: connection refused" interval="200ms"
Jul 7 06:01:04.561410 kubelet[2354]: E0707 06:01:04.559679 2354 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError"
Jul 7 06:01:04.561410 kubelet[2354]: E0707 06:01:04.557220 2354 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.17:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.17:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fe2b9109bcbc4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-07 06:01:04.545885124 +0000 UTC m=+0.383695708,LastTimestamp:2025-07-07 06:01:04.545885124 +0000 UTC m=+0.383695708,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 7 06:01:04.561410 kubelet[2354]: I0707 06:01:04.561175 2354 factory.go:221] Registration of the containerd container factory successfully
Jul 7 06:01:04.576812 kubelet[2354]: I0707 06:01:04.576754 2354 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 7 06:01:04.578284 kubelet[2354]: I0707 06:01:04.578259 2354 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 7 06:01:04.578419 kubelet[2354]: I0707 06:01:04.578393 2354 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 7 06:01:04.578419 kubelet[2354]: I0707 06:01:04.578324 2354 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 7 06:01:04.578419 kubelet[2354]: I0707 06:01:04.578437 2354 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 7 06:01:04.578586 kubelet[2354]: I0707 06:01:04.578457 2354 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 06:01:04.578724 kubelet[2354]: I0707 06:01:04.578694 2354 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 7 06:01:04.578812 kubelet[2354]: E0707 06:01:04.578769 2354 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 7 06:01:04.580733 kubelet[2354]: W0707 06:01:04.580591 2354 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused
Jul 7 06:01:04.580733 kubelet[2354]: E0707 06:01:04.580664 2354 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError"
Jul 7 06:01:04.660056 kubelet[2354]: E0707 06:01:04.659988 2354 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 06:01:04.679554 kubelet[2354]: E0707 06:01:04.679514 2354 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 7 06:01:04.760594 kubelet[2354]: E0707 06:01:04.760125 2354 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 06:01:04.760594 kubelet[2354]: E0707 06:01:04.760522 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.17:6443: connect: connection refused" interval="400ms"
Jul 7 06:01:04.861082 kubelet[2354]: E0707 06:01:04.860989 2354 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 06:01:04.880255 kubelet[2354]: E0707 06:01:04.880181 2354 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 7 06:01:04.961941 kubelet[2354]: E0707 06:01:04.961838 2354 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 06:01:05.059399 kubelet[2354]: I0707 06:01:05.059330 2354 policy_none.go:49] "None policy: Start"
Jul 7 06:01:05.060360 kubelet[2354]: I0707 06:01:05.060329 2354 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 7 06:01:05.060433 kubelet[2354]: I0707 06:01:05.060366 2354 state_mem.go:35] "Initializing new in-memory state store"
Jul 7 06:01:05.062692 kubelet[2354]: E0707 06:01:05.062641 2354 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 06:01:05.075363 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 7 06:01:05.090257 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 7 06:01:05.094566 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 7 06:01:05.109046 kubelet[2354]: I0707 06:01:05.108591 2354 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 7 06:01:05.109046 kubelet[2354]: I0707 06:01:05.108978 2354 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 7 06:01:05.109046 kubelet[2354]: I0707 06:01:05.108997 2354 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 7 06:01:05.109483 kubelet[2354]: I0707 06:01:05.109372 2354 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 7 06:01:05.111064 kubelet[2354]: E0707 06:01:05.111023 2354 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 7 06:01:05.161794 kubelet[2354]: E0707 06:01:05.161527 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.17:6443: connect: connection refused" interval="800ms"
Jul 7 06:01:05.211251 kubelet[2354]: I0707 06:01:05.211202 2354 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 7 06:01:05.211790 kubelet[2354]: E0707 06:01:05.211711 2354 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.17:6443/api/v1/nodes\": dial tcp 10.0.0.17:6443: connect: connection refused" node="localhost"
Jul 7 06:01:05.291594 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice.
Jul 7 06:01:05.317257 systemd[1]: Created slice kubepods-burstable-pod2ce3f5fcf935689bbbdfa054b5077e10.slice - libcontainer container kubepods-burstable-pod2ce3f5fcf935689bbbdfa054b5077e10.slice.
Jul 7 06:01:05.332135 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice.
Jul 7 06:01:05.362735 kubelet[2354]: I0707 06:01:05.362662 2354 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ce3f5fcf935689bbbdfa054b5077e10-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2ce3f5fcf935689bbbdfa054b5077e10\") " pod="kube-system/kube-apiserver-localhost"
Jul 7 06:01:05.362735 kubelet[2354]: I0707 06:01:05.362729 2354 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:01:05.362735 kubelet[2354]: I0707 06:01:05.362757 2354 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:01:05.363046 kubelet[2354]: I0707 06:01:05.362792 2354 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:01:05.363046 kubelet[2354]: I0707 06:01:05.362811 2354 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ce3f5fcf935689bbbdfa054b5077e10-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2ce3f5fcf935689bbbdfa054b5077e10\") " pod="kube-system/kube-apiserver-localhost"
Jul 7 06:01:05.363046 kubelet[2354]: I0707 06:01:05.362825 2354 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ce3f5fcf935689bbbdfa054b5077e10-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2ce3f5fcf935689bbbdfa054b5077e10\") " pod="kube-system/kube-apiserver-localhost"
Jul 7 06:01:05.363046 kubelet[2354]: I0707 06:01:05.362839 2354 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:01:05.363046 kubelet[2354]: I0707 06:01:05.362858 2354 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:01:05.363160 kubelet[2354]: I0707 06:01:05.362877 2354 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost"
Jul 7 06:01:05.414267 kubelet[2354]: I0707 06:01:05.414226 2354 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 7 06:01:05.414726 kubelet[2354]: E0707 06:01:05.414668 2354 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.17:6443/api/v1/nodes\": dial tcp 10.0.0.17:6443: connect: connection refused" node="localhost"
Jul 7 06:01:05.616490 kubelet[2354]: E0707 06:01:05.616316 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:01:05.617307 containerd[1605]: time="2025-07-07T06:01:05.617266867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}"
Jul 7 06:01:05.630717 kubelet[2354]: E0707 06:01:05.630670 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:01:05.631283 containerd[1605]: time="2025-07-07T06:01:05.631229452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2ce3f5fcf935689bbbdfa054b5077e10,Namespace:kube-system,Attempt:0,}"
Jul 7 06:01:05.634831 kubelet[2354]: E0707 06:01:05.634777 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:01:05.635331 containerd[1605]: time="2025-07-07T06:01:05.635287041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}"
Jul 7 06:01:05.750787 kubelet[2354]: W0707 06:01:05.750671 2354 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused
Jul 7 06:01:05.750787 kubelet[2354]: E0707 06:01:05.750790 2354 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError"
Jul 7 06:01:05.817124 kubelet[2354]: I0707 06:01:05.817078 2354 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 7 06:01:05.817531 kubelet[2354]: E0707 06:01:05.817480 2354 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.17:6443/api/v1/nodes\": dial tcp 10.0.0.17:6443: connect: connection refused" node="localhost"
Jul 7 06:01:05.821879 kubelet[2354]: W0707 06:01:05.821816 2354 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused
Jul 7 06:01:05.821879 kubelet[2354]: E0707 06:01:05.821855 2354 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError"
Jul 7 06:01:05.886228 kubelet[2354]: W0707 06:01:05.886085 2354
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused Jul 7 06:01:05.886228 kubelet[2354]: E0707 06:01:05.886148 2354 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:01:05.963291 kubelet[2354]: E0707 06:01:05.963232 2354 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.17:6443: connect: connection refused" interval="1.6s" Jul 7 06:01:06.018344 kubelet[2354]: W0707 06:01:06.018251 2354 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.17:6443: connect: connection refused Jul 7 06:01:06.018344 kubelet[2354]: E0707 06:01:06.018343 2354 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.17:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:01:06.071778 containerd[1605]: time="2025-07-07T06:01:06.070482658Z" level=info msg="connecting to shim 6beea5c6dbc8525591ef97d48e6c266cad030e3d85cec82c54f6bc0171a89b8b" address="unix:///run/containerd/s/bb6120d30900f321a2b604e414466380f94c43fca0bcbf2f988d2f427b1fd52d" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:01:06.080042 
containerd[1605]: time="2025-07-07T06:01:06.079968858Z" level=info msg="connecting to shim f85d8c3d98ecaa8d696366cfa09c29368077114aeb088b304d5a5dbed1dfeb33" address="unix:///run/containerd/s/0d59d9c5b11be49270a76788507f3b9ccafff34e93f50e438cbcfa60aef3e843" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:01:06.081980 containerd[1605]: time="2025-07-07T06:01:06.081943792Z" level=info msg="connecting to shim 5688556bea901a59f3f5f370556847e1eba79b1f5528d69aba3d02c5fd125a51" address="unix:///run/containerd/s/e1e19c08571cceb0cd2ae03e9aa085ca38fb3d128e316652a3d66d55a8871f4c" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:01:06.144535 systemd[1]: Started cri-containerd-6beea5c6dbc8525591ef97d48e6c266cad030e3d85cec82c54f6bc0171a89b8b.scope - libcontainer container 6beea5c6dbc8525591ef97d48e6c266cad030e3d85cec82c54f6bc0171a89b8b. Jul 7 06:01:06.147276 systemd[1]: Started cri-containerd-f85d8c3d98ecaa8d696366cfa09c29368077114aeb088b304d5a5dbed1dfeb33.scope - libcontainer container f85d8c3d98ecaa8d696366cfa09c29368077114aeb088b304d5a5dbed1dfeb33. Jul 7 06:01:06.210374 systemd[1]: Started cri-containerd-5688556bea901a59f3f5f370556847e1eba79b1f5528d69aba3d02c5fd125a51.scope - libcontainer container 5688556bea901a59f3f5f370556847e1eba79b1f5528d69aba3d02c5fd125a51. 
Jul 7 06:01:06.302133 containerd[1605]: time="2025-07-07T06:01:06.302068424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2ce3f5fcf935689bbbdfa054b5077e10,Namespace:kube-system,Attempt:0,} returns sandbox id \"5688556bea901a59f3f5f370556847e1eba79b1f5528d69aba3d02c5fd125a51\"" Jul 7 06:01:06.303170 kubelet[2354]: E0707 06:01:06.303141 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:06.304295 containerd[1605]: time="2025-07-07T06:01:06.304258131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"f85d8c3d98ecaa8d696366cfa09c29368077114aeb088b304d5a5dbed1dfeb33\"" Jul 7 06:01:06.304889 kubelet[2354]: E0707 06:01:06.304834 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:06.308670 containerd[1605]: time="2025-07-07T06:01:06.308617817Z" level=info msg="CreateContainer within sandbox \"5688556bea901a59f3f5f370556847e1eba79b1f5528d69aba3d02c5fd125a51\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 06:01:06.308816 containerd[1605]: time="2025-07-07T06:01:06.308787776Z" level=info msg="CreateContainer within sandbox \"f85d8c3d98ecaa8d696366cfa09c29368077114aeb088b304d5a5dbed1dfeb33\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 06:01:06.322620 containerd[1605]: time="2025-07-07T06:01:06.322534315Z" level=info msg="Container 38751390557f315bc63b8c86b1f5412d3b6d8a74a6d190e188d0953f90880f9b: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:01:06.325505 containerd[1605]: time="2025-07-07T06:01:06.325439554Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"6beea5c6dbc8525591ef97d48e6c266cad030e3d85cec82c54f6bc0171a89b8b\"" Jul 7 06:01:06.326747 kubelet[2354]: E0707 06:01:06.326359 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:06.328426 containerd[1605]: time="2025-07-07T06:01:06.328400647Z" level=info msg="CreateContainer within sandbox \"6beea5c6dbc8525591ef97d48e6c266cad030e3d85cec82c54f6bc0171a89b8b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 06:01:06.337531 containerd[1605]: time="2025-07-07T06:01:06.337472419Z" level=info msg="Container cf0870b7b86d8eefac876099499f77c578ff0aec5e5e0799e81aa6eecccb1b8d: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:01:06.344308 containerd[1605]: time="2025-07-07T06:01:06.344250090Z" level=info msg="CreateContainer within sandbox \"f85d8c3d98ecaa8d696366cfa09c29368077114aeb088b304d5a5dbed1dfeb33\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"38751390557f315bc63b8c86b1f5412d3b6d8a74a6d190e188d0953f90880f9b\"" Jul 7 06:01:06.345114 containerd[1605]: time="2025-07-07T06:01:06.345072593Z" level=info msg="StartContainer for \"38751390557f315bc63b8c86b1f5412d3b6d8a74a6d190e188d0953f90880f9b\"" Jul 7 06:01:06.346561 containerd[1605]: time="2025-07-07T06:01:06.346515549Z" level=info msg="connecting to shim 38751390557f315bc63b8c86b1f5412d3b6d8a74a6d190e188d0953f90880f9b" address="unix:///run/containerd/s/0d59d9c5b11be49270a76788507f3b9ccafff34e93f50e438cbcfa60aef3e843" protocol=ttrpc version=3 Jul 7 06:01:06.350886 containerd[1605]: time="2025-07-07T06:01:06.349957884Z" level=info msg="CreateContainer within sandbox \"5688556bea901a59f3f5f370556847e1eba79b1f5528d69aba3d02c5fd125a51\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cf0870b7b86d8eefac876099499f77c578ff0aec5e5e0799e81aa6eecccb1b8d\"" Jul 7 06:01:06.350886 containerd[1605]: time="2025-07-07T06:01:06.350299014Z" level=info msg="Container 75083685692c11d49cef2285e11ae231da1d6a09f80bebb535b5be836b08af70: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:01:06.351981 containerd[1605]: time="2025-07-07T06:01:06.351941494Z" level=info msg="StartContainer for \"cf0870b7b86d8eefac876099499f77c578ff0aec5e5e0799e81aa6eecccb1b8d\"" Jul 7 06:01:06.353415 containerd[1605]: time="2025-07-07T06:01:06.353367899Z" level=info msg="connecting to shim cf0870b7b86d8eefac876099499f77c578ff0aec5e5e0799e81aa6eecccb1b8d" address="unix:///run/containerd/s/e1e19c08571cceb0cd2ae03e9aa085ca38fb3d128e316652a3d66d55a8871f4c" protocol=ttrpc version=3 Jul 7 06:01:06.363077 containerd[1605]: time="2025-07-07T06:01:06.363031402Z" level=info msg="CreateContainer within sandbox \"6beea5c6dbc8525591ef97d48e6c266cad030e3d85cec82c54f6bc0171a89b8b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"75083685692c11d49cef2285e11ae231da1d6a09f80bebb535b5be836b08af70\"" Jul 7 06:01:06.363730 containerd[1605]: time="2025-07-07T06:01:06.363688103Z" level=info msg="StartContainer for \"75083685692c11d49cef2285e11ae231da1d6a09f80bebb535b5be836b08af70\"" Jul 7 06:01:06.365477 containerd[1605]: time="2025-07-07T06:01:06.365402138Z" level=info msg="connecting to shim 75083685692c11d49cef2285e11ae231da1d6a09f80bebb535b5be836b08af70" address="unix:///run/containerd/s/bb6120d30900f321a2b604e414466380f94c43fca0bcbf2f988d2f427b1fd52d" protocol=ttrpc version=3 Jul 7 06:01:06.371842 systemd[1]: Started cri-containerd-38751390557f315bc63b8c86b1f5412d3b6d8a74a6d190e188d0953f90880f9b.scope - libcontainer container 38751390557f315bc63b8c86b1f5412d3b6d8a74a6d190e188d0953f90880f9b. 
Jul 7 06:01:06.383122 systemd[1]: Started cri-containerd-cf0870b7b86d8eefac876099499f77c578ff0aec5e5e0799e81aa6eecccb1b8d.scope - libcontainer container cf0870b7b86d8eefac876099499f77c578ff0aec5e5e0799e81aa6eecccb1b8d. Jul 7 06:01:06.407115 systemd[1]: Started cri-containerd-75083685692c11d49cef2285e11ae231da1d6a09f80bebb535b5be836b08af70.scope - libcontainer container 75083685692c11d49cef2285e11ae231da1d6a09f80bebb535b5be836b08af70. Jul 7 06:01:06.446160 containerd[1605]: time="2025-07-07T06:01:06.446094044Z" level=info msg="StartContainer for \"38751390557f315bc63b8c86b1f5412d3b6d8a74a6d190e188d0953f90880f9b\" returns successfully" Jul 7 06:01:06.466375 containerd[1605]: time="2025-07-07T06:01:06.466303794Z" level=info msg="StartContainer for \"cf0870b7b86d8eefac876099499f77c578ff0aec5e5e0799e81aa6eecccb1b8d\" returns successfully" Jul 7 06:01:06.530183 containerd[1605]: time="2025-07-07T06:01:06.530120804Z" level=info msg="StartContainer for \"75083685692c11d49cef2285e11ae231da1d6a09f80bebb535b5be836b08af70\" returns successfully" Jul 7 06:01:06.595859 kubelet[2354]: E0707 06:01:06.595804 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:06.597492 kubelet[2354]: E0707 06:01:06.597462 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:06.601152 kubelet[2354]: E0707 06:01:06.601102 2354 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:06.619953 kubelet[2354]: I0707 06:01:06.619893 2354 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 06:01:07.603236 kubelet[2354]: E0707 06:01:07.603191 2354 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:08.117367 kubelet[2354]: E0707 06:01:08.117309 2354 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 7 06:01:08.243952 kubelet[2354]: E0707 06:01:08.243715 2354 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.184fe2b9109bcbc4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-07 06:01:04.545885124 +0000 UTC m=+0.383695708,LastTimestamp:2025-07-07 06:01:04.545885124 +0000 UTC m=+0.383695708,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 7 06:01:08.299421 kubelet[2354]: I0707 06:01:08.299347 2354 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 7 06:01:08.299421 kubelet[2354]: E0707 06:01:08.299396 2354 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 7 06:01:08.542193 kubelet[2354]: I0707 06:01:08.542102 2354 apiserver.go:52] "Watching apiserver" Jul 7 06:01:08.558543 kubelet[2354]: I0707 06:01:08.558463 2354 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 06:01:11.111691 systemd[1]: Reload requested from client PID 2629 ('systemctl') (unit session-7.scope)... Jul 7 06:01:11.111714 systemd[1]: Reloading... Jul 7 06:01:11.216977 zram_generator::config[2672]: No configuration found. 
Jul 7 06:01:11.409627 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:01:11.558734 systemd[1]: Reloading finished in 446 ms. Jul 7 06:01:11.585830 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:01:11.611005 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 06:01:11.611393 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:01:11.611456 systemd[1]: kubelet.service: Consumed 1.007s CPU time, 130M memory peak. Jul 7 06:01:11.613797 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:01:11.894218 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:01:11.910462 (kubelet)[2717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:01:12.018678 kubelet[2717]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:01:12.018678 kubelet[2717]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 06:01:12.018678 kubelet[2717]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 7 06:01:12.019214 kubelet[2717]: I0707 06:01:12.018721 2717 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:01:12.026577 kubelet[2717]: I0707 06:01:12.026510 2717 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 06:01:12.026577 kubelet[2717]: I0707 06:01:12.026550 2717 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:01:12.026856 kubelet[2717]: I0707 06:01:12.026830 2717 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 06:01:12.028665 kubelet[2717]: I0707 06:01:12.028631 2717 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 7 06:01:12.030680 kubelet[2717]: I0707 06:01:12.030590 2717 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:01:12.044945 kubelet[2717]: I0707 06:01:12.043563 2717 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 06:01:12.049629 kubelet[2717]: I0707 06:01:12.049581 2717 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 06:01:12.050169 kubelet[2717]: I0707 06:01:12.050131 2717 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 06:01:12.050759 kubelet[2717]: I0707 06:01:12.050687 2717 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:01:12.050985 kubelet[2717]: I0707 06:01:12.050738 2717 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} Jul 7 06:01:12.051102 kubelet[2717]: I0707 06:01:12.050995 2717 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:01:12.051102 kubelet[2717]: I0707 06:01:12.051006 2717 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 06:01:12.051102 kubelet[2717]: I0707 06:01:12.051038 2717 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:01:12.051200 kubelet[2717]: I0707 06:01:12.051157 2717 kubelet.go:408] "Attempting to sync node with API server" Jul 7 06:01:12.051200 kubelet[2717]: I0707 06:01:12.051170 2717 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:01:12.051254 kubelet[2717]: I0707 06:01:12.051205 2717 kubelet.go:314] "Adding apiserver pod source" Jul 7 06:01:12.051254 kubelet[2717]: I0707 06:01:12.051217 2717 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:01:12.052998 kubelet[2717]: I0707 06:01:12.052972 2717 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 06:01:12.053380 kubelet[2717]: I0707 06:01:12.053361 2717 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 06:01:12.053846 kubelet[2717]: I0707 06:01:12.053814 2717 server.go:1274] "Started kubelet" Jul 7 06:01:12.055210 kubelet[2717]: I0707 06:01:12.055165 2717 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:01:12.057505 kubelet[2717]: I0707 06:01:12.057426 2717 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:01:12.058883 kubelet[2717]: I0707 06:01:12.058836 2717 server.go:449] "Adding debug handlers to kubelet server" Jul 7 06:01:12.059945 kubelet[2717]: I0707 06:01:12.059685 2717 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:01:12.061460 kubelet[2717]: I0707 
06:01:12.060893 2717 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:01:12.061460 kubelet[2717]: I0707 06:01:12.061318 2717 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:01:12.062889 kubelet[2717]: I0707 06:01:12.062869 2717 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 06:01:12.063026 kubelet[2717]: I0707 06:01:12.063013 2717 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 06:01:12.063203 kubelet[2717]: I0707 06:01:12.063170 2717 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:01:12.064799 kubelet[2717]: I0707 06:01:12.064618 2717 factory.go:221] Registration of the systemd container factory successfully Jul 7 06:01:12.064799 kubelet[2717]: I0707 06:01:12.064797 2717 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:01:12.067425 kubelet[2717]: E0707 06:01:12.067255 2717 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:01:12.069187 kubelet[2717]: I0707 06:01:12.069154 2717 factory.go:221] Registration of the containerd container factory successfully Jul 7 06:01:12.082967 kubelet[2717]: I0707 06:01:12.082756 2717 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 06:01:12.084408 kubelet[2717]: I0707 06:01:12.084381 2717 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 7 06:01:12.084797 kubelet[2717]: I0707 06:01:12.084639 2717 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 06:01:12.084797 kubelet[2717]: I0707 06:01:12.084687 2717 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 06:01:12.084797 kubelet[2717]: E0707 06:01:12.084742 2717 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:01:12.120558 kubelet[2717]: I0707 06:01:12.120505 2717 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 06:01:12.120558 kubelet[2717]: I0707 06:01:12.120531 2717 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 06:01:12.120558 kubelet[2717]: I0707 06:01:12.120552 2717 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:01:12.120757 kubelet[2717]: I0707 06:01:12.120717 2717 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 06:01:12.120757 kubelet[2717]: I0707 06:01:12.120728 2717 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 06:01:12.120757 kubelet[2717]: I0707 06:01:12.120747 2717 policy_none.go:49] "None policy: Start" Jul 7 06:01:12.121487 kubelet[2717]: I0707 06:01:12.121417 2717 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 06:01:12.121487 kubelet[2717]: I0707 06:01:12.121441 2717 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:01:12.121791 kubelet[2717]: I0707 06:01:12.121708 2717 state_mem.go:75] "Updated machine memory state" Jul 7 06:01:12.129379 kubelet[2717]: I0707 06:01:12.129209 2717 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 06:01:12.129890 kubelet[2717]: I0707 06:01:12.129853 2717 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:01:12.129995 kubelet[2717]: I0707 06:01:12.129875 2717 container_log_manager.go:189] "Initializing container 
log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:01:12.130278 kubelet[2717]: I0707 06:01:12.130152 2717 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:01:12.244879 kubelet[2717]: I0707 06:01:12.244678 2717 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 06:01:12.254598 kubelet[2717]: I0707 06:01:12.254549 2717 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 7 06:01:12.254748 kubelet[2717]: I0707 06:01:12.254671 2717 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 7 06:01:12.365023 kubelet[2717]: I0707 06:01:12.364954 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:01:12.365023 kubelet[2717]: I0707 06:01:12.365012 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:01:12.365277 kubelet[2717]: I0707 06:01:12.365093 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 7 06:01:12.365277 kubelet[2717]: I0707 06:01:12.365204 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:01:12.365353 kubelet[2717]: I0707 06:01:12.365290 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:01:12.365353 kubelet[2717]: I0707 06:01:12.365315 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:01:12.365353 kubelet[2717]: I0707 06:01:12.365343 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ce3f5fcf935689bbbdfa054b5077e10-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2ce3f5fcf935689bbbdfa054b5077e10\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:01:12.365449 kubelet[2717]: I0707 06:01:12.365360 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ce3f5fcf935689bbbdfa054b5077e10-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2ce3f5fcf935689bbbdfa054b5077e10\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:01:12.365449 kubelet[2717]: I0707 06:01:12.365378 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/2ce3f5fcf935689bbbdfa054b5077e10-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2ce3f5fcf935689bbbdfa054b5077e10\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:01:12.501520 kubelet[2717]: E0707 06:01:12.501183 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:12.501520 kubelet[2717]: E0707 06:01:12.501183 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:12.503252 kubelet[2717]: E0707 06:01:12.503228 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:13.052111 kubelet[2717]: I0707 06:01:13.052040 2717 apiserver.go:52] "Watching apiserver" Jul 7 06:01:13.063582 kubelet[2717]: I0707 06:01:13.063531 2717 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 06:01:13.104426 kubelet[2717]: E0707 06:01:13.103956 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:13.104697 kubelet[2717]: E0707 06:01:13.104655 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:13.109952 kubelet[2717]: E0707 06:01:13.109085 2717 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 7 06:01:13.109952 kubelet[2717]: E0707 06:01:13.109327 2717 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:13.126254 kubelet[2717]: I0707 06:01:13.126174 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.126148703 podStartE2EDuration="1.126148703s" podCreationTimestamp="2025-07-07 06:01:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:01:13.125521568 +0000 UTC m=+1.206769578" watchObservedRunningTime="2025-07-07 06:01:13.126148703 +0000 UTC m=+1.207396703" Jul 7 06:01:13.135014 kubelet[2717]: I0707 06:01:13.134908 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.134889526 podStartE2EDuration="1.134889526s" podCreationTimestamp="2025-07-07 06:01:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:01:13.134759627 +0000 UTC m=+1.216007627" watchObservedRunningTime="2025-07-07 06:01:13.134889526 +0000 UTC m=+1.216137526" Jul 7 06:01:13.159879 kubelet[2717]: I0707 06:01:13.159806 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.159781713 podStartE2EDuration="1.159781713s" podCreationTimestamp="2025-07-07 06:01:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:01:13.14561331 +0000 UTC m=+1.226861310" watchObservedRunningTime="2025-07-07 06:01:13.159781713 +0000 UTC m=+1.241029713" Jul 7 06:01:14.105801 kubelet[2717]: E0707 06:01:14.105749 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:14.106356 kubelet[2717]: E0707 06:01:14.105811 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:15.107493 kubelet[2717]: E0707 06:01:15.107435 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:16.678584 kubelet[2717]: I0707 06:01:16.678549 2717 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 06:01:16.679456 kubelet[2717]: I0707 06:01:16.679114 2717 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 06:01:16.679508 containerd[1605]: time="2025-07-07T06:01:16.678822287Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 7 06:01:16.962344 kubelet[2717]: E0707 06:01:16.962193 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:17.111178 kubelet[2717]: E0707 06:01:17.111136 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:17.841409 systemd[1]: Created slice kubepods-besteffort-podc1081ff0_d064_4759_9f06_44d5f40f77a8.slice - libcontainer container kubepods-besteffort-podc1081ff0_d064_4759_9f06_44d5f40f77a8.slice. Jul 7 06:01:17.859674 systemd[1]: Created slice kubepods-besteffort-pod1b586c50_39e8_4924_8666_99f084dc8989.slice - libcontainer container kubepods-besteffort-pod1b586c50_39e8_4924_8666_99f084dc8989.slice. 
Jul 7 06:01:18.004898 kubelet[2717]: I0707 06:01:18.004764 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b586c50-39e8-4924-8666-99f084dc8989-lib-modules\") pod \"kube-proxy-h7xfj\" (UID: \"1b586c50-39e8-4924-8666-99f084dc8989\") " pod="kube-system/kube-proxy-h7xfj" Jul 7 06:01:18.004898 kubelet[2717]: I0707 06:01:18.004853 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnvl8\" (UniqueName: \"kubernetes.io/projected/c1081ff0-d064-4759-9f06-44d5f40f77a8-kube-api-access-lnvl8\") pod \"tigera-operator-5bf8dfcb4-cpqxr\" (UID: \"c1081ff0-d064-4759-9f06-44d5f40f77a8\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-cpqxr" Jul 7 06:01:18.004898 kubelet[2717]: I0707 06:01:18.004885 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htmfp\" (UniqueName: \"kubernetes.io/projected/1b586c50-39e8-4924-8666-99f084dc8989-kube-api-access-htmfp\") pod \"kube-proxy-h7xfj\" (UID: \"1b586c50-39e8-4924-8666-99f084dc8989\") " pod="kube-system/kube-proxy-h7xfj" Jul 7 06:01:18.005567 kubelet[2717]: I0707 06:01:18.004907 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c1081ff0-d064-4759-9f06-44d5f40f77a8-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-cpqxr\" (UID: \"c1081ff0-d064-4759-9f06-44d5f40f77a8\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-cpqxr" Jul 7 06:01:18.005567 kubelet[2717]: I0707 06:01:18.005050 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b586c50-39e8-4924-8666-99f084dc8989-xtables-lock\") pod \"kube-proxy-h7xfj\" (UID: \"1b586c50-39e8-4924-8666-99f084dc8989\") " pod="kube-system/kube-proxy-h7xfj" Jul 
7 06:01:18.005567 kubelet[2717]: I0707 06:01:18.005103 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1b586c50-39e8-4924-8666-99f084dc8989-kube-proxy\") pod \"kube-proxy-h7xfj\" (UID: \"1b586c50-39e8-4924-8666-99f084dc8989\") " pod="kube-system/kube-proxy-h7xfj" Jul 7 06:01:18.155311 containerd[1605]: time="2025-07-07T06:01:18.155153833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-cpqxr,Uid:c1081ff0-d064-4759-9f06-44d5f40f77a8,Namespace:tigera-operator,Attempt:0,}" Jul 7 06:01:18.163096 kubelet[2717]: E0707 06:01:18.163069 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:18.163575 containerd[1605]: time="2025-07-07T06:01:18.163460608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h7xfj,Uid:1b586c50-39e8-4924-8666-99f084dc8989,Namespace:kube-system,Attempt:0,}" Jul 7 06:01:18.187723 containerd[1605]: time="2025-07-07T06:01:18.187653855Z" level=info msg="connecting to shim 63e03ff6e7800c698e5a6ae4a8f6c3e8fc71062add38b829b9a28b1f8fa9e3db" address="unix:///run/containerd/s/da507c009fa4c7a7124b12ed6d92fa6b1d96bab9e0d8931103f90462aaf50063" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:01:18.207743 containerd[1605]: time="2025-07-07T06:01:18.207669788Z" level=info msg="connecting to shim 4798e3da3c87380c7744bcf6dda5c64d49ef9043547181f452a45fa1aa4019b1" address="unix:///run/containerd/s/2932482a25b3d940eed839ee7001876cd4526127548832679d4fd030b2001f28" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:01:18.223124 systemd[1]: Started cri-containerd-63e03ff6e7800c698e5a6ae4a8f6c3e8fc71062add38b829b9a28b1f8fa9e3db.scope - libcontainer container 63e03ff6e7800c698e5a6ae4a8f6c3e8fc71062add38b829b9a28b1f8fa9e3db. 
Jul 7 06:01:18.235474 systemd[1]: Started cri-containerd-4798e3da3c87380c7744bcf6dda5c64d49ef9043547181f452a45fa1aa4019b1.scope - libcontainer container 4798e3da3c87380c7744bcf6dda5c64d49ef9043547181f452a45fa1aa4019b1. Jul 7 06:01:18.269367 containerd[1605]: time="2025-07-07T06:01:18.269315558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h7xfj,Uid:1b586c50-39e8-4924-8666-99f084dc8989,Namespace:kube-system,Attempt:0,} returns sandbox id \"4798e3da3c87380c7744bcf6dda5c64d49ef9043547181f452a45fa1aa4019b1\"" Jul 7 06:01:18.270206 kubelet[2717]: E0707 06:01:18.270158 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:18.276346 containerd[1605]: time="2025-07-07T06:01:18.276305810Z" level=info msg="CreateContainer within sandbox \"4798e3da3c87380c7744bcf6dda5c64d49ef9043547181f452a45fa1aa4019b1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 06:01:18.286895 containerd[1605]: time="2025-07-07T06:01:18.286852681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-cpqxr,Uid:c1081ff0-d064-4759-9f06-44d5f40f77a8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"63e03ff6e7800c698e5a6ae4a8f6c3e8fc71062add38b829b9a28b1f8fa9e3db\"" Jul 7 06:01:18.288615 containerd[1605]: time="2025-07-07T06:01:18.288578905Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 7 06:01:18.293122 containerd[1605]: time="2025-07-07T06:01:18.293081278Z" level=info msg="Container 08e6b14c72948822b15ab464f27c1a61d1d72cf9a23974170fc19ac72b17c678: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:01:18.304028 containerd[1605]: time="2025-07-07T06:01:18.303976584Z" level=info msg="CreateContainer within sandbox \"4798e3da3c87380c7744bcf6dda5c64d49ef9043547181f452a45fa1aa4019b1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"08e6b14c72948822b15ab464f27c1a61d1d72cf9a23974170fc19ac72b17c678\"" Jul 7 06:01:18.304602 containerd[1605]: time="2025-07-07T06:01:18.304502337Z" level=info msg="StartContainer for \"08e6b14c72948822b15ab464f27c1a61d1d72cf9a23974170fc19ac72b17c678\"" Jul 7 06:01:18.306206 containerd[1605]: time="2025-07-07T06:01:18.306183916Z" level=info msg="connecting to shim 08e6b14c72948822b15ab464f27c1a61d1d72cf9a23974170fc19ac72b17c678" address="unix:///run/containerd/s/2932482a25b3d940eed839ee7001876cd4526127548832679d4fd030b2001f28" protocol=ttrpc version=3 Jul 7 06:01:18.326085 systemd[1]: Started cri-containerd-08e6b14c72948822b15ab464f27c1a61d1d72cf9a23974170fc19ac72b17c678.scope - libcontainer container 08e6b14c72948822b15ab464f27c1a61d1d72cf9a23974170fc19ac72b17c678. Jul 7 06:01:18.389941 containerd[1605]: time="2025-07-07T06:01:18.389678979Z" level=info msg="StartContainer for \"08e6b14c72948822b15ab464f27c1a61d1d72cf9a23974170fc19ac72b17c678\" returns successfully" Jul 7 06:01:19.118459 kubelet[2717]: E0707 06:01:19.118417 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:19.130223 kubelet[2717]: I0707 06:01:19.130152 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h7xfj" podStartSLOduration=2.130132759 podStartE2EDuration="2.130132759s" podCreationTimestamp="2025-07-07 06:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:01:19.129855171 +0000 UTC m=+7.211103171" watchObservedRunningTime="2025-07-07 06:01:19.130132759 +0000 UTC m=+7.211380760" Jul 7 06:01:19.524823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1122399877.mount: Deactivated successfully. 
Jul 7 06:01:20.249751 containerd[1605]: time="2025-07-07T06:01:20.249661733Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:20.250695 containerd[1605]: time="2025-07-07T06:01:20.250654584Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 7 06:01:20.251930 containerd[1605]: time="2025-07-07T06:01:20.251867515Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:20.257630 containerd[1605]: time="2025-07-07T06:01:20.257558304Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:20.258324 containerd[1605]: time="2025-07-07T06:01:20.258282693Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.969668551s" Jul 7 06:01:20.258324 containerd[1605]: time="2025-07-07T06:01:20.258324803Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 7 06:01:20.260907 containerd[1605]: time="2025-07-07T06:01:20.260858709Z" level=info msg="CreateContainer within sandbox \"63e03ff6e7800c698e5a6ae4a8f6c3e8fc71062add38b829b9a28b1f8fa9e3db\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 7 06:01:20.272223 containerd[1605]: time="2025-07-07T06:01:20.272120368Z" level=info msg="Container 
1fc450a8e323303f6ed619d38526a892b1476dce7ecee211655afa10aa92d9ee: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:01:20.280260 containerd[1605]: time="2025-07-07T06:01:20.280208874Z" level=info msg="CreateContainer within sandbox \"63e03ff6e7800c698e5a6ae4a8f6c3e8fc71062add38b829b9a28b1f8fa9e3db\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1fc450a8e323303f6ed619d38526a892b1476dce7ecee211655afa10aa92d9ee\"" Jul 7 06:01:20.280905 containerd[1605]: time="2025-07-07T06:01:20.280867108Z" level=info msg="StartContainer for \"1fc450a8e323303f6ed619d38526a892b1476dce7ecee211655afa10aa92d9ee\"" Jul 7 06:01:20.282056 containerd[1605]: time="2025-07-07T06:01:20.282017439Z" level=info msg="connecting to shim 1fc450a8e323303f6ed619d38526a892b1476dce7ecee211655afa10aa92d9ee" address="unix:///run/containerd/s/da507c009fa4c7a7124b12ed6d92fa6b1d96bab9e0d8931103f90462aaf50063" protocol=ttrpc version=3 Jul 7 06:01:20.346157 systemd[1]: Started cri-containerd-1fc450a8e323303f6ed619d38526a892b1476dce7ecee211655afa10aa92d9ee.scope - libcontainer container 1fc450a8e323303f6ed619d38526a892b1476dce7ecee211655afa10aa92d9ee. 
Jul 7 06:01:20.385868 containerd[1605]: time="2025-07-07T06:01:20.385810474Z" level=info msg="StartContainer for \"1fc450a8e323303f6ed619d38526a892b1476dce7ecee211655afa10aa92d9ee\" returns successfully" Jul 7 06:01:21.135314 kubelet[2717]: I0707 06:01:21.135193 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-cpqxr" podStartSLOduration=2.164021889 podStartE2EDuration="4.135165581s" podCreationTimestamp="2025-07-07 06:01:17 +0000 UTC" firstStartedPulling="2025-07-07 06:01:18.288195574 +0000 UTC m=+6.369443564" lastFinishedPulling="2025-07-07 06:01:20.259339245 +0000 UTC m=+8.340587256" observedRunningTime="2025-07-07 06:01:21.135119683 +0000 UTC m=+9.216367693" watchObservedRunningTime="2025-07-07 06:01:21.135165581 +0000 UTC m=+9.216413581" Jul 7 06:01:22.460092 systemd[1]: cri-containerd-1fc450a8e323303f6ed619d38526a892b1476dce7ecee211655afa10aa92d9ee.scope: Deactivated successfully. Jul 7 06:01:22.464392 containerd[1605]: time="2025-07-07T06:01:22.464250694Z" level=info msg="received exit event container_id:\"1fc450a8e323303f6ed619d38526a892b1476dce7ecee211655afa10aa92d9ee\" id:\"1fc450a8e323303f6ed619d38526a892b1476dce7ecee211655afa10aa92d9ee\" pid:3042 exit_status:1 exited_at:{seconds:1751868082 nanos:462439461}" Jul 7 06:01:22.464392 containerd[1605]: time="2025-07-07T06:01:22.464363288Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1fc450a8e323303f6ed619d38526a892b1476dce7ecee211655afa10aa92d9ee\" id:\"1fc450a8e323303f6ed619d38526a892b1476dce7ecee211655afa10aa92d9ee\" pid:3042 exit_status:1 exited_at:{seconds:1751868082 nanos:462439461}" Jul 7 06:01:22.495562 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fc450a8e323303f6ed619d38526a892b1476dce7ecee211655afa10aa92d9ee-rootfs.mount: Deactivated successfully. 
Jul 7 06:01:22.691120 kubelet[2717]: E0707 06:01:22.691059 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:23.130247 kubelet[2717]: I0707 06:01:23.130191 2717 scope.go:117] "RemoveContainer" containerID="1fc450a8e323303f6ed619d38526a892b1476dce7ecee211655afa10aa92d9ee" Jul 7 06:01:23.134440 containerd[1605]: time="2025-07-07T06:01:23.133845821Z" level=info msg="CreateContainer within sandbox \"63e03ff6e7800c698e5a6ae4a8f6c3e8fc71062add38b829b9a28b1f8fa9e3db\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jul 7 06:01:23.226707 containerd[1605]: time="2025-07-07T06:01:23.226600575Z" level=info msg="Container e84ee9ac966926ea475cb373bf0a442bb8d7bf78a72bbbe16d698599be54545a: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:01:23.248324 kubelet[2717]: E0707 06:01:23.248269 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:23.333514 containerd[1605]: time="2025-07-07T06:01:23.333440364Z" level=info msg="CreateContainer within sandbox \"63e03ff6e7800c698e5a6ae4a8f6c3e8fc71062add38b829b9a28b1f8fa9e3db\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"e84ee9ac966926ea475cb373bf0a442bb8d7bf78a72bbbe16d698599be54545a\"" Jul 7 06:01:23.334271 containerd[1605]: time="2025-07-07T06:01:23.334239732Z" level=info msg="StartContainer for \"e84ee9ac966926ea475cb373bf0a442bb8d7bf78a72bbbe16d698599be54545a\"" Jul 7 06:01:23.335379 containerd[1605]: time="2025-07-07T06:01:23.335355140Z" level=info msg="connecting to shim e84ee9ac966926ea475cb373bf0a442bb8d7bf78a72bbbe16d698599be54545a" address="unix:///run/containerd/s/da507c009fa4c7a7124b12ed6d92fa6b1d96bab9e0d8931103f90462aaf50063" protocol=ttrpc version=3 Jul 7 06:01:23.362119 systemd[1]: Started 
cri-containerd-e84ee9ac966926ea475cb373bf0a442bb8d7bf78a72bbbe16d698599be54545a.scope - libcontainer container e84ee9ac966926ea475cb373bf0a442bb8d7bf78a72bbbe16d698599be54545a. Jul 7 06:01:23.397363 containerd[1605]: time="2025-07-07T06:01:23.397216755Z" level=info msg="StartContainer for \"e84ee9ac966926ea475cb373bf0a442bb8d7bf78a72bbbe16d698599be54545a\" returns successfully" Jul 7 06:01:24.134908 kubelet[2717]: E0707 06:01:24.134854 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:24.799517 update_engine[1586]: I20250707 06:01:24.799396 1586 update_attempter.cc:509] Updating boot flags... Jul 7 06:01:26.297226 sudo[1807]: pam_unix(sudo:session): session closed for user root Jul 7 06:01:26.299579 sshd[1806]: Connection closed by 10.0.0.1 port 56504 Jul 7 06:01:26.312679 sshd-session[1804]: pam_unix(sshd:session): session closed for user core Jul 7 06:01:26.317474 systemd[1]: sshd@6-10.0.0.17:22-10.0.0.1:56504.service: Deactivated successfully. Jul 7 06:01:26.320464 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 06:01:26.320767 systemd[1]: session-7.scope: Consumed 5.550s CPU time, 225.5M memory peak. Jul 7 06:01:26.322432 systemd-logind[1584]: Session 7 logged out. Waiting for processes to exit. Jul 7 06:01:26.324126 systemd-logind[1584]: Removed session 7. Jul 7 06:01:31.020238 systemd[1]: Created slice kubepods-besteffort-poda1a6012d_0d48_49a1_9588_ddc368d5d8c1.slice - libcontainer container kubepods-besteffort-poda1a6012d_0d48_49a1_9588_ddc368d5d8c1.slice. 
Jul 7 06:01:31.084726 kubelet[2717]: I0707 06:01:31.084644 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a1a6012d-0d48-49a1-9588-ddc368d5d8c1-typha-certs\") pod \"calico-typha-7588df647b-9mhd8\" (UID: \"a1a6012d-0d48-49a1-9588-ddc368d5d8c1\") " pod="calico-system/calico-typha-7588df647b-9mhd8" Jul 7 06:01:31.084726 kubelet[2717]: I0707 06:01:31.084710 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1a6012d-0d48-49a1-9588-ddc368d5d8c1-tigera-ca-bundle\") pod \"calico-typha-7588df647b-9mhd8\" (UID: \"a1a6012d-0d48-49a1-9588-ddc368d5d8c1\") " pod="calico-system/calico-typha-7588df647b-9mhd8" Jul 7 06:01:31.084726 kubelet[2717]: I0707 06:01:31.084734 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q6c4\" (UniqueName: \"kubernetes.io/projected/a1a6012d-0d48-49a1-9588-ddc368d5d8c1-kube-api-access-9q6c4\") pod \"calico-typha-7588df647b-9mhd8\" (UID: \"a1a6012d-0d48-49a1-9588-ddc368d5d8c1\") " pod="calico-system/calico-typha-7588df647b-9mhd8" Jul 7 06:01:31.324607 kubelet[2717]: E0707 06:01:31.324559 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:31.325272 containerd[1605]: time="2025-07-07T06:01:31.325227422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7588df647b-9mhd8,Uid:a1a6012d-0d48-49a1-9588-ddc368d5d8c1,Namespace:calico-system,Attempt:0,}" Jul 7 06:01:31.379808 systemd[1]: Created slice kubepods-besteffort-podc48f67aa_df08_4b0d_8ecc_c3b7c31a690c.slice - libcontainer container kubepods-besteffort-podc48f67aa_df08_4b0d_8ecc_c3b7c31a690c.slice. 
Jul 7 06:01:31.388427 kubelet[2717]: I0707 06:01:31.388369 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c48f67aa-df08-4b0d-8ecc-c3b7c31a690c-flexvol-driver-host\") pod \"calico-node-69spl\" (UID: \"c48f67aa-df08-4b0d-8ecc-c3b7c31a690c\") " pod="calico-system/calico-node-69spl" Jul 7 06:01:31.389041 kubelet[2717]: I0707 06:01:31.388991 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c48f67aa-df08-4b0d-8ecc-c3b7c31a690c-cni-bin-dir\") pod \"calico-node-69spl\" (UID: \"c48f67aa-df08-4b0d-8ecc-c3b7c31a690c\") " pod="calico-system/calico-node-69spl" Jul 7 06:01:31.389304 kubelet[2717]: I0707 06:01:31.389244 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c48f67aa-df08-4b0d-8ecc-c3b7c31a690c-cni-log-dir\") pod \"calico-node-69spl\" (UID: \"c48f67aa-df08-4b0d-8ecc-c3b7c31a690c\") " pod="calico-system/calico-node-69spl" Jul 7 06:01:31.389632 kubelet[2717]: I0707 06:01:31.389591 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgrng\" (UniqueName: \"kubernetes.io/projected/c48f67aa-df08-4b0d-8ecc-c3b7c31a690c-kube-api-access-mgrng\") pod \"calico-node-69spl\" (UID: \"c48f67aa-df08-4b0d-8ecc-c3b7c31a690c\") " pod="calico-system/calico-node-69spl" Jul 7 06:01:31.390279 kubelet[2717]: I0707 06:01:31.390249 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c48f67aa-df08-4b0d-8ecc-c3b7c31a690c-var-lib-calico\") pod \"calico-node-69spl\" (UID: \"c48f67aa-df08-4b0d-8ecc-c3b7c31a690c\") " pod="calico-system/calico-node-69spl" Jul 7 06:01:31.390376 kubelet[2717]: I0707 
06:01:31.390361 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c48f67aa-df08-4b0d-8ecc-c3b7c31a690c-lib-modules\") pod \"calico-node-69spl\" (UID: \"c48f67aa-df08-4b0d-8ecc-c3b7c31a690c\") " pod="calico-system/calico-node-69spl" Jul 7 06:01:31.390444 kubelet[2717]: I0707 06:01:31.390433 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c48f67aa-df08-4b0d-8ecc-c3b7c31a690c-node-certs\") pod \"calico-node-69spl\" (UID: \"c48f67aa-df08-4b0d-8ecc-c3b7c31a690c\") " pod="calico-system/calico-node-69spl" Jul 7 06:01:31.390506 kubelet[2717]: I0707 06:01:31.390495 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c48f67aa-df08-4b0d-8ecc-c3b7c31a690c-policysync\") pod \"calico-node-69spl\" (UID: \"c48f67aa-df08-4b0d-8ecc-c3b7c31a690c\") " pod="calico-system/calico-node-69spl" Jul 7 06:01:31.390572 kubelet[2717]: I0707 06:01:31.390555 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c48f67aa-df08-4b0d-8ecc-c3b7c31a690c-var-run-calico\") pod \"calico-node-69spl\" (UID: \"c48f67aa-df08-4b0d-8ecc-c3b7c31a690c\") " pod="calico-system/calico-node-69spl" Jul 7 06:01:31.390645 kubelet[2717]: I0707 06:01:31.390632 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c48f67aa-df08-4b0d-8ecc-c3b7c31a690c-xtables-lock\") pod \"calico-node-69spl\" (UID: \"c48f67aa-df08-4b0d-8ecc-c3b7c31a690c\") " pod="calico-system/calico-node-69spl" Jul 7 06:01:31.390774 kubelet[2717]: I0707 06:01:31.390760 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c48f67aa-df08-4b0d-8ecc-c3b7c31a690c-tigera-ca-bundle\") pod \"calico-node-69spl\" (UID: \"c48f67aa-df08-4b0d-8ecc-c3b7c31a690c\") " pod="calico-system/calico-node-69spl" Jul 7 06:01:31.390861 kubelet[2717]: I0707 06:01:31.390842 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c48f67aa-df08-4b0d-8ecc-c3b7c31a690c-cni-net-dir\") pod \"calico-node-69spl\" (UID: \"c48f67aa-df08-4b0d-8ecc-c3b7c31a690c\") " pod="calico-system/calico-node-69spl" Jul 7 06:01:31.423043 containerd[1605]: time="2025-07-07T06:01:31.422958929Z" level=info msg="connecting to shim 622e365a45380f6d440f20a62546dd885fd4cb63542bdfb9c8b6a4e5e94fb571" address="unix:///run/containerd/s/750399779173452e6bb820915815ed01691c5d04d70b5ab701310a47420039bc" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:01:31.454134 systemd[1]: Started cri-containerd-622e365a45380f6d440f20a62546dd885fd4cb63542bdfb9c8b6a4e5e94fb571.scope - libcontainer container 622e365a45380f6d440f20a62546dd885fd4cb63542bdfb9c8b6a4e5e94fb571. 
Jul 7 06:01:31.514124 containerd[1605]: time="2025-07-07T06:01:31.514046461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7588df647b-9mhd8,Uid:a1a6012d-0d48-49a1-9588-ddc368d5d8c1,Namespace:calico-system,Attempt:0,} returns sandbox id \"622e365a45380f6d440f20a62546dd885fd4cb63542bdfb9c8b6a4e5e94fb571\"" Jul 7 06:01:31.520213 kubelet[2717]: E0707 06:01:31.520166 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:31.530817 containerd[1605]: time="2025-07-07T06:01:31.530765506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 7 06:01:31.556325 kubelet[2717]: E0707 06:01:31.556074 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q2hxc" podUID="76bbbc29-f4c2-467c-acfb-9338c934762b" Jul 7 06:01:31.588461 kubelet[2717]: E0707 06:01:31.588333 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.588461 kubelet[2717]: W0707 06:01:31.588363 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.588461 kubelet[2717]: E0707 06:01:31.588395 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.588669 kubelet[2717]: E0707 06:01:31.588636 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.588669 kubelet[2717]: W0707 06:01:31.588661 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.588742 kubelet[2717]: E0707 06:01:31.588672 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:31.588906 kubelet[2717]: E0707 06:01:31.588883 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.588906 kubelet[2717]: W0707 06:01:31.588894 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.589020 kubelet[2717]: E0707 06:01:31.588953 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.589199 kubelet[2717]: E0707 06:01:31.589161 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.589199 kubelet[2717]: W0707 06:01:31.589195 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.589199 kubelet[2717]: E0707 06:01:31.589207 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:31.589436 kubelet[2717]: E0707 06:01:31.589409 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.589436 kubelet[2717]: W0707 06:01:31.589423 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.589436 kubelet[2717]: E0707 06:01:31.589433 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.589623 kubelet[2717]: E0707 06:01:31.589602 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.589623 kubelet[2717]: W0707 06:01:31.589614 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.589675 kubelet[2717]: E0707 06:01:31.589624 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:31.589819 kubelet[2717]: E0707 06:01:31.589799 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.589819 kubelet[2717]: W0707 06:01:31.589812 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.589866 kubelet[2717]: E0707 06:01:31.589825 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.590057 kubelet[2717]: E0707 06:01:31.590039 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.590057 kubelet[2717]: W0707 06:01:31.590051 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.590123 kubelet[2717]: E0707 06:01:31.590062 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:31.590279 kubelet[2717]: E0707 06:01:31.590260 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.590279 kubelet[2717]: W0707 06:01:31.590274 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.590387 kubelet[2717]: E0707 06:01:31.590288 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.590502 kubelet[2717]: E0707 06:01:31.590474 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.590502 kubelet[2717]: W0707 06:01:31.590496 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.590502 kubelet[2717]: E0707 06:01:31.590505 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:31.590728 kubelet[2717]: E0707 06:01:31.590704 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.590728 kubelet[2717]: W0707 06:01:31.590721 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.590806 kubelet[2717]: E0707 06:01:31.590735 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.590965 kubelet[2717]: E0707 06:01:31.590946 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.590965 kubelet[2717]: W0707 06:01:31.590959 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.591067 kubelet[2717]: E0707 06:01:31.590973 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:31.591196 kubelet[2717]: E0707 06:01:31.591178 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.591196 kubelet[2717]: W0707 06:01:31.591191 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.591285 kubelet[2717]: E0707 06:01:31.591215 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.591430 kubelet[2717]: E0707 06:01:31.591390 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.591430 kubelet[2717]: W0707 06:01:31.591402 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.591430 kubelet[2717]: E0707 06:01:31.591414 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:31.591633 kubelet[2717]: E0707 06:01:31.591579 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.591633 kubelet[2717]: W0707 06:01:31.591590 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.591633 kubelet[2717]: E0707 06:01:31.591601 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.591817 kubelet[2717]: E0707 06:01:31.591810 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.591853 kubelet[2717]: W0707 06:01:31.591820 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.591853 kubelet[2717]: E0707 06:01:31.591832 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:31.592087 kubelet[2717]: E0707 06:01:31.592066 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.592087 kubelet[2717]: W0707 06:01:31.592078 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.592162 kubelet[2717]: E0707 06:01:31.592090 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.592292 kubelet[2717]: E0707 06:01:31.592273 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.592292 kubelet[2717]: W0707 06:01:31.592285 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.592388 kubelet[2717]: E0707 06:01:31.592297 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:31.592486 kubelet[2717]: E0707 06:01:31.592469 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.592486 kubelet[2717]: W0707 06:01:31.592481 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.592588 kubelet[2717]: E0707 06:01:31.592492 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.592697 kubelet[2717]: E0707 06:01:31.592665 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.592697 kubelet[2717]: W0707 06:01:31.592677 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.592781 kubelet[2717]: E0707 06:01:31.592701 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:31.592988 kubelet[2717]: E0707 06:01:31.592969 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.592988 kubelet[2717]: W0707 06:01:31.592981 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.593073 kubelet[2717]: E0707 06:01:31.592992 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.593073 kubelet[2717]: I0707 06:01:31.593023 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/76bbbc29-f4c2-467c-acfb-9338c934762b-registration-dir\") pod \"csi-node-driver-q2hxc\" (UID: \"76bbbc29-f4c2-467c-acfb-9338c934762b\") " pod="calico-system/csi-node-driver-q2hxc" Jul 7 06:01:31.593236 kubelet[2717]: E0707 06:01:31.593215 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.593236 kubelet[2717]: W0707 06:01:31.593228 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.593321 kubelet[2717]: E0707 06:01:31.593245 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.593321 kubelet[2717]: I0707 06:01:31.593263 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/76bbbc29-f4c2-467c-acfb-9338c934762b-socket-dir\") pod \"csi-node-driver-q2hxc\" (UID: \"76bbbc29-f4c2-467c-acfb-9338c934762b\") " pod="calico-system/csi-node-driver-q2hxc" Jul 7 06:01:31.593718 kubelet[2717]: E0707 06:01:31.593696 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.593718 kubelet[2717]: W0707 06:01:31.593711 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.593806 kubelet[2717]: E0707 06:01:31.593728 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.593806 kubelet[2717]: I0707 06:01:31.593756 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c7dc\" (UniqueName: \"kubernetes.io/projected/76bbbc29-f4c2-467c-acfb-9338c934762b-kube-api-access-8c7dc\") pod \"csi-node-driver-q2hxc\" (UID: \"76bbbc29-f4c2-467c-acfb-9338c934762b\") " pod="calico-system/csi-node-driver-q2hxc" Jul 7 06:01:31.594018 kubelet[2717]: E0707 06:01:31.593998 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.594018 kubelet[2717]: W0707 06:01:31.594011 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.594094 kubelet[2717]: E0707 06:01:31.594027 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.594094 kubelet[2717]: I0707 06:01:31.594046 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/76bbbc29-f4c2-467c-acfb-9338c934762b-varrun\") pod \"csi-node-driver-q2hxc\" (UID: \"76bbbc29-f4c2-467c-acfb-9338c934762b\") " pod="calico-system/csi-node-driver-q2hxc" Jul 7 06:01:31.594306 kubelet[2717]: E0707 06:01:31.594286 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.594306 kubelet[2717]: W0707 06:01:31.594300 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.594389 kubelet[2717]: E0707 06:01:31.594316 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.594389 kubelet[2717]: I0707 06:01:31.594335 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76bbbc29-f4c2-467c-acfb-9338c934762b-kubelet-dir\") pod \"csi-node-driver-q2hxc\" (UID: \"76bbbc29-f4c2-467c-acfb-9338c934762b\") " pod="calico-system/csi-node-driver-q2hxc" Jul 7 06:01:31.594537 kubelet[2717]: E0707 06:01:31.594518 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.594537 kubelet[2717]: W0707 06:01:31.594530 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.594611 kubelet[2717]: E0707 06:01:31.594545 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:31.594772 kubelet[2717]: E0707 06:01:31.594752 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.594772 kubelet[2717]: W0707 06:01:31.594767 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.594836 kubelet[2717]: E0707 06:01:31.594801 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.595032 kubelet[2717]: E0707 06:01:31.595017 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.595032 kubelet[2717]: W0707 06:01:31.595029 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.595116 kubelet[2717]: E0707 06:01:31.595063 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:31.595241 kubelet[2717]: E0707 06:01:31.595226 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.595241 kubelet[2717]: W0707 06:01:31.595238 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.595323 kubelet[2717]: E0707 06:01:31.595301 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.595454 kubelet[2717]: E0707 06:01:31.595440 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.595486 kubelet[2717]: W0707 06:01:31.595453 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.595516 kubelet[2717]: E0707 06:01:31.595493 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:31.595656 kubelet[2717]: E0707 06:01:31.595641 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.595656 kubelet[2717]: W0707 06:01:31.595653 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.595720 kubelet[2717]: E0707 06:01:31.595698 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.595878 kubelet[2717]: E0707 06:01:31.595862 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.595878 kubelet[2717]: W0707 06:01:31.595875 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.595963 kubelet[2717]: E0707 06:01:31.595885 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:31.596079 kubelet[2717]: E0707 06:01:31.596066 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.596079 kubelet[2717]: W0707 06:01:31.596075 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.596119 kubelet[2717]: E0707 06:01:31.596084 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.596243 kubelet[2717]: E0707 06:01:31.596231 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.596243 kubelet[2717]: W0707 06:01:31.596241 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.596288 kubelet[2717]: E0707 06:01:31.596248 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:31.596417 kubelet[2717]: E0707 06:01:31.596398 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.596417 kubelet[2717]: W0707 06:01:31.596407 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.596417 kubelet[2717]: E0707 06:01:31.596417 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.683858 containerd[1605]: time="2025-07-07T06:01:31.683739788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-69spl,Uid:c48f67aa-df08-4b0d-8ecc-c3b7c31a690c,Namespace:calico-system,Attempt:0,}" Jul 7 06:01:31.695838 kubelet[2717]: E0707 06:01:31.695768 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.695838 kubelet[2717]: W0707 06:01:31.695809 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.695838 kubelet[2717]: E0707 06:01:31.695841 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:31.696197 kubelet[2717]: E0707 06:01:31.696178 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.696197 kubelet[2717]: W0707 06:01:31.696192 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.696249 kubelet[2717]: E0707 06:01:31.696213 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.696458 kubelet[2717]: E0707 06:01:31.696409 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.696458 kubelet[2717]: W0707 06:01:31.696422 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.696608 kubelet[2717]: E0707 06:01:31.696435 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:31.696714 kubelet[2717]: E0707 06:01:31.696694 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.696714 kubelet[2717]: W0707 06:01:31.696705 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.696769 kubelet[2717]: E0707 06:01:31.696719 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.697029 kubelet[2717]: E0707 06:01:31.696997 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.697056 kubelet[2717]: W0707 06:01:31.697025 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.697092 kubelet[2717]: E0707 06:01:31.697063 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:31.697436 kubelet[2717]: E0707 06:01:31.697413 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.697436 kubelet[2717]: W0707 06:01:31.697428 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.697577 kubelet[2717]: E0707 06:01:31.697476 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.697646 kubelet[2717]: E0707 06:01:31.697629 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.697646 kubelet[2717]: W0707 06:01:31.697643 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.697701 kubelet[2717]: E0707 06:01:31.697675 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:31.697887 kubelet[2717]: E0707 06:01:31.697858 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.697887 kubelet[2717]: W0707 06:01:31.697878 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.698634 kubelet[2717]: E0707 06:01:31.697909 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.698634 kubelet[2717]: E0707 06:01:31.698127 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.698634 kubelet[2717]: W0707 06:01:31.698139 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.698634 kubelet[2717]: E0707 06:01:31.698213 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:31.698634 kubelet[2717]: E0707 06:01:31.698354 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.698634 kubelet[2717]: W0707 06:01:31.698364 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.698634 kubelet[2717]: E0707 06:01:31.698393 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.698634 kubelet[2717]: E0707 06:01:31.698596 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.698634 kubelet[2717]: W0707 06:01:31.698606 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.698634 kubelet[2717]: E0707 06:01:31.698625 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:31.698963 kubelet[2717]: E0707 06:01:31.698824 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.698963 kubelet[2717]: W0707 06:01:31.698835 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.698963 kubelet[2717]: E0707 06:01:31.698864 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.699056 kubelet[2717]: E0707 06:01:31.699048 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.699085 kubelet[2717]: W0707 06:01:31.699059 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.699118 kubelet[2717]: E0707 06:01:31.699086 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:31.699264 kubelet[2717]: E0707 06:01:31.699244 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.699264 kubelet[2717]: W0707 06:01:31.699259 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.699317 kubelet[2717]: E0707 06:01:31.699285 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.699491 kubelet[2717]: E0707 06:01:31.699470 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.699491 kubelet[2717]: W0707 06:01:31.699487 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.699548 kubelet[2717]: E0707 06:01:31.699515 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:31.699932 kubelet[2717]: E0707 06:01:31.699867 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.699981 kubelet[2717]: W0707 06:01:31.699907 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.700031 kubelet[2717]: E0707 06:01:31.700011 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.700397 kubelet[2717]: E0707 06:01:31.700277 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.700397 kubelet[2717]: W0707 06:01:31.700291 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.700397 kubelet[2717]: E0707 06:01:31.700332 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:31.700475 kubelet[2717]: E0707 06:01:31.700463 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.700475 kubelet[2717]: W0707 06:01:31.700472 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.700521 kubelet[2717]: E0707 06:01:31.700511 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.701242 kubelet[2717]: E0707 06:01:31.700646 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.701242 kubelet[2717]: W0707 06:01:31.700658 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.701242 kubelet[2717]: E0707 06:01:31.700702 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:31.701242 kubelet[2717]: E0707 06:01:31.700860 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.701242 kubelet[2717]: W0707 06:01:31.700869 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.701242 kubelet[2717]: E0707 06:01:31.700903 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.701242 kubelet[2717]: E0707 06:01:31.701114 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.701242 kubelet[2717]: W0707 06:01:31.701127 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.701242 kubelet[2717]: E0707 06:01:31.701152 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:31.701707 kubelet[2717]: E0707 06:01:31.701421 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.701707 kubelet[2717]: W0707 06:01:31.701432 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.701707 kubelet[2717]: E0707 06:01:31.701451 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.701707 kubelet[2717]: E0707 06:01:31.701691 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.701707 kubelet[2717]: W0707 06:01:31.701700 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.702009 kubelet[2717]: E0707 06:01:31.701715 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:31.702009 kubelet[2717]: E0707 06:01:31.701956 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.702009 kubelet[2717]: W0707 06:01:31.701968 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.702009 kubelet[2717]: E0707 06:01:31.701986 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:31.702768 kubelet[2717]: E0707 06:01:31.702320 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.702768 kubelet[2717]: W0707 06:01:31.702334 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.702768 kubelet[2717]: E0707 06:01:31.702347 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:31.713307 kubelet[2717]: E0707 06:01:31.713274 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:31.713307 kubelet[2717]: W0707 06:01:31.713296 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:31.713477 kubelet[2717]: E0707 06:01:31.713319 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:31.720755 containerd[1605]: time="2025-07-07T06:01:31.720692329Z" level=info msg="connecting to shim e870d96a3921fb6ceae934502d0beb0ab6015613037884b6a5f23343fff6fecc" address="unix:///run/containerd/s/47f37d6bf98a4e4b9f8ef13a8942511dc86362f955e7d0b95ac5ba42088e6bbe" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:01:31.753072 systemd[1]: Started cri-containerd-e870d96a3921fb6ceae934502d0beb0ab6015613037884b6a5f23343fff6fecc.scope - libcontainer container e870d96a3921fb6ceae934502d0beb0ab6015613037884b6a5f23343fff6fecc. 
Jul 7 06:01:31.788419 containerd[1605]: time="2025-07-07T06:01:31.788357168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-69spl,Uid:c48f67aa-df08-4b0d-8ecc-c3b7c31a690c,Namespace:calico-system,Attempt:0,} returns sandbox id \"e870d96a3921fb6ceae934502d0beb0ab6015613037884b6a5f23343fff6fecc\"" Jul 7 06:01:33.088144 kubelet[2717]: E0707 06:01:33.088004 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q2hxc" podUID="76bbbc29-f4c2-467c-acfb-9338c934762b" Jul 7 06:01:35.087364 kubelet[2717]: E0707 06:01:35.087264 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q2hxc" podUID="76bbbc29-f4c2-467c-acfb-9338c934762b" Jul 7 06:01:35.473079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2511957486.mount: Deactivated successfully. 
Jul 7 06:01:36.914813 containerd[1605]: time="2025-07-07T06:01:36.914724329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:36.929758 containerd[1605]: time="2025-07-07T06:01:36.929690382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 7 06:01:36.956795 containerd[1605]: time="2025-07-07T06:01:36.956699144Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:37.018717 containerd[1605]: time="2025-07-07T06:01:37.018600345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:37.019581 containerd[1605]: time="2025-07-07T06:01:37.019509529Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 5.488693788s" Jul 7 06:01:37.019581 containerd[1605]: time="2025-07-07T06:01:37.019558341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 7 06:01:37.020964 containerd[1605]: time="2025-07-07T06:01:37.020907003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 06:01:37.032301 containerd[1605]: time="2025-07-07T06:01:37.032242392Z" level=info msg="CreateContainer within sandbox \"622e365a45380f6d440f20a62546dd885fd4cb63542bdfb9c8b6a4e5e94fb571\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 06:01:37.085371 kubelet[2717]: E0707 06:01:37.085296 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q2hxc" podUID="76bbbc29-f4c2-467c-acfb-9338c934762b" Jul 7 06:01:37.547857 containerd[1605]: time="2025-07-07T06:01:37.547754787Z" level=info msg="Container 1257e2526ecd9c6f96bd41b2b167da268b530b6e55c62407dbd0e32574f63b04: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:01:38.256578 containerd[1605]: time="2025-07-07T06:01:38.256522036Z" level=info msg="CreateContainer within sandbox \"622e365a45380f6d440f20a62546dd885fd4cb63542bdfb9c8b6a4e5e94fb571\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1257e2526ecd9c6f96bd41b2b167da268b530b6e55c62407dbd0e32574f63b04\"" Jul 7 06:01:38.257149 containerd[1605]: time="2025-07-07T06:01:38.257057044Z" level=info msg="StartContainer for \"1257e2526ecd9c6f96bd41b2b167da268b530b6e55c62407dbd0e32574f63b04\"" Jul 7 06:01:38.258837 containerd[1605]: time="2025-07-07T06:01:38.258796141Z" level=info msg="connecting to shim 1257e2526ecd9c6f96bd41b2b167da268b530b6e55c62407dbd0e32574f63b04" address="unix:///run/containerd/s/750399779173452e6bb820915815ed01691c5d04d70b5ab701310a47420039bc" protocol=ttrpc version=3 Jul 7 06:01:38.286278 systemd[1]: Started cri-containerd-1257e2526ecd9c6f96bd41b2b167da268b530b6e55c62407dbd0e32574f63b04.scope - libcontainer container 1257e2526ecd9c6f96bd41b2b167da268b530b6e55c62407dbd0e32574f63b04. 
Jul 7 06:01:38.610832 containerd[1605]: time="2025-07-07T06:01:38.610781943Z" level=info msg="StartContainer for \"1257e2526ecd9c6f96bd41b2b167da268b530b6e55c62407dbd0e32574f63b04\" returns successfully" Jul 7 06:01:39.085572 kubelet[2717]: E0707 06:01:39.085482 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q2hxc" podUID="76bbbc29-f4c2-467c-acfb-9338c934762b" Jul 7 06:01:39.183437 kubelet[2717]: E0707 06:01:39.183395 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:39.255880 kubelet[2717]: E0707 06:01:39.255832 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.255880 kubelet[2717]: W0707 06:01:39.255860 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.255880 kubelet[2717]: E0707 06:01:39.255887 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:39.256227 kubelet[2717]: E0707 06:01:39.256203 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.256227 kubelet[2717]: W0707 06:01:39.256213 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.256227 kubelet[2717]: E0707 06:01:39.256223 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:39.256402 kubelet[2717]: E0707 06:01:39.256381 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.256402 kubelet[2717]: W0707 06:01:39.256391 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.256402 kubelet[2717]: E0707 06:01:39.256399 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:39.256576 kubelet[2717]: E0707 06:01:39.256553 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.256576 kubelet[2717]: W0707 06:01:39.256563 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.256576 kubelet[2717]: E0707 06:01:39.256571 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:39.256755 kubelet[2717]: E0707 06:01:39.256738 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.256755 kubelet[2717]: W0707 06:01:39.256747 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.256755 kubelet[2717]: E0707 06:01:39.256755 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:39.256929 kubelet[2717]: E0707 06:01:39.256904 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.256955 kubelet[2717]: W0707 06:01:39.256913 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.256955 kubelet[2717]: E0707 06:01:39.256940 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:39.257105 kubelet[2717]: E0707 06:01:39.257091 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.257105 kubelet[2717]: W0707 06:01:39.257100 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.257156 kubelet[2717]: E0707 06:01:39.257110 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:39.257281 kubelet[2717]: E0707 06:01:39.257266 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.257281 kubelet[2717]: W0707 06:01:39.257276 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.257332 kubelet[2717]: E0707 06:01:39.257287 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:39.257492 kubelet[2717]: E0707 06:01:39.257464 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.257492 kubelet[2717]: W0707 06:01:39.257476 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.257492 kubelet[2717]: E0707 06:01:39.257484 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:39.257678 kubelet[2717]: E0707 06:01:39.257655 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.257678 kubelet[2717]: W0707 06:01:39.257671 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.257736 kubelet[2717]: E0707 06:01:39.257683 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:39.257876 kubelet[2717]: E0707 06:01:39.257858 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.257876 kubelet[2717]: W0707 06:01:39.257868 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.257876 kubelet[2717]: E0707 06:01:39.257876 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:39.258079 kubelet[2717]: E0707 06:01:39.258066 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.258079 kubelet[2717]: W0707 06:01:39.258075 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.258147 kubelet[2717]: E0707 06:01:39.258083 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:39.258308 kubelet[2717]: E0707 06:01:39.258258 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.258308 kubelet[2717]: W0707 06:01:39.258269 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.258308 kubelet[2717]: E0707 06:01:39.258277 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:39.258574 kubelet[2717]: E0707 06:01:39.258433 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.258574 kubelet[2717]: W0707 06:01:39.258442 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.258574 kubelet[2717]: E0707 06:01:39.258450 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:39.258720 kubelet[2717]: E0707 06:01:39.258609 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.258720 kubelet[2717]: W0707 06:01:39.258616 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.258720 kubelet[2717]: E0707 06:01:39.258624 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:39.351691 kubelet[2717]: E0707 06:01:39.351497 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.351691 kubelet[2717]: W0707 06:01:39.351526 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.351691 kubelet[2717]: E0707 06:01:39.351557 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:39.351959 kubelet[2717]: E0707 06:01:39.351883 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.351959 kubelet[2717]: W0707 06:01:39.351907 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.351959 kubelet[2717]: E0707 06:01:39.351949 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:39.352485 kubelet[2717]: E0707 06:01:39.352454 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.352485 kubelet[2717]: W0707 06:01:39.352468 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.352485 kubelet[2717]: E0707 06:01:39.352486 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:39.352759 kubelet[2717]: E0707 06:01:39.352713 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.352759 kubelet[2717]: W0707 06:01:39.352727 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.352759 kubelet[2717]: E0707 06:01:39.352750 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:39.353055 kubelet[2717]: E0707 06:01:39.353003 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.353055 kubelet[2717]: W0707 06:01:39.353017 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.353055 kubelet[2717]: E0707 06:01:39.353063 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:39.353294 kubelet[2717]: E0707 06:01:39.353277 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.353294 kubelet[2717]: W0707 06:01:39.353289 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.353339 kubelet[2717]: E0707 06:01:39.353322 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:39.353529 kubelet[2717]: E0707 06:01:39.353510 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.353529 kubelet[2717]: W0707 06:01:39.353527 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.353659 kubelet[2717]: E0707 06:01:39.353619 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:39.353780 kubelet[2717]: E0707 06:01:39.353762 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.353780 kubelet[2717]: W0707 06:01:39.353776 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.353830 kubelet[2717]: E0707 06:01:39.353795 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:39.354110 kubelet[2717]: E0707 06:01:39.354081 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.354110 kubelet[2717]: W0707 06:01:39.354097 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.354182 kubelet[2717]: E0707 06:01:39.354116 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:39.354456 kubelet[2717]: E0707 06:01:39.354431 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.354492 kubelet[2717]: W0707 06:01:39.354454 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.354492 kubelet[2717]: E0707 06:01:39.354482 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:39.354720 kubelet[2717]: E0707 06:01:39.354689 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.354720 kubelet[2717]: W0707 06:01:39.354705 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.354720 kubelet[2717]: E0707 06:01:39.354723 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:39.355054 kubelet[2717]: E0707 06:01:39.355031 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.355093 kubelet[2717]: W0707 06:01:39.355052 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.355093 kubelet[2717]: E0707 06:01:39.355074 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:39.355314 kubelet[2717]: E0707 06:01:39.355295 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.355314 kubelet[2717]: W0707 06:01:39.355310 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.355373 kubelet[2717]: E0707 06:01:39.355328 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:39.355726 kubelet[2717]: E0707 06:01:39.355695 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.355776 kubelet[2717]: W0707 06:01:39.355725 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.355776 kubelet[2717]: E0707 06:01:39.355762 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:39.356054 kubelet[2717]: E0707 06:01:39.356027 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.356054 kubelet[2717]: W0707 06:01:39.356041 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.356054 kubelet[2717]: E0707 06:01:39.356056 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:39.356384 kubelet[2717]: E0707 06:01:39.356362 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.356384 kubelet[2717]: W0707 06:01:39.356379 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.356445 kubelet[2717]: E0707 06:01:39.356397 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:39.356664 kubelet[2717]: E0707 06:01:39.356638 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.356664 kubelet[2717]: W0707 06:01:39.356653 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.356664 kubelet[2717]: E0707 06:01:39.356663 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:39.357434 kubelet[2717]: E0707 06:01:39.357414 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:39.357434 kubelet[2717]: W0707 06:01:39.357430 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:39.357488 kubelet[2717]: E0707 06:01:39.357443 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:39.361294 kubelet[2717]: I0707 06:01:39.361202 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7588df647b-9mhd8" podStartSLOduration=3.862183422 podStartE2EDuration="9.36117792s" podCreationTimestamp="2025-07-07 06:01:30 +0000 UTC" firstStartedPulling="2025-07-07 06:01:31.521752223 +0000 UTC m=+19.603000223" lastFinishedPulling="2025-07-07 06:01:37.020746721 +0000 UTC m=+25.101994721" observedRunningTime="2025-07-07 06:01:39.360650506 +0000 UTC m=+27.441898506" watchObservedRunningTime="2025-07-07 06:01:39.36117792 +0000 UTC m=+27.442425920" Jul 7 06:01:40.184331 kubelet[2717]: I0707 06:01:40.184293 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:01:40.184814 kubelet[2717]: E0707 06:01:40.184697 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:40.266402 kubelet[2717]: E0707 06:01:40.266351 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.266402 kubelet[2717]: W0707 06:01:40.266370 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.266402 kubelet[2717]: E0707 06:01:40.266389 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:40.266612 kubelet[2717]: E0707 06:01:40.266589 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.266612 kubelet[2717]: W0707 06:01:40.266597 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.266667 kubelet[2717]: E0707 06:01:40.266619 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:40.266808 kubelet[2717]: E0707 06:01:40.266791 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.266808 kubelet[2717]: W0707 06:01:40.266801 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.266808 kubelet[2717]: E0707 06:01:40.266809 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:40.267023 kubelet[2717]: E0707 06:01:40.266994 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.267023 kubelet[2717]: W0707 06:01:40.267005 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.267023 kubelet[2717]: E0707 06:01:40.267014 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:40.267202 kubelet[2717]: E0707 06:01:40.267176 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.267202 kubelet[2717]: W0707 06:01:40.267186 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.267202 kubelet[2717]: E0707 06:01:40.267195 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:40.267387 kubelet[2717]: E0707 06:01:40.267362 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.267387 kubelet[2717]: W0707 06:01:40.267375 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.267387 kubelet[2717]: E0707 06:01:40.267383 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:40.267555 kubelet[2717]: E0707 06:01:40.267539 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.267555 kubelet[2717]: W0707 06:01:40.267548 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.267624 kubelet[2717]: E0707 06:01:40.267558 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:40.267743 kubelet[2717]: E0707 06:01:40.267725 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.267743 kubelet[2717]: W0707 06:01:40.267735 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.267897 kubelet[2717]: E0707 06:01:40.267742 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:40.267940 kubelet[2717]: E0707 06:01:40.267902 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.267940 kubelet[2717]: W0707 06:01:40.267910 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.267940 kubelet[2717]: E0707 06:01:40.267931 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:40.268099 kubelet[2717]: E0707 06:01:40.268082 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.268099 kubelet[2717]: W0707 06:01:40.268091 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.268152 kubelet[2717]: E0707 06:01:40.268099 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:40.268262 kubelet[2717]: E0707 06:01:40.268247 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.268262 kubelet[2717]: W0707 06:01:40.268255 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.268309 kubelet[2717]: E0707 06:01:40.268263 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:40.268471 kubelet[2717]: E0707 06:01:40.268453 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.268471 kubelet[2717]: W0707 06:01:40.268464 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.268529 kubelet[2717]: E0707 06:01:40.268473 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:40.268699 kubelet[2717]: E0707 06:01:40.268672 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.268699 kubelet[2717]: W0707 06:01:40.268685 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.268758 kubelet[2717]: E0707 06:01:40.268698 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:40.268889 kubelet[2717]: E0707 06:01:40.268873 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.268889 kubelet[2717]: W0707 06:01:40.268882 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.268963 kubelet[2717]: E0707 06:01:40.268890 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:40.269129 kubelet[2717]: E0707 06:01:40.269113 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.269129 kubelet[2717]: W0707 06:01:40.269122 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.269174 kubelet[2717]: E0707 06:01:40.269130 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:40.358512 kubelet[2717]: E0707 06:01:40.358457 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.358512 kubelet[2717]: W0707 06:01:40.358484 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.358512 kubelet[2717]: E0707 06:01:40.358507 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:40.358859 kubelet[2717]: E0707 06:01:40.358810 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.358859 kubelet[2717]: W0707 06:01:40.358847 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.358938 kubelet[2717]: E0707 06:01:40.358880 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:40.359161 kubelet[2717]: E0707 06:01:40.359135 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.359161 kubelet[2717]: W0707 06:01:40.359149 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.359231 kubelet[2717]: E0707 06:01:40.359165 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:40.359422 kubelet[2717]: E0707 06:01:40.359392 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.359422 kubelet[2717]: W0707 06:01:40.359405 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.359422 kubelet[2717]: E0707 06:01:40.359418 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:40.359641 kubelet[2717]: E0707 06:01:40.359629 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.359664 kubelet[2717]: W0707 06:01:40.359641 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.359703 kubelet[2717]: E0707 06:01:40.359664 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:40.359960 kubelet[2717]: E0707 06:01:40.359944 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.359960 kubelet[2717]: W0707 06:01:40.359958 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.360014 kubelet[2717]: E0707 06:01:40.359978 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:40.360264 kubelet[2717]: E0707 06:01:40.360237 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.360264 kubelet[2717]: W0707 06:01:40.360260 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.360340 kubelet[2717]: E0707 06:01:40.360275 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:40.360486 kubelet[2717]: E0707 06:01:40.360470 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.360486 kubelet[2717]: W0707 06:01:40.360482 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.360564 kubelet[2717]: E0707 06:01:40.360549 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:40.360667 kubelet[2717]: E0707 06:01:40.360650 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.360667 kubelet[2717]: W0707 06:01:40.360662 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.360720 kubelet[2717]: E0707 06:01:40.360695 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:40.360827 kubelet[2717]: E0707 06:01:40.360813 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.360827 kubelet[2717]: W0707 06:01:40.360826 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.360880 kubelet[2717]: E0707 06:01:40.360841 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:40.361053 kubelet[2717]: E0707 06:01:40.361037 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.361086 kubelet[2717]: W0707 06:01:40.361051 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.361086 kubelet[2717]: E0707 06:01:40.361070 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:40.362321 kubelet[2717]: E0707 06:01:40.361616 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.362321 kubelet[2717]: W0707 06:01:40.361635 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.362321 kubelet[2717]: E0707 06:01:40.361658 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:40.362321 kubelet[2717]: E0707 06:01:40.361874 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.362321 kubelet[2717]: W0707 06:01:40.361882 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.362321 kubelet[2717]: E0707 06:01:40.361909 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:40.362321 kubelet[2717]: E0707 06:01:40.362109 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.362321 kubelet[2717]: W0707 06:01:40.362123 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.362321 kubelet[2717]: E0707 06:01:40.362143 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:40.362321 kubelet[2717]: E0707 06:01:40.362319 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.362552 kubelet[2717]: W0707 06:01:40.362327 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.362552 kubelet[2717]: E0707 06:01:40.362335 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:40.362609 kubelet[2717]: E0707 06:01:40.362555 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.362609 kubelet[2717]: W0707 06:01:40.362564 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.362609 kubelet[2717]: E0707 06:01:40.362585 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:40.363054 kubelet[2717]: E0707 06:01:40.363034 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.363054 kubelet[2717]: W0707 06:01:40.363047 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.363147 kubelet[2717]: E0707 06:01:40.363059 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:01:40.363309 kubelet[2717]: E0707 06:01:40.363295 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:01:40.363309 kubelet[2717]: W0707 06:01:40.363307 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:01:40.363355 kubelet[2717]: E0707 06:01:40.363316 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:01:41.085721 kubelet[2717]: E0707 06:01:41.085654 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q2hxc" podUID="76bbbc29-f4c2-467c-acfb-9338c934762b" Jul 7 06:01:41.977888 containerd[1605]: time="2025-07-07T06:01:41.977801725Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:42.029881 containerd[1605]: time="2025-07-07T06:01:42.029817942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 7 06:01:42.090474 containerd[1605]: time="2025-07-07T06:01:42.090414812Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:42.125438 containerd[1605]: time="2025-07-07T06:01:42.125350236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:01:42.126259 containerd[1605]: time="2025-07-07T06:01:42.126213430Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 5.105151275s" Jul 7 06:01:42.126350 containerd[1605]: time="2025-07-07T06:01:42.126258685Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 7 06:01:42.128594 containerd[1605]: time="2025-07-07T06:01:42.128492901Z" level=info msg="CreateContainer within sandbox \"e870d96a3921fb6ceae934502d0beb0ab6015613037884b6a5f23343fff6fecc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 06:01:42.385335 containerd[1605]: time="2025-07-07T06:01:42.385271108Z" level=info msg="Container 1a34fb456e431487e45cc89feaa15ca30684202235de780ffae264ab8218fb5f: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:01:42.545564 containerd[1605]: time="2025-07-07T06:01:42.545470062Z" level=info msg="CreateContainer within sandbox \"e870d96a3921fb6ceae934502d0beb0ab6015613037884b6a5f23343fff6fecc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1a34fb456e431487e45cc89feaa15ca30684202235de780ffae264ab8218fb5f\"" Jul 7 06:01:42.547943 containerd[1605]: time="2025-07-07T06:01:42.546207860Z" level=info msg="StartContainer for \"1a34fb456e431487e45cc89feaa15ca30684202235de780ffae264ab8218fb5f\"" Jul 7 06:01:42.547943 containerd[1605]: time="2025-07-07T06:01:42.547826006Z" level=info msg="connecting to shim 1a34fb456e431487e45cc89feaa15ca30684202235de780ffae264ab8218fb5f" address="unix:///run/containerd/s/47f37d6bf98a4e4b9f8ef13a8942511dc86362f955e7d0b95ac5ba42088e6bbe" protocol=ttrpc version=3 Jul 7 06:01:42.574192 systemd[1]: Started cri-containerd-1a34fb456e431487e45cc89feaa15ca30684202235de780ffae264ab8218fb5f.scope - libcontainer container 1a34fb456e431487e45cc89feaa15ca30684202235de780ffae264ab8218fb5f. Jul 7 06:01:42.644725 systemd[1]: cri-containerd-1a34fb456e431487e45cc89feaa15ca30684202235de780ffae264ab8218fb5f.scope: Deactivated successfully. 
Jul 7 06:01:42.647434 containerd[1605]: time="2025-07-07T06:01:42.647388756Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1a34fb456e431487e45cc89feaa15ca30684202235de780ffae264ab8218fb5f\" id:\"1a34fb456e431487e45cc89feaa15ca30684202235de780ffae264ab8218fb5f\" pid:3497 exited_at:{seconds:1751868102 nanos:646948708}" Jul 7 06:01:42.682561 containerd[1605]: time="2025-07-07T06:01:42.682469444Z" level=info msg="received exit event container_id:\"1a34fb456e431487e45cc89feaa15ca30684202235de780ffae264ab8218fb5f\" id:\"1a34fb456e431487e45cc89feaa15ca30684202235de780ffae264ab8218fb5f\" pid:3497 exited_at:{seconds:1751868102 nanos:646948708}" Jul 7 06:01:42.686506 containerd[1605]: time="2025-07-07T06:01:42.686447913Z" level=info msg="StartContainer for \"1a34fb456e431487e45cc89feaa15ca30684202235de780ffae264ab8218fb5f\" returns successfully" Jul 7 06:01:42.722837 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a34fb456e431487e45cc89feaa15ca30684202235de780ffae264ab8218fb5f-rootfs.mount: Deactivated successfully. 
Jul 7 06:01:43.085480 kubelet[2717]: E0707 06:01:43.085393 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q2hxc" podUID="76bbbc29-f4c2-467c-acfb-9338c934762b"
Jul 7 06:01:44.197351 containerd[1605]: time="2025-07-07T06:01:44.197297813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Jul 7 06:01:45.085503 kubelet[2717]: E0707 06:01:45.085410 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q2hxc" podUID="76bbbc29-f4c2-467c-acfb-9338c934762b"
Jul 7 06:01:47.085198 kubelet[2717]: E0707 06:01:47.085127 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q2hxc" podUID="76bbbc29-f4c2-467c-acfb-9338c934762b"
Jul 7 06:01:47.572206 containerd[1605]: time="2025-07-07T06:01:47.572131536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:01:47.572942 containerd[1605]: time="2025-07-07T06:01:47.572872870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221"
Jul 7 06:01:47.574087 containerd[1605]: time="2025-07-07T06:01:47.574042239Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:01:47.576362 containerd[1605]: time="2025-07-07T06:01:47.576314291Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:01:47.576890 containerd[1605]: time="2025-07-07T06:01:47.576857924Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.379512612s"
Jul 7 06:01:47.576890 containerd[1605]: time="2025-07-07T06:01:47.576890595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\""
Jul 7 06:01:47.578990 containerd[1605]: time="2025-07-07T06:01:47.578960258Z" level=info msg="CreateContainer within sandbox \"e870d96a3921fb6ceae934502d0beb0ab6015613037884b6a5f23343fff6fecc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jul 7 06:01:47.587163 containerd[1605]: time="2025-07-07T06:01:47.587111525Z" level=info msg="Container 8bd1f2288ba661859ff9603f7ba3746357b3ded8ea0ac51ee4ef5440b83d5f23: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:01:47.599153 containerd[1605]: time="2025-07-07T06:01:47.599092265Z" level=info msg="CreateContainer within sandbox \"e870d96a3921fb6ceae934502d0beb0ab6015613037884b6a5f23343fff6fecc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8bd1f2288ba661859ff9603f7ba3746357b3ded8ea0ac51ee4ef5440b83d5f23\""
Jul 7 06:01:47.599866 containerd[1605]: time="2025-07-07T06:01:47.599750322Z" level=info msg="StartContainer for \"8bd1f2288ba661859ff9603f7ba3746357b3ded8ea0ac51ee4ef5440b83d5f23\""
Jul 7 06:01:47.601348 containerd[1605]: time="2025-07-07T06:01:47.601319062Z" level=info msg="connecting to shim 8bd1f2288ba661859ff9603f7ba3746357b3ded8ea0ac51ee4ef5440b83d5f23" address="unix:///run/containerd/s/47f37d6bf98a4e4b9f8ef13a8942511dc86362f955e7d0b95ac5ba42088e6bbe" protocol=ttrpc version=3
Jul 7 06:01:47.629117 systemd[1]: Started cri-containerd-8bd1f2288ba661859ff9603f7ba3746357b3ded8ea0ac51ee4ef5440b83d5f23.scope - libcontainer container 8bd1f2288ba661859ff9603f7ba3746357b3ded8ea0ac51ee4ef5440b83d5f23.
Jul 7 06:01:47.678689 containerd[1605]: time="2025-07-07T06:01:47.678630755Z" level=info msg="StartContainer for \"8bd1f2288ba661859ff9603f7ba3746357b3ded8ea0ac51ee4ef5440b83d5f23\" returns successfully"
Jul 7 06:01:49.085702 kubelet[2717]: E0707 06:01:49.085608 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q2hxc" podUID="76bbbc29-f4c2-467c-acfb-9338c934762b"
Jul 7 06:01:49.607184 containerd[1605]: time="2025-07-07T06:01:49.607105067Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 7 06:01:49.610190 systemd[1]: cri-containerd-8bd1f2288ba661859ff9603f7ba3746357b3ded8ea0ac51ee4ef5440b83d5f23.scope: Deactivated successfully.
Jul 7 06:01:49.611007 systemd[1]: cri-containerd-8bd1f2288ba661859ff9603f7ba3746357b3ded8ea0ac51ee4ef5440b83d5f23.scope: Consumed 694ms CPU time, 176.8M memory peak, 2.5M read from disk, 171.2M written to disk.
Jul 7 06:01:49.611523 containerd[1605]: time="2025-07-07T06:01:49.611476124Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8bd1f2288ba661859ff9603f7ba3746357b3ded8ea0ac51ee4ef5440b83d5f23\" id:\"8bd1f2288ba661859ff9603f7ba3746357b3ded8ea0ac51ee4ef5440b83d5f23\" pid:3554 exited_at:{seconds:1751868109 nanos:611138519}"
Jul 7 06:01:49.611715 containerd[1605]: time="2025-07-07T06:01:49.611551205Z" level=info msg="received exit event container_id:\"8bd1f2288ba661859ff9603f7ba3746357b3ded8ea0ac51ee4ef5440b83d5f23\" id:\"8bd1f2288ba661859ff9603f7ba3746357b3ded8ea0ac51ee4ef5440b83d5f23\" pid:3554 exited_at:{seconds:1751868109 nanos:611138519}"
Jul 7 06:01:49.638084 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8bd1f2288ba661859ff9603f7ba3746357b3ded8ea0ac51ee4ef5440b83d5f23-rootfs.mount: Deactivated successfully.
Jul 7 06:01:49.645477 kubelet[2717]: I0707 06:01:49.645434 2717 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 7 06:01:49.822133 kubelet[2717]: I0707 06:01:49.822046 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btjtq\" (UniqueName: \"kubernetes.io/projected/8a00075b-b572-4b57-8401-3b70beb6e654-kube-api-access-btjtq\") pod \"calico-apiserver-f8b465d78-fb445\" (UID: \"8a00075b-b572-4b57-8401-3b70beb6e654\") " pod="calico-apiserver/calico-apiserver-f8b465d78-fb445"
Jul 7 06:01:49.822133 kubelet[2717]: I0707 06:01:49.822117 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6464p\" (UniqueName: \"kubernetes.io/projected/b1138488-8ab7-4741-b5a7-2cf298f8eec2-kube-api-access-6464p\") pod \"coredns-7c65d6cfc9-kgrgn\" (UID: \"b1138488-8ab7-4741-b5a7-2cf298f8eec2\") " pod="kube-system/coredns-7c65d6cfc9-kgrgn"
Jul 7 06:01:49.822133 kubelet[2717]: I0707 06:01:49.822144 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/efb8e106-18be-4ff9-b824-5919247e4c5f-config-volume\") pod \"coredns-7c65d6cfc9-c9k89\" (UID: \"efb8e106-18be-4ff9-b824-5919247e4c5f\") " pod="kube-system/coredns-7c65d6cfc9-c9k89"
Jul 7 06:01:49.822487 kubelet[2717]: I0707 06:01:49.822166 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmqm5\" (UniqueName: \"kubernetes.io/projected/efb8e106-18be-4ff9-b824-5919247e4c5f-kube-api-access-rmqm5\") pod \"coredns-7c65d6cfc9-c9k89\" (UID: \"efb8e106-18be-4ff9-b824-5919247e4c5f\") " pod="kube-system/coredns-7c65d6cfc9-c9k89"
Jul 7 06:01:49.822487 kubelet[2717]: I0707 06:01:49.822196 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4a80658-2331-4f04-bd09-2ecfccb869ed-whisker-ca-bundle\") pod \"whisker-7d6794f54-ft7n9\" (UID: \"e4a80658-2331-4f04-bd09-2ecfccb869ed\") " pod="calico-system/whisker-7d6794f54-ft7n9"
Jul 7 06:01:49.822487 kubelet[2717]: I0707 06:01:49.822218 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5d4134cc-058b-4bbe-98ee-e7795fa91c73-calico-apiserver-certs\") pod \"calico-apiserver-f8b465d78-km6sq\" (UID: \"5d4134cc-058b-4bbe-98ee-e7795fa91c73\") " pod="calico-apiserver/calico-apiserver-f8b465d78-km6sq"
Jul 7 06:01:49.822487 kubelet[2717]: I0707 06:01:49.822241 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8a00075b-b572-4b57-8401-3b70beb6e654-calico-apiserver-certs\") pod \"calico-apiserver-f8b465d78-fb445\" (UID: \"8a00075b-b572-4b57-8401-3b70beb6e654\") " pod="calico-apiserver/calico-apiserver-f8b465d78-fb445"
Jul 7 06:01:49.822487 kubelet[2717]: I0707 06:01:49.822322 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nd8x\" (UniqueName: \"kubernetes.io/projected/5d4134cc-058b-4bbe-98ee-e7795fa91c73-kube-api-access-6nd8x\") pod \"calico-apiserver-f8b465d78-km6sq\" (UID: \"5d4134cc-058b-4bbe-98ee-e7795fa91c73\") " pod="calico-apiserver/calico-apiserver-f8b465d78-km6sq"
Jul 7 06:01:49.822618 kubelet[2717]: I0707 06:01:49.822391 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2274c4d9-26b1-4d81-aa7e-f0fcf833e24f-goldmane-key-pair\") pod \"goldmane-58fd7646b9-fh2n2\" (UID: \"2274c4d9-26b1-4d81-aa7e-f0fcf833e24f\") " pod="calico-system/goldmane-58fd7646b9-fh2n2"
Jul 7 06:01:49.822618 kubelet[2717]: I0707 06:01:49.822431 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9vk6\" (UniqueName: \"kubernetes.io/projected/2274c4d9-26b1-4d81-aa7e-f0fcf833e24f-kube-api-access-v9vk6\") pod \"goldmane-58fd7646b9-fh2n2\" (UID: \"2274c4d9-26b1-4d81-aa7e-f0fcf833e24f\") " pod="calico-system/goldmane-58fd7646b9-fh2n2"
Jul 7 06:01:49.822618 kubelet[2717]: I0707 06:01:49.822449 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d455e57-85ea-4ea4-a603-79ef8a7c208e-tigera-ca-bundle\") pod \"calico-kube-controllers-8485fd89d8-sc8rh\" (UID: \"2d455e57-85ea-4ea4-a603-79ef8a7c208e\") " pod="calico-system/calico-kube-controllers-8485fd89d8-sc8rh"
Jul 7 06:01:49.822618 kubelet[2717]: I0707 06:01:49.822491 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1138488-8ab7-4741-b5a7-2cf298f8eec2-config-volume\") pod \"coredns-7c65d6cfc9-kgrgn\" (UID: \"b1138488-8ab7-4741-b5a7-2cf298f8eec2\") " pod="kube-system/coredns-7c65d6cfc9-kgrgn"
Jul 7 06:01:49.822618 kubelet[2717]: I0707 06:01:49.822516 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e4a80658-2331-4f04-bd09-2ecfccb869ed-whisker-backend-key-pair\") pod \"whisker-7d6794f54-ft7n9\" (UID: \"e4a80658-2331-4f04-bd09-2ecfccb869ed\") " pod="calico-system/whisker-7d6794f54-ft7n9"
Jul 7 06:01:49.822736 kubelet[2717]: I0707 06:01:49.822541 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76gzz\" (UniqueName: \"kubernetes.io/projected/2d455e57-85ea-4ea4-a603-79ef8a7c208e-kube-api-access-76gzz\") pod \"calico-kube-controllers-8485fd89d8-sc8rh\" (UID: \"2d455e57-85ea-4ea4-a603-79ef8a7c208e\") " pod="calico-system/calico-kube-controllers-8485fd89d8-sc8rh"
Jul 7 06:01:49.822736 kubelet[2717]: I0707 06:01:49.822566 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmwfc\" (UniqueName: \"kubernetes.io/projected/e4a80658-2331-4f04-bd09-2ecfccb869ed-kube-api-access-vmwfc\") pod \"whisker-7d6794f54-ft7n9\" (UID: \"e4a80658-2331-4f04-bd09-2ecfccb869ed\") " pod="calico-system/whisker-7d6794f54-ft7n9"
Jul 7 06:01:49.822736 kubelet[2717]: I0707 06:01:49.822589 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2274c4d9-26b1-4d81-aa7e-f0fcf833e24f-config\") pod \"goldmane-58fd7646b9-fh2n2\" (UID: \"2274c4d9-26b1-4d81-aa7e-f0fcf833e24f\") " pod="calico-system/goldmane-58fd7646b9-fh2n2"
Jul 7 06:01:49.822736 kubelet[2717]: I0707 06:01:49.822610 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2274c4d9-26b1-4d81-aa7e-f0fcf833e24f-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-fh2n2\" (UID: \"2274c4d9-26b1-4d81-aa7e-f0fcf833e24f\") " pod="calico-system/goldmane-58fd7646b9-fh2n2"
Jul 7 06:01:49.835865 systemd[1]: Created slice kubepods-besteffort-pod8a00075b_b572_4b57_8401_3b70beb6e654.slice - libcontainer container kubepods-besteffort-pod8a00075b_b572_4b57_8401_3b70beb6e654.slice.
Jul 7 06:01:49.844537 systemd[1]: Created slice kubepods-burstable-podb1138488_8ab7_4741_b5a7_2cf298f8eec2.slice - libcontainer container kubepods-burstable-podb1138488_8ab7_4741_b5a7_2cf298f8eec2.slice.
Jul 7 06:01:49.852165 systemd[1]: Created slice kubepods-besteffort-pode4a80658_2331_4f04_bd09_2ecfccb869ed.slice - libcontainer container kubepods-besteffort-pode4a80658_2331_4f04_bd09_2ecfccb869ed.slice.
Jul 7 06:01:49.859812 systemd[1]: Created slice kubepods-besteffort-pod2274c4d9_26b1_4d81_aa7e_f0fcf833e24f.slice - libcontainer container kubepods-besteffort-pod2274c4d9_26b1_4d81_aa7e_f0fcf833e24f.slice.
Jul 7 06:01:49.866880 systemd[1]: Created slice kubepods-besteffort-pod5d4134cc_058b_4bbe_98ee_e7795fa91c73.slice - libcontainer container kubepods-besteffort-pod5d4134cc_058b_4bbe_98ee_e7795fa91c73.slice.
Jul 7 06:01:49.872902 systemd[1]: Created slice kubepods-besteffort-pod2d455e57_85ea_4ea4_a603_79ef8a7c208e.slice - libcontainer container kubepods-besteffort-pod2d455e57_85ea_4ea4_a603_79ef8a7c208e.slice.
Jul 7 06:01:49.878287 systemd[1]: Created slice kubepods-burstable-podefb8e106_18be_4ff9_b824_5919247e4c5f.slice - libcontainer container kubepods-burstable-podefb8e106_18be_4ff9_b824_5919247e4c5f.slice.
Jul 7 06:01:50.142808 containerd[1605]: time="2025-07-07T06:01:50.142650458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8b465d78-fb445,Uid:8a00075b-b572-4b57-8401-3b70beb6e654,Namespace:calico-apiserver,Attempt:0,}"
Jul 7 06:01:50.149213 kubelet[2717]: E0707 06:01:50.149156 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:01:50.150131 containerd[1605]: time="2025-07-07T06:01:50.150085012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kgrgn,Uid:b1138488-8ab7-4741-b5a7-2cf298f8eec2,Namespace:kube-system,Attempt:0,}"
Jul 7 06:01:50.158242 containerd[1605]: time="2025-07-07T06:01:50.158184054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d6794f54-ft7n9,Uid:e4a80658-2331-4f04-bd09-2ecfccb869ed,Namespace:calico-system,Attempt:0,}"
Jul 7 06:01:50.164404 containerd[1605]: time="2025-07-07T06:01:50.164335486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-fh2n2,Uid:2274c4d9-26b1-4d81-aa7e-f0fcf833e24f,Namespace:calico-system,Attempt:0,}"
Jul 7 06:01:50.171348 containerd[1605]: time="2025-07-07T06:01:50.171269770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8b465d78-km6sq,Uid:5d4134cc-058b-4bbe-98ee-e7795fa91c73,Namespace:calico-apiserver,Attempt:0,}"
Jul 7 06:01:50.176721 containerd[1605]: time="2025-07-07T06:01:50.176672115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8485fd89d8-sc8rh,Uid:2d455e57-85ea-4ea4-a603-79ef8a7c208e,Namespace:calico-system,Attempt:0,}"
Jul 7 06:01:50.181778 kubelet[2717]: E0707 06:01:50.181434 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:01:50.183462 containerd[1605]: time="2025-07-07T06:01:50.182626536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-c9k89,Uid:efb8e106-18be-4ff9-b824-5919247e4c5f,Namespace:kube-system,Attempt:0,}"
Jul 7 06:01:50.227759 containerd[1605]: time="2025-07-07T06:01:50.227708611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Jul 7 06:01:50.291201 containerd[1605]: time="2025-07-07T06:01:50.291145957Z" level=error msg="Failed to destroy network for sandbox \"fb66f3738733e88039d2ac3a0faf2ffb10c6a80d0fb5e49116c8d03edfa8f81e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:01:50.300360 containerd[1605]: time="2025-07-07T06:01:50.300271789Z" level=error msg="Failed to destroy network for sandbox \"6930365e38c55852b0863f76e2d4fb8bfe977af5db838ef69f96a28ab4d20a1a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:01:50.313411 containerd[1605]: time="2025-07-07T06:01:50.313247709Z" level=error msg="Failed to destroy network for sandbox \"3554f8aa3bfa4d09a10ddb61eea833b3fa1c95eec708c4c201e265799fd174cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:01:50.318259 containerd[1605]: time="2025-07-07T06:01:50.318217991Z" level=error msg="Failed to destroy network for sandbox \"1b59b46b4c36483abc37344684a304e18abb5e3d9305a6d44a75792377bd1e40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:01:50.347126 containerd[1605]: time="2025-07-07T06:01:50.346307337Z" level=error msg="Failed to destroy network for sandbox \"842b99090a3387ecb9611d535503ec6f1126be625a0f9f8dab3bbe0a20080585\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:01:50.364974 containerd[1605]: time="2025-07-07T06:01:50.364884064Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8b465d78-fb445,Uid:8a00075b-b572-4b57-8401-3b70beb6e654,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b59b46b4c36483abc37344684a304e18abb5e3d9305a6d44a75792377bd1e40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:01:50.365224 containerd[1605]: time="2025-07-07T06:01:50.364993749Z" level=error msg="Failed to destroy network for sandbox \"766e076bfd6940186ff6677abb8de2b04f7b51167824bea17008cfc2bce9d251\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:01:50.365377 containerd[1605]: time="2025-07-07T06:01:50.365310064Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d6794f54-ft7n9,Uid:e4a80658-2331-4f04-bd09-2ecfccb869ed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6930365e38c55852b0863f76e2d4fb8bfe977af5db838ef69f96a28ab4d20a1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:01:50.365377 containerd[1605]: time="2025-07-07T06:01:50.365341322Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kgrgn,Uid:b1138488-8ab7-4741-b5a7-2cf298f8eec2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb66f3738733e88039d2ac3a0faf2ffb10c6a80d0fb5e49116c8d03edfa8f81e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:01:50.365499 containerd[1605]: time="2025-07-07T06:01:50.365316676Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8b465d78-km6sq,Uid:5d4134cc-058b-4bbe-98ee-e7795fa91c73,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"842b99090a3387ecb9611d535503ec6f1126be625a0f9f8dab3bbe0a20080585\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:01:50.365499 containerd[1605]: time="2025-07-07T06:01:50.364944517Z" level=error msg="Failed to destroy network for sandbox \"ea6f3bc68567c07d02b919596b49f067ef8ee576031ad986c47236dc96227b53\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:01:50.366420 containerd[1605]: time="2025-07-07T06:01:50.366340862Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-fh2n2,Uid:2274c4d9-26b1-4d81-aa7e-f0fcf833e24f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3554f8aa3bfa4d09a10ddb61eea833b3fa1c95eec708c4c201e265799fd174cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:01:50.367672 containerd[1605]: time="2025-07-07T06:01:50.367617140Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8485fd89d8-sc8rh,Uid:2d455e57-85ea-4ea4-a603-79ef8a7c208e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"766e076bfd6940186ff6677abb8de2b04f7b51167824bea17008cfc2bce9d251\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:01:50.368935 containerd[1605]: time="2025-07-07T06:01:50.368836221Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-c9k89,Uid:efb8e106-18be-4ff9-b824-5919247e4c5f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea6f3bc68567c07d02b919596b49f067ef8ee576031ad986c47236dc96227b53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:01:50.377091 kubelet[2717]: E0707 06:01:50.376952 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3554f8aa3bfa4d09a10ddb61eea833b3fa1c95eec708c4c201e265799fd174cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:01:50.377091 kubelet[2717]: E0707 06:01:50.376963 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea6f3bc68567c07d02b919596b49f067ef8ee576031ad986c47236dc96227b53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:01:50.377091 kubelet[2717]: E0707 06:01:50.377037 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b59b46b4c36483abc37344684a304e18abb5e3d9305a6d44a75792377bd1e40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:01:50.377091 kubelet[2717]: E0707 06:01:50.377079 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3554f8aa3bfa4d09a10ddb61eea833b3fa1c95eec708c4c201e265799fd174cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-fh2n2"
Jul 7 06:01:50.377380 kubelet[2717]: E0707 06:01:50.377107 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3554f8aa3bfa4d09a10ddb61eea833b3fa1c95eec708c4c201e265799fd174cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-fh2n2"
Jul 7 06:01:50.377380 kubelet[2717]: E0707 06:01:50.377109 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea6f3bc68567c07d02b919596b49f067ef8ee576031ad986c47236dc96227b53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-c9k89"
Jul 7 06:01:50.377380 kubelet[2717]: E0707 06:01:50.377128 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b59b46b4c36483abc37344684a304e18abb5e3d9305a6d44a75792377bd1e40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f8b465d78-fb445"
Jul 7 06:01:50.377380 kubelet[2717]: E0707 06:01:50.377141 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea6f3bc68567c07d02b919596b49f067ef8ee576031ad986c47236dc96227b53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-c9k89"
Jul 7 06:01:50.377516 kubelet[2717]: E0707 06:01:50.377171 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b59b46b4c36483abc37344684a304e18abb5e3d9305a6d44a75792377bd1e40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f8b465d78-fb445"
Jul 7 06:01:50.377516 kubelet[2717]: E0707 06:01:50.377210 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-c9k89_kube-system(efb8e106-18be-4ff9-b824-5919247e4c5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-c9k89_kube-system(efb8e106-18be-4ff9-b824-5919247e4c5f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea6f3bc68567c07d02b919596b49f067ef8ee576031ad986c47236dc96227b53\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-c9k89" podUID="efb8e106-18be-4ff9-b824-5919247e4c5f"
Jul 7 06:01:50.377516 kubelet[2717]: E0707 06:01:50.376956 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6930365e38c55852b0863f76e2d4fb8bfe977af5db838ef69f96a28ab4d20a1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:01:50.377663 kubelet[2717]: E0707 06:01:50.377224 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f8b465d78-fb445_calico-apiserver(8a00075b-b572-4b57-8401-3b70beb6e654)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f8b465d78-fb445_calico-apiserver(8a00075b-b572-4b57-8401-3b70beb6e654)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b59b46b4c36483abc37344684a304e18abb5e3d9305a6d44a75792377bd1e40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f8b465d78-fb445" podUID="8a00075b-b572-4b57-8401-3b70beb6e654"
Jul 7 06:01:50.377663 kubelet[2717]: E0707 06:01:50.377030 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb66f3738733e88039d2ac3a0faf2ffb10c6a80d0fb5e49116c8d03edfa8f81e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:01:50.377663 kubelet[2717]: E0707 06:01:50.377278 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6930365e38c55852b0863f76e2d4fb8bfe977af5db838ef69f96a28ab4d20a1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d6794f54-ft7n9"
Jul 7 06:01:50.377774 kubelet[2717]: E0707 06:01:50.377300 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6930365e38c55852b0863f76e2d4fb8bfe977af5db838ef69f96a28ab4d20a1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d6794f54-ft7n9"
Jul 7 06:01:50.377774 kubelet[2717]: E0707 06:01:50.377304 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb66f3738733e88039d2ac3a0faf2ffb10c6a80d0fb5e49116c8d03edfa8f81e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kgrgn"
Jul 7 06:01:50.377774 kubelet[2717]: E0707 06:01:50.377331 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7d6794f54-ft7n9_calico-system(e4a80658-2331-4f04-bd09-2ecfccb869ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7d6794f54-ft7n9_calico-system(e4a80658-2331-4f04-bd09-2ecfccb869ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6930365e38c55852b0863f76e2d4fb8bfe977af5db838ef69f96a28ab4d20a1a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7d6794f54-ft7n9" podUID="e4a80658-2331-4f04-bd09-2ecfccb869ed"
Jul 7 06:01:50.377861 kubelet[2717]: E0707 06:01:50.377340 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb66f3738733e88039d2ac3a0faf2ffb10c6a80d0fb5e49116c8d03edfa8f81e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kgrgn"
Jul 7 06:01:50.377861 kubelet[2717]: E0707 06:01:50.377046 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"766e076bfd6940186ff6677abb8de2b04f7b51167824bea17008cfc2bce9d251\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:01:50.377861 kubelet[2717]: E0707 06:01:50.377367 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-kgrgn_kube-system(b1138488-8ab7-4741-b5a7-2cf298f8eec2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-kgrgn_kube-system(b1138488-8ab7-4741-b5a7-2cf298f8eec2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb66f3738733e88039d2ac3a0faf2ffb10c6a80d0fb5e49116c8d03edfa8f81e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-kgrgn" podUID="b1138488-8ab7-4741-b5a7-2cf298f8eec2"
Jul 7 06:01:50.377971 kubelet[2717]: E0707 06:01:50.377374 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"766e076bfd6940186ff6677abb8de2b04f7b51167824bea17008cfc2bce9d251\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8485fd89d8-sc8rh"
Jul 7 06:01:50.377971 kubelet[2717]: E0707 06:01:50.377161 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-fh2n2_calico-system(2274c4d9-26b1-4d81-aa7e-f0fcf833e24f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-fh2n2_calico-system(2274c4d9-26b1-4d81-aa7e-f0fcf833e24f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3554f8aa3bfa4d09a10ddb61eea833b3fa1c95eec708c4c201e265799fd174cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-fh2n2" podUID="2274c4d9-26b1-4d81-aa7e-f0fcf833e24f"
Jul 7 06:01:50.377971 kubelet[2717]: E0707 06:01:50.377409 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"766e076bfd6940186ff6677abb8de2b04f7b51167824bea17008cfc2bce9d251\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8485fd89d8-sc8rh"
Jul 7 06:01:50.378074 kubelet[2717]: E0707 06:01:50.376956 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"842b99090a3387ecb9611d535503ec6f1126be625a0f9f8dab3bbe0a20080585\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:01:50.378074 kubelet[2717]: E0707 06:01:50.377447 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8485fd89d8-sc8rh_calico-system(2d455e57-85ea-4ea4-a603-79ef8a7c208e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8485fd89d8-sc8rh_calico-system(2d455e57-85ea-4ea4-a603-79ef8a7c208e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"766e076bfd6940186ff6677abb8de2b04f7b51167824bea17008cfc2bce9d251\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8485fd89d8-sc8rh" podUID="2d455e57-85ea-4ea4-a603-79ef8a7c208e"
Jul 7 06:01:50.378074 kubelet[2717]: E0707 06:01:50.377455 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"842b99090a3387ecb9611d535503ec6f1126be625a0f9f8dab3bbe0a20080585\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f8b465d78-km6sq"
Jul 7 06:01:50.378154 kubelet[2717]: E0707 06:01:50.377490 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"842b99090a3387ecb9611d535503ec6f1126be625a0f9f8dab3bbe0a20080585\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f8b465d78-km6sq"
Jul 7 06:01:50.378154 kubelet[2717]: E0707 06:01:50.377533 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f8b465d78-km6sq_calico-apiserver(5d4134cc-058b-4bbe-98ee-e7795fa91c73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f8b465d78-km6sq_calico-apiserver(5d4134cc-058b-4bbe-98ee-e7795fa91c73)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"842b99090a3387ecb9611d535503ec6f1126be625a0f9f8dab3bbe0a20080585\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f8b465d78-km6sq" podUID="5d4134cc-058b-4bbe-98ee-e7795fa91c73"
Jul 7 06:01:51.093171 systemd[1]: Created slice kubepods-besteffort-pod76bbbc29_f4c2_467c_acfb_9338c934762b.slice - libcontainer container kubepods-besteffort-pod76bbbc29_f4c2_467c_acfb_9338c934762b.slice.
Jul 7 06:01:51.096285 containerd[1605]: time="2025-07-07T06:01:51.096241493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q2hxc,Uid:76bbbc29-f4c2-467c-acfb-9338c934762b,Namespace:calico-system,Attempt:0,}" Jul 7 06:01:51.158272 containerd[1605]: time="2025-07-07T06:01:51.158201025Z" level=error msg="Failed to destroy network for sandbox \"d54b852b5b472b1a583f78d8cbd469c76053aec8509f4dcb5f6bb0645752478f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:01:51.160065 containerd[1605]: time="2025-07-07T06:01:51.160011587Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q2hxc,Uid:76bbbc29-f4c2-467c-acfb-9338c934762b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d54b852b5b472b1a583f78d8cbd469c76053aec8509f4dcb5f6bb0645752478f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:01:51.160403 kubelet[2717]: E0707 06:01:51.160343 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d54b852b5b472b1a583f78d8cbd469c76053aec8509f4dcb5f6bb0645752478f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:01:51.160733 kubelet[2717]: E0707 06:01:51.160439 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d54b852b5b472b1a583f78d8cbd469c76053aec8509f4dcb5f6bb0645752478f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q2hxc" Jul 7 06:01:51.160733 kubelet[2717]: E0707 06:01:51.160470 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d54b852b5b472b1a583f78d8cbd469c76053aec8509f4dcb5f6bb0645752478f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q2hxc" Jul 7 06:01:51.160733 kubelet[2717]: E0707 06:01:51.160522 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-q2hxc_calico-system(76bbbc29-f4c2-467c-acfb-9338c934762b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-q2hxc_calico-system(76bbbc29-f4c2-467c-acfb-9338c934762b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d54b852b5b472b1a583f78d8cbd469c76053aec8509f4dcb5f6bb0645752478f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q2hxc" podUID="76bbbc29-f4c2-467c-acfb-9338c934762b" Jul 7 06:01:51.160681 systemd[1]: run-netns-cni\x2dec1439f9\x2d7964\x2dfe3d\x2d5cc6\x2ded86f0b67c9f.mount: Deactivated successfully. 
Jul 7 06:01:55.705027 kubelet[2717]: I0707 06:01:55.704954 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:01:55.709379 kubelet[2717]: E0707 06:01:55.709344 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:56.235566 kubelet[2717]: E0707 06:01:56.235529 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:01:59.200321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3242907052.mount: Deactivated successfully. Jul 7 06:02:00.830341 containerd[1605]: time="2025-07-07T06:02:00.830248026Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:02:00.830373 systemd[1]: Started sshd@7-10.0.0.17:22-10.0.0.1:34520.service - OpenSSH per-connection server daemon (10.0.0.1:34520). 
Jul 7 06:02:00.834170 containerd[1605]: time="2025-07-07T06:02:00.832519581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 7 06:02:00.841291 containerd[1605]: time="2025-07-07T06:02:00.841210209Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:02:00.852040 containerd[1605]: time="2025-07-07T06:02:00.852001039Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:02:00.854315 containerd[1605]: time="2025-07-07T06:02:00.854222781Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 10.626451662s" Jul 7 06:02:00.854414 containerd[1605]: time="2025-07-07T06:02:00.854318360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 7 06:02:00.896369 containerd[1605]: time="2025-07-07T06:02:00.896319260Z" level=info msg="CreateContainer within sandbox \"e870d96a3921fb6ceae934502d0beb0ab6015613037884b6a5f23343fff6fecc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 06:02:00.914689 containerd[1605]: time="2025-07-07T06:02:00.914633306Z" level=info msg="Container 1317027ccacdd278382f22095b0fd136395ad8272a7d5b826ed3d1b7c13cb4c9: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:02:00.936674 containerd[1605]: time="2025-07-07T06:02:00.936599719Z" level=info msg="CreateContainer within 
sandbox \"e870d96a3921fb6ceae934502d0beb0ab6015613037884b6a5f23343fff6fecc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1317027ccacdd278382f22095b0fd136395ad8272a7d5b826ed3d1b7c13cb4c9\"" Jul 7 06:02:00.937946 containerd[1605]: time="2025-07-07T06:02:00.937486985Z" level=info msg="StartContainer for \"1317027ccacdd278382f22095b0fd136395ad8272a7d5b826ed3d1b7c13cb4c9\"" Jul 7 06:02:00.939220 containerd[1605]: time="2025-07-07T06:02:00.939187748Z" level=info msg="connecting to shim 1317027ccacdd278382f22095b0fd136395ad8272a7d5b826ed3d1b7c13cb4c9" address="unix:///run/containerd/s/47f37d6bf98a4e4b9f8ef13a8942511dc86362f955e7d0b95ac5ba42088e6bbe" protocol=ttrpc version=3 Jul 7 06:02:00.940296 sshd[3877]: Accepted publickey for core from 10.0.0.1 port 34520 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0 Jul 7 06:02:00.942772 sshd-session[3877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:02:00.964594 systemd-logind[1584]: New session 8 of user core. Jul 7 06:02:00.969202 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 06:02:00.981102 systemd[1]: Started cri-containerd-1317027ccacdd278382f22095b0fd136395ad8272a7d5b826ed3d1b7c13cb4c9.scope - libcontainer container 1317027ccacdd278382f22095b0fd136395ad8272a7d5b826ed3d1b7c13cb4c9. 
Jul 7 06:02:01.086543 containerd[1605]: time="2025-07-07T06:02:01.086361045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8b465d78-km6sq,Uid:5d4134cc-058b-4bbe-98ee-e7795fa91c73,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:02:01.086543 containerd[1605]: time="2025-07-07T06:02:01.086401020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8485fd89d8-sc8rh,Uid:2d455e57-85ea-4ea4-a603-79ef8a7c208e,Namespace:calico-system,Attempt:0,}" Jul 7 06:02:01.135340 containerd[1605]: time="2025-07-07T06:02:01.135243698Z" level=info msg="StartContainer for \"1317027ccacdd278382f22095b0fd136395ad8272a7d5b826ed3d1b7c13cb4c9\" returns successfully" Jul 7 06:02:01.206387 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 06:02:01.206544 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 7 06:02:01.279489 containerd[1605]: time="2025-07-07T06:02:01.279401094Z" level=error msg="Failed to destroy network for sandbox \"9ab8525d47fa062974f2552f1b137b62d8ee7bbf1a59234b88bf0317ce4fecb5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:02:01.281725 containerd[1605]: time="2025-07-07T06:02:01.281670746Z" level=error msg="Failed to destroy network for sandbox \"7045ef3bf15c9b97988cfc79c39b6f13b1e50ea2bb2aded958599467d66d1a32\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:02:01.296611 containerd[1605]: time="2025-07-07T06:02:01.296531686Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8b465d78-km6sq,Uid:5d4134cc-058b-4bbe-98ee-e7795fa91c73,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"9ab8525d47fa062974f2552f1b137b62d8ee7bbf1a59234b88bf0317ce4fecb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:02:01.297819 kubelet[2717]: E0707 06:02:01.297755 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ab8525d47fa062974f2552f1b137b62d8ee7bbf1a59234b88bf0317ce4fecb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:02:01.298455 kubelet[2717]: E0707 06:02:01.298071 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ab8525d47fa062974f2552f1b137b62d8ee7bbf1a59234b88bf0317ce4fecb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f8b465d78-km6sq" Jul 7 06:02:01.298455 kubelet[2717]: E0707 06:02:01.298101 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ab8525d47fa062974f2552f1b137b62d8ee7bbf1a59234b88bf0317ce4fecb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f8b465d78-km6sq" Jul 7 06:02:01.298546 kubelet[2717]: E0707 06:02:01.298207 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f8b465d78-km6sq_calico-apiserver(5d4134cc-058b-4bbe-98ee-e7795fa91c73)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f8b465d78-km6sq_calico-apiserver(5d4134cc-058b-4bbe-98ee-e7795fa91c73)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ab8525d47fa062974f2552f1b137b62d8ee7bbf1a59234b88bf0317ce4fecb5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f8b465d78-km6sq" podUID="5d4134cc-058b-4bbe-98ee-e7795fa91c73" Jul 7 06:02:01.354272 containerd[1605]: time="2025-07-07T06:02:01.354072735Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8485fd89d8-sc8rh,Uid:2d455e57-85ea-4ea4-a603-79ef8a7c208e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7045ef3bf15c9b97988cfc79c39b6f13b1e50ea2bb2aded958599467d66d1a32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:02:01.354476 kubelet[2717]: E0707 06:02:01.354431 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7045ef3bf15c9b97988cfc79c39b6f13b1e50ea2bb2aded958599467d66d1a32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:02:01.354551 kubelet[2717]: E0707 06:02:01.354522 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7045ef3bf15c9b97988cfc79c39b6f13b1e50ea2bb2aded958599467d66d1a32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8485fd89d8-sc8rh" Jul 7 06:02:01.354591 kubelet[2717]: E0707 06:02:01.354550 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7045ef3bf15c9b97988cfc79c39b6f13b1e50ea2bb2aded958599467d66d1a32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8485fd89d8-sc8rh" Jul 7 06:02:01.354999 kubelet[2717]: E0707 06:02:01.354646 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8485fd89d8-sc8rh_calico-system(2d455e57-85ea-4ea4-a603-79ef8a7c208e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8485fd89d8-sc8rh_calico-system(2d455e57-85ea-4ea4-a603-79ef8a7c208e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7045ef3bf15c9b97988cfc79c39b6f13b1e50ea2bb2aded958599467d66d1a32\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8485fd89d8-sc8rh" podUID="2d455e57-85ea-4ea4-a603-79ef8a7c208e" Jul 7 06:02:01.420794 sshd[3892]: Connection closed by 10.0.0.1 port 34520 Jul 7 06:02:01.421321 sshd-session[3877]: pam_unix(sshd:session): session closed for user core Jul 7 06:02:01.427334 systemd[1]: sshd@7-10.0.0.17:22-10.0.0.1:34520.service: Deactivated successfully. Jul 7 06:02:01.430256 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 06:02:01.431329 systemd-logind[1584]: Session 8 logged out. Waiting for processes to exit. Jul 7 06:02:01.432936 systemd-logind[1584]: Removed session 8. 
Jul 7 06:02:01.873909 systemd[1]: run-netns-cni\x2de2f1e414\x2d0bbe\x2d6f7b\x2da048\x2d3fd47a32c57d.mount: Deactivated successfully. Jul 7 06:02:01.909257 kubelet[2717]: I0707 06:02:01.907103 2717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4a80658-2331-4f04-bd09-2ecfccb869ed-whisker-ca-bundle\") pod \"e4a80658-2331-4f04-bd09-2ecfccb869ed\" (UID: \"e4a80658-2331-4f04-bd09-2ecfccb869ed\") " Jul 7 06:02:01.909257 kubelet[2717]: I0707 06:02:01.907186 2717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e4a80658-2331-4f04-bd09-2ecfccb869ed-whisker-backend-key-pair\") pod \"e4a80658-2331-4f04-bd09-2ecfccb869ed\" (UID: \"e4a80658-2331-4f04-bd09-2ecfccb869ed\") " Jul 7 06:02:01.909257 kubelet[2717]: I0707 06:02:01.907203 2717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmwfc\" (UniqueName: \"kubernetes.io/projected/e4a80658-2331-4f04-bd09-2ecfccb869ed-kube-api-access-vmwfc\") pod \"e4a80658-2331-4f04-bd09-2ecfccb869ed\" (UID: \"e4a80658-2331-4f04-bd09-2ecfccb869ed\") " Jul 7 06:02:01.910367 kubelet[2717]: I0707 06:02:01.910345 2717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4a80658-2331-4f04-bd09-2ecfccb869ed-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e4a80658-2331-4f04-bd09-2ecfccb869ed" (UID: "e4a80658-2331-4f04-bd09-2ecfccb869ed"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 7 06:02:01.916094 kubelet[2717]: I0707 06:02:01.916070 2717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4a80658-2331-4f04-bd09-2ecfccb869ed-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e4a80658-2331-4f04-bd09-2ecfccb869ed" (UID: "e4a80658-2331-4f04-bd09-2ecfccb869ed"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 7 06:02:01.916250 systemd[1]: var-lib-kubelet-pods-e4a80658\x2d2331\x2d4f04\x2dbd09\x2d2ecfccb869ed-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvmwfc.mount: Deactivated successfully. Jul 7 06:02:01.916381 systemd[1]: var-lib-kubelet-pods-e4a80658\x2d2331\x2d4f04\x2dbd09\x2d2ecfccb869ed-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 7 06:02:01.920163 kubelet[2717]: I0707 06:02:01.919972 2717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4a80658-2331-4f04-bd09-2ecfccb869ed-kube-api-access-vmwfc" (OuterVolumeSpecName: "kube-api-access-vmwfc") pod "e4a80658-2331-4f04-bd09-2ecfccb869ed" (UID: "e4a80658-2331-4f04-bd09-2ecfccb869ed"). InnerVolumeSpecName "kube-api-access-vmwfc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 06:02:02.008330 kubelet[2717]: I0707 06:02:02.008228 2717 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e4a80658-2331-4f04-bd09-2ecfccb869ed-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 7 06:02:02.009061 kubelet[2717]: I0707 06:02:02.009015 2717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmwfc\" (UniqueName: \"kubernetes.io/projected/e4a80658-2331-4f04-bd09-2ecfccb869ed-kube-api-access-vmwfc\") on node \"localhost\" DevicePath \"\"" Jul 7 06:02:02.009721 containerd[1605]: time="2025-07-07T06:02:02.009474938Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1317027ccacdd278382f22095b0fd136395ad8272a7d5b826ed3d1b7c13cb4c9\" id:\"7293b057dd491a7d4a59ecf1da63e859e71b78889a3359cd928eba2399d1454d\" pid:4023 exit_status:1 exited_at:{seconds:1751868122 nanos:8548288}" Jul 7 06:02:02.013474 kubelet[2717]: I0707 06:02:02.013378 2717 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4a80658-2331-4f04-bd09-2ecfccb869ed-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 7 06:02:02.045735 kubelet[2717]: I0707 06:02:02.045545 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-69spl" podStartSLOduration=1.978441987 podStartE2EDuration="31.045523191s" podCreationTimestamp="2025-07-07 06:01:31 +0000 UTC" firstStartedPulling="2025-07-07 06:01:31.790051731 +0000 UTC m=+19.871299731" lastFinishedPulling="2025-07-07 06:02:00.857132935 +0000 UTC m=+48.938380935" observedRunningTime="2025-07-07 06:02:02.045301806 +0000 UTC m=+50.126549826" watchObservedRunningTime="2025-07-07 06:02:02.045523191 +0000 UTC m=+50.126771191" Jul 7 06:02:02.104520 systemd[1]: Removed slice kubepods-besteffort-pode4a80658_2331_4f04_bd09_2ecfccb869ed.slice - libcontainer 
container kubepods-besteffort-pode4a80658_2331_4f04_bd09_2ecfccb869ed.slice. Jul 7 06:02:02.895374 containerd[1605]: time="2025-07-07T06:02:02.895312723Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1317027ccacdd278382f22095b0fd136395ad8272a7d5b826ed3d1b7c13cb4c9\" id:\"f056ad2c034d807d280bf1dfa6acb0c4cf99a363e7ba5964d7d407ef73b68fbc\" pid:4048 exit_status:1 exited_at:{seconds:1751868122 nanos:894974579}" Jul 7 06:02:03.085960 kubelet[2717]: E0707 06:02:03.085869 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:02:03.086576 kubelet[2717]: E0707 06:02:03.086100 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:02:03.086654 containerd[1605]: time="2025-07-07T06:02:03.086591030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kgrgn,Uid:b1138488-8ab7-4741-b5a7-2cf298f8eec2,Namespace:kube-system,Attempt:0,}" Jul 7 06:02:03.087275 containerd[1605]: time="2025-07-07T06:02:03.087214661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-c9k89,Uid:efb8e106-18be-4ff9-b824-5919247e4c5f,Namespace:kube-system,Attempt:0,}" Jul 7 06:02:03.087450 containerd[1605]: time="2025-07-07T06:02:03.087228557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q2hxc,Uid:76bbbc29-f4c2-467c-acfb-9338c934762b,Namespace:calico-system,Attempt:0,}" Jul 7 06:02:03.187337 systemd[1]: Created slice kubepods-besteffort-pod83e35745_6665_4eae_aa28_c1501cc0e2bc.slice - libcontainer container kubepods-besteffort-pod83e35745_6665_4eae_aa28_c1501cc0e2bc.slice. 
Jul 7 06:02:03.322747 kubelet[2717]: I0707 06:02:03.322617 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/83e35745-6665-4eae-aa28-c1501cc0e2bc-whisker-backend-key-pair\") pod \"whisker-68db888878-2s6q7\" (UID: \"83e35745-6665-4eae-aa28-c1501cc0e2bc\") " pod="calico-system/whisker-68db888878-2s6q7" Jul 7 06:02:03.322747 kubelet[2717]: I0707 06:02:03.322758 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83e35745-6665-4eae-aa28-c1501cc0e2bc-whisker-ca-bundle\") pod \"whisker-68db888878-2s6q7\" (UID: \"83e35745-6665-4eae-aa28-c1501cc0e2bc\") " pod="calico-system/whisker-68db888878-2s6q7" Jul 7 06:02:03.323073 kubelet[2717]: I0707 06:02:03.322788 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knb95\" (UniqueName: \"kubernetes.io/projected/83e35745-6665-4eae-aa28-c1501cc0e2bc-kube-api-access-knb95\") pod \"whisker-68db888878-2s6q7\" (UID: \"83e35745-6665-4eae-aa28-c1501cc0e2bc\") " pod="calico-system/whisker-68db888878-2s6q7" Jul 7 06:02:03.495521 containerd[1605]: time="2025-07-07T06:02:03.495336199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68db888878-2s6q7,Uid:83e35745-6665-4eae-aa28-c1501cc0e2bc,Namespace:calico-system,Attempt:0,}" Jul 7 06:02:03.761629 systemd-networkd[1504]: calic0ad238aa15: Link UP Jul 7 06:02:03.761873 systemd-networkd[1504]: calic0ad238aa15: Gained carrier Jul 7 06:02:03.789659 containerd[1605]: 2025-07-07 06:02:03.228 [INFO][4078] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:02:03.789659 containerd[1605]: 2025-07-07 06:02:03.279 [INFO][4078] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--c9k89-eth0 
coredns-7c65d6cfc9- kube-system efb8e106-18be-4ff9-b824-5919247e4c5f 886 0 2025-07-07 06:01:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-c9k89 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic0ad238aa15 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0f4b15268586023c9c3c32ab07b300ac869e55ec9e904b5001712ae47b96381f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-c9k89" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--c9k89-" Jul 7 06:02:03.789659 containerd[1605]: 2025-07-07 06:02:03.281 [INFO][4078] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0f4b15268586023c9c3c32ab07b300ac869e55ec9e904b5001712ae47b96381f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-c9k89" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--c9k89-eth0" Jul 7 06:02:03.789659 containerd[1605]: 2025-07-07 06:02:03.487 [INFO][4119] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f4b15268586023c9c3c32ab07b300ac869e55ec9e904b5001712ae47b96381f" HandleID="k8s-pod-network.0f4b15268586023c9c3c32ab07b300ac869e55ec9e904b5001712ae47b96381f" Workload="localhost-k8s-coredns--7c65d6cfc9--c9k89-eth0" Jul 7 06:02:03.790122 containerd[1605]: 2025-07-07 06:02:03.487 [INFO][4119] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0f4b15268586023c9c3c32ab07b300ac869e55ec9e904b5001712ae47b96381f" HandleID="k8s-pod-network.0f4b15268586023c9c3c32ab07b300ac869e55ec9e904b5001712ae47b96381f" Workload="localhost-k8s-coredns--7c65d6cfc9--c9k89-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ea30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-c9k89", "timestamp":"2025-07-07 06:02:03.487530526 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:02:03.790122 containerd[1605]: 2025-07-07 06:02:03.487 [INFO][4119] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:02:03.790122 containerd[1605]: 2025-07-07 06:02:03.488 [INFO][4119] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:02:03.790122 containerd[1605]: 2025-07-07 06:02:03.488 [INFO][4119] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:02:03.790122 containerd[1605]: 2025-07-07 06:02:03.687 [INFO][4119] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0f4b15268586023c9c3c32ab07b300ac869e55ec9e904b5001712ae47b96381f" host="localhost" Jul 7 06:02:03.790122 containerd[1605]: 2025-07-07 06:02:03.707 [INFO][4119] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:02:03.790122 containerd[1605]: 2025-07-07 06:02:03.711 [INFO][4119] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:02:03.790122 containerd[1605]: 2025-07-07 06:02:03.713 [INFO][4119] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:02:03.790122 containerd[1605]: 2025-07-07 06:02:03.715 [INFO][4119] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:02:03.790122 containerd[1605]: 2025-07-07 06:02:03.715 [INFO][4119] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0f4b15268586023c9c3c32ab07b300ac869e55ec9e904b5001712ae47b96381f" host="localhost" Jul 7 06:02:03.790457 containerd[1605]: 2025-07-07 06:02:03.716 [INFO][4119] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0f4b15268586023c9c3c32ab07b300ac869e55ec9e904b5001712ae47b96381f Jul 7 06:02:03.790457 
containerd[1605]: 2025-07-07 06:02:03.733 [INFO][4119] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0f4b15268586023c9c3c32ab07b300ac869e55ec9e904b5001712ae47b96381f" host="localhost" Jul 7 06:02:03.790457 containerd[1605]: 2025-07-07 06:02:03.742 [INFO][4119] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.0f4b15268586023c9c3c32ab07b300ac869e55ec9e904b5001712ae47b96381f" host="localhost" Jul 7 06:02:03.790457 containerd[1605]: 2025-07-07 06:02:03.742 [INFO][4119] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.0f4b15268586023c9c3c32ab07b300ac869e55ec9e904b5001712ae47b96381f" host="localhost" Jul 7 06:02:03.790457 containerd[1605]: 2025-07-07 06:02:03.742 [INFO][4119] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:02:03.790457 containerd[1605]: 2025-07-07 06:02:03.742 [INFO][4119] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="0f4b15268586023c9c3c32ab07b300ac869e55ec9e904b5001712ae47b96381f" HandleID="k8s-pod-network.0f4b15268586023c9c3c32ab07b300ac869e55ec9e904b5001712ae47b96381f" Workload="localhost-k8s-coredns--7c65d6cfc9--c9k89-eth0" Jul 7 06:02:03.790650 containerd[1605]: 2025-07-07 06:02:03.747 [INFO][4078] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f4b15268586023c9c3c32ab07b300ac869e55ec9e904b5001712ae47b96381f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-c9k89" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--c9k89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--c9k89-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"efb8e106-18be-4ff9-b824-5919247e4c5f", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, 
time.July, 7, 6, 1, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-c9k89", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic0ad238aa15", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:02:03.790758 containerd[1605]: 2025-07-07 06:02:03.747 [INFO][4078] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="0f4b15268586023c9c3c32ab07b300ac869e55ec9e904b5001712ae47b96381f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-c9k89" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--c9k89-eth0" Jul 7 06:02:03.790758 containerd[1605]: 2025-07-07 06:02:03.747 [INFO][4078] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic0ad238aa15 ContainerID="0f4b15268586023c9c3c32ab07b300ac869e55ec9e904b5001712ae47b96381f" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-c9k89" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--c9k89-eth0" Jul 7 06:02:03.790758 containerd[1605]: 2025-07-07 06:02:03.764 [INFO][4078] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f4b15268586023c9c3c32ab07b300ac869e55ec9e904b5001712ae47b96381f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-c9k89" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--c9k89-eth0" Jul 7 06:02:03.790880 containerd[1605]: 2025-07-07 06:02:03.767 [INFO][4078] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0f4b15268586023c9c3c32ab07b300ac869e55ec9e904b5001712ae47b96381f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-c9k89" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--c9k89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--c9k89-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"efb8e106-18be-4ff9-b824-5919247e4c5f", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 1, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0f4b15268586023c9c3c32ab07b300ac869e55ec9e904b5001712ae47b96381f", Pod:"coredns-7c65d6cfc9-c9k89", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic0ad238aa15", MAC:"ee:5e:1d:78:6f:9c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:02:03.790880 containerd[1605]: 2025-07-07 06:02:03.784 [INFO][4078] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0f4b15268586023c9c3c32ab07b300ac869e55ec9e904b5001712ae47b96381f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-c9k89" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--c9k89-eth0" Jul 7 06:02:03.847708 systemd-networkd[1504]: cali2ecef49bd11: Link UP Jul 7 06:02:03.848850 systemd-networkd[1504]: cali2ecef49bd11: Gained carrier Jul 7 06:02:03.870230 containerd[1605]: 2025-07-07 06:02:03.221 [INFO][4073] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:02:03.870230 containerd[1605]: 2025-07-07 06:02:03.279 [INFO][4073] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--kgrgn-eth0 coredns-7c65d6cfc9- kube-system b1138488-8ab7-4741-b5a7-2cf298f8eec2 887 0 2025-07-07 06:01:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-kgrgn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2ecef49bd11 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="b0161cad6526f9cf1fd82260221502fee0af7a6193e1400fdfad00dd6150095b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kgrgn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kgrgn-" Jul 7 06:02:03.870230 containerd[1605]: 2025-07-07 06:02:03.281 [INFO][4073] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b0161cad6526f9cf1fd82260221502fee0af7a6193e1400fdfad00dd6150095b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kgrgn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kgrgn-eth0" Jul 7 06:02:03.870230 containerd[1605]: 2025-07-07 06:02:03.487 [INFO][4118] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0161cad6526f9cf1fd82260221502fee0af7a6193e1400fdfad00dd6150095b" HandleID="k8s-pod-network.b0161cad6526f9cf1fd82260221502fee0af7a6193e1400fdfad00dd6150095b" Workload="localhost-k8s-coredns--7c65d6cfc9--kgrgn-eth0" Jul 7 06:02:03.870230 containerd[1605]: 2025-07-07 06:02:03.487 [INFO][4118] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b0161cad6526f9cf1fd82260221502fee0af7a6193e1400fdfad00dd6150095b" HandleID="k8s-pod-network.b0161cad6526f9cf1fd82260221502fee0af7a6193e1400fdfad00dd6150095b" Workload="localhost-k8s-coredns--7c65d6cfc9--kgrgn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000399740), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-kgrgn", "timestamp":"2025-07-07 06:02:03.486982908 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:02:03.870230 containerd[1605]: 2025-07-07 06:02:03.487 [INFO][4118] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:02:03.870230 containerd[1605]: 2025-07-07 06:02:03.742 [INFO][4118] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:02:03.870230 containerd[1605]: 2025-07-07 06:02:03.743 [INFO][4118] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:02:03.870230 containerd[1605]: 2025-07-07 06:02:03.788 [INFO][4118] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b0161cad6526f9cf1fd82260221502fee0af7a6193e1400fdfad00dd6150095b" host="localhost" Jul 7 06:02:03.870230 containerd[1605]: 2025-07-07 06:02:03.809 [INFO][4118] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:02:03.870230 containerd[1605]: 2025-07-07 06:02:03.814 [INFO][4118] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:02:03.870230 containerd[1605]: 2025-07-07 06:02:03.816 [INFO][4118] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:02:03.870230 containerd[1605]: 2025-07-07 06:02:03.818 [INFO][4118] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:02:03.870230 containerd[1605]: 2025-07-07 06:02:03.819 [INFO][4118] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b0161cad6526f9cf1fd82260221502fee0af7a6193e1400fdfad00dd6150095b" host="localhost" Jul 7 06:02:03.870230 containerd[1605]: 2025-07-07 06:02:03.820 [INFO][4118] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b0161cad6526f9cf1fd82260221502fee0af7a6193e1400fdfad00dd6150095b Jul 7 06:02:03.870230 containerd[1605]: 2025-07-07 06:02:03.825 [INFO][4118] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b0161cad6526f9cf1fd82260221502fee0af7a6193e1400fdfad00dd6150095b" host="localhost" Jul 7 06:02:03.870230 containerd[1605]: 2025-07-07 06:02:03.838 [INFO][4118] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.b0161cad6526f9cf1fd82260221502fee0af7a6193e1400fdfad00dd6150095b" host="localhost" Jul 7 06:02:03.870230 containerd[1605]: 2025-07-07 06:02:03.838 [INFO][4118] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b0161cad6526f9cf1fd82260221502fee0af7a6193e1400fdfad00dd6150095b" host="localhost" Jul 7 06:02:03.870230 containerd[1605]: 2025-07-07 06:02:03.838 [INFO][4118] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:02:03.870230 containerd[1605]: 2025-07-07 06:02:03.838 [INFO][4118] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b0161cad6526f9cf1fd82260221502fee0af7a6193e1400fdfad00dd6150095b" HandleID="k8s-pod-network.b0161cad6526f9cf1fd82260221502fee0af7a6193e1400fdfad00dd6150095b" Workload="localhost-k8s-coredns--7c65d6cfc9--kgrgn-eth0" Jul 7 06:02:03.871040 containerd[1605]: 2025-07-07 06:02:03.843 [INFO][4073] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b0161cad6526f9cf1fd82260221502fee0af7a6193e1400fdfad00dd6150095b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kgrgn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kgrgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--kgrgn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b1138488-8ab7-4741-b5a7-2cf298f8eec2", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 1, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-kgrgn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ecef49bd11", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:02:03.871040 containerd[1605]: 2025-07-07 06:02:03.843 [INFO][4073] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b0161cad6526f9cf1fd82260221502fee0af7a6193e1400fdfad00dd6150095b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kgrgn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kgrgn-eth0" Jul 7 06:02:03.871040 containerd[1605]: 2025-07-07 06:02:03.843 [INFO][4073] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ecef49bd11 ContainerID="b0161cad6526f9cf1fd82260221502fee0af7a6193e1400fdfad00dd6150095b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kgrgn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kgrgn-eth0" Jul 7 06:02:03.871040 containerd[1605]: 2025-07-07 06:02:03.849 [INFO][4073] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0161cad6526f9cf1fd82260221502fee0af7a6193e1400fdfad00dd6150095b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kgrgn" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kgrgn-eth0" Jul 7 06:02:03.871040 containerd[1605]: 2025-07-07 06:02:03.849 [INFO][4073] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b0161cad6526f9cf1fd82260221502fee0af7a6193e1400fdfad00dd6150095b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kgrgn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kgrgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--kgrgn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b1138488-8ab7-4741-b5a7-2cf298f8eec2", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 1, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b0161cad6526f9cf1fd82260221502fee0af7a6193e1400fdfad00dd6150095b", Pod:"coredns-7c65d6cfc9-kgrgn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ecef49bd11", MAC:"56:dc:26:82:a3:1a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:02:03.871040 containerd[1605]: 2025-07-07 06:02:03.865 [INFO][4073] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b0161cad6526f9cf1fd82260221502fee0af7a6193e1400fdfad00dd6150095b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kgrgn" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kgrgn-eth0" Jul 7 06:02:03.932456 containerd[1605]: time="2025-07-07T06:02:03.932363418Z" level=info msg="connecting to shim 0f4b15268586023c9c3c32ab07b300ac869e55ec9e904b5001712ae47b96381f" address="unix:///run/containerd/s/9e889edbc3add61e7fb2f0d143df14d0509a5ccb6f83045a01fdbd8b548e5dbf" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:02:03.950687 containerd[1605]: time="2025-07-07T06:02:03.950195583Z" level=info msg="connecting to shim b0161cad6526f9cf1fd82260221502fee0af7a6193e1400fdfad00dd6150095b" address="unix:///run/containerd/s/d5ac991316da1ad9d4263588fbb2cb134eaf0f746e9c08641ea46899b498ab47" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:02:03.977587 systemd-networkd[1504]: calib62f4156d64: Link UP Jul 7 06:02:03.978196 systemd[1]: Started cri-containerd-0f4b15268586023c9c3c32ab07b300ac869e55ec9e904b5001712ae47b96381f.scope - libcontainer container 0f4b15268586023c9c3c32ab07b300ac869e55ec9e904b5001712ae47b96381f. Jul 7 06:02:03.979638 systemd-networkd[1504]: calib62f4156d64: Gained carrier Jul 7 06:02:04.016464 systemd[1]: Started cri-containerd-b0161cad6526f9cf1fd82260221502fee0af7a6193e1400fdfad00dd6150095b.scope - libcontainer container b0161cad6526f9cf1fd82260221502fee0af7a6193e1400fdfad00dd6150095b. 
Jul 7 06:02:04.047763 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:02:04.063111 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:02:04.088986 containerd[1605]: time="2025-07-07T06:02:04.088881972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-fh2n2,Uid:2274c4d9-26b1-4d81-aa7e-f0fcf833e24f,Namespace:calico-system,Attempt:0,}" Jul 7 06:02:04.094525 kubelet[2717]: I0707 06:02:04.093639 2717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4a80658-2331-4f04-bd09-2ecfccb869ed" path="/var/lib/kubelet/pods/e4a80658-2331-4f04-bd09-2ecfccb869ed/volumes" Jul 7 06:02:04.095998 containerd[1605]: time="2025-07-07T06:02:04.093448073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8b465d78-fb445,Uid:8a00075b-b572-4b57-8401-3b70beb6e654,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:02:04.095998 containerd[1605]: 2025-07-07 06:02:03.234 [INFO][4100] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:02:04.095998 containerd[1605]: 2025-07-07 06:02:03.280 [INFO][4100] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--q2hxc-eth0 csi-node-driver- calico-system 76bbbc29-f4c2-467c-acfb-9338c934762b 747 0 2025-07-07 06:01:31 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-q2hxc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib62f4156d64 [] [] }} 
ContainerID="df1876eefdbe5cfe454570ab418758d44a642192d99396a02d71b7e1f5e79bf5" Namespace="calico-system" Pod="csi-node-driver-q2hxc" WorkloadEndpoint="localhost-k8s-csi--node--driver--q2hxc-" Jul 7 06:02:04.095998 containerd[1605]: 2025-07-07 06:02:03.281 [INFO][4100] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df1876eefdbe5cfe454570ab418758d44a642192d99396a02d71b7e1f5e79bf5" Namespace="calico-system" Pod="csi-node-driver-q2hxc" WorkloadEndpoint="localhost-k8s-csi--node--driver--q2hxc-eth0" Jul 7 06:02:04.095998 containerd[1605]: 2025-07-07 06:02:03.486 [INFO][4116] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df1876eefdbe5cfe454570ab418758d44a642192d99396a02d71b7e1f5e79bf5" HandleID="k8s-pod-network.df1876eefdbe5cfe454570ab418758d44a642192d99396a02d71b7e1f5e79bf5" Workload="localhost-k8s-csi--node--driver--q2hxc-eth0" Jul 7 06:02:04.095998 containerd[1605]: 2025-07-07 06:02:03.487 [INFO][4116] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="df1876eefdbe5cfe454570ab418758d44a642192d99396a02d71b7e1f5e79bf5" HandleID="k8s-pod-network.df1876eefdbe5cfe454570ab418758d44a642192d99396a02d71b7e1f5e79bf5" Workload="localhost-k8s-csi--node--driver--q2hxc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000518f30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-q2hxc", "timestamp":"2025-07-07 06:02:03.486866059 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:02:04.095998 containerd[1605]: 2025-07-07 06:02:03.487 [INFO][4116] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:02:04.095998 containerd[1605]: 2025-07-07 06:02:03.838 [INFO][4116] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:02:04.095998 containerd[1605]: 2025-07-07 06:02:03.838 [INFO][4116] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:02:04.095998 containerd[1605]: 2025-07-07 06:02:03.890 [INFO][4116] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.df1876eefdbe5cfe454570ab418758d44a642192d99396a02d71b7e1f5e79bf5" host="localhost" Jul 7 06:02:04.095998 containerd[1605]: 2025-07-07 06:02:03.907 [INFO][4116] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:02:04.095998 containerd[1605]: 2025-07-07 06:02:03.918 [INFO][4116] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:02:04.095998 containerd[1605]: 2025-07-07 06:02:03.922 [INFO][4116] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:02:04.095998 containerd[1605]: 2025-07-07 06:02:03.927 [INFO][4116] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:02:04.095998 containerd[1605]: 2025-07-07 06:02:03.927 [INFO][4116] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.df1876eefdbe5cfe454570ab418758d44a642192d99396a02d71b7e1f5e79bf5" host="localhost" Jul 7 06:02:04.095998 containerd[1605]: 2025-07-07 06:02:03.930 [INFO][4116] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.df1876eefdbe5cfe454570ab418758d44a642192d99396a02d71b7e1f5e79bf5 Jul 7 06:02:04.095998 containerd[1605]: 2025-07-07 06:02:03.942 [INFO][4116] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.df1876eefdbe5cfe454570ab418758d44a642192d99396a02d71b7e1f5e79bf5" host="localhost" Jul 7 06:02:04.095998 containerd[1605]: 2025-07-07 06:02:03.957 [INFO][4116] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.df1876eefdbe5cfe454570ab418758d44a642192d99396a02d71b7e1f5e79bf5" host="localhost" Jul 7 06:02:04.095998 containerd[1605]: 2025-07-07 06:02:03.957 [INFO][4116] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.df1876eefdbe5cfe454570ab418758d44a642192d99396a02d71b7e1f5e79bf5" host="localhost" Jul 7 06:02:04.095998 containerd[1605]: 2025-07-07 06:02:03.957 [INFO][4116] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:02:04.095998 containerd[1605]: 2025-07-07 06:02:03.957 [INFO][4116] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="df1876eefdbe5cfe454570ab418758d44a642192d99396a02d71b7e1f5e79bf5" HandleID="k8s-pod-network.df1876eefdbe5cfe454570ab418758d44a642192d99396a02d71b7e1f5e79bf5" Workload="localhost-k8s-csi--node--driver--q2hxc-eth0" Jul 7 06:02:04.096705 containerd[1605]: 2025-07-07 06:02:03.971 [INFO][4100] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df1876eefdbe5cfe454570ab418758d44a642192d99396a02d71b7e1f5e79bf5" Namespace="calico-system" Pod="csi-node-driver-q2hxc" WorkloadEndpoint="localhost-k8s-csi--node--driver--q2hxc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--q2hxc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"76bbbc29-f4c2-467c-acfb-9338c934762b", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 1, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-q2hxc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib62f4156d64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:02:04.096705 containerd[1605]: 2025-07-07 06:02:03.972 [INFO][4100] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="df1876eefdbe5cfe454570ab418758d44a642192d99396a02d71b7e1f5e79bf5" Namespace="calico-system" Pod="csi-node-driver-q2hxc" WorkloadEndpoint="localhost-k8s-csi--node--driver--q2hxc-eth0" Jul 7 06:02:04.096705 containerd[1605]: 2025-07-07 06:02:03.972 [INFO][4100] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib62f4156d64 ContainerID="df1876eefdbe5cfe454570ab418758d44a642192d99396a02d71b7e1f5e79bf5" Namespace="calico-system" Pod="csi-node-driver-q2hxc" WorkloadEndpoint="localhost-k8s-csi--node--driver--q2hxc-eth0" Jul 7 06:02:04.096705 containerd[1605]: 2025-07-07 06:02:03.982 [INFO][4100] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df1876eefdbe5cfe454570ab418758d44a642192d99396a02d71b7e1f5e79bf5" Namespace="calico-system" Pod="csi-node-driver-q2hxc" WorkloadEndpoint="localhost-k8s-csi--node--driver--q2hxc-eth0" Jul 7 06:02:04.096705 containerd[1605]: 2025-07-07 06:02:03.983 [INFO][4100] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="df1876eefdbe5cfe454570ab418758d44a642192d99396a02d71b7e1f5e79bf5" 
Namespace="calico-system" Pod="csi-node-driver-q2hxc" WorkloadEndpoint="localhost-k8s-csi--node--driver--q2hxc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--q2hxc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"76bbbc29-f4c2-467c-acfb-9338c934762b", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 1, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df1876eefdbe5cfe454570ab418758d44a642192d99396a02d71b7e1f5e79bf5", Pod:"csi-node-driver-q2hxc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib62f4156d64", MAC:"c6:c4:ce:3d:57:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:02:04.096705 containerd[1605]: 2025-07-07 06:02:04.061 [INFO][4100] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="df1876eefdbe5cfe454570ab418758d44a642192d99396a02d71b7e1f5e79bf5" Namespace="calico-system" Pod="csi-node-driver-q2hxc" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--q2hxc-eth0" Jul 7 06:02:04.200799 containerd[1605]: time="2025-07-07T06:02:04.199823987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kgrgn,Uid:b1138488-8ab7-4741-b5a7-2cf298f8eec2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0161cad6526f9cf1fd82260221502fee0af7a6193e1400fdfad00dd6150095b\"" Jul 7 06:02:04.203107 containerd[1605]: time="2025-07-07T06:02:04.202938574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-c9k89,Uid:efb8e106-18be-4ff9-b824-5919247e4c5f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f4b15268586023c9c3c32ab07b300ac869e55ec9e904b5001712ae47b96381f\"" Jul 7 06:02:04.210004 kubelet[2717]: E0707 06:02:04.209942 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:02:04.210818 kubelet[2717]: E0707 06:02:04.210777 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:02:04.217311 containerd[1605]: time="2025-07-07T06:02:04.217074759Z" level=info msg="CreateContainer within sandbox \"b0161cad6526f9cf1fd82260221502fee0af7a6193e1400fdfad00dd6150095b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:02:04.218675 containerd[1605]: time="2025-07-07T06:02:04.218126813Z" level=info msg="CreateContainer within sandbox \"0f4b15268586023c9c3c32ab07b300ac869e55ec9e904b5001712ae47b96381f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:02:04.297778 systemd-networkd[1504]: cali83f6d27a189: Link UP Jul 7 06:02:04.300169 systemd-networkd[1504]: cali83f6d27a189: Gained carrier Jul 7 06:02:04.306701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1168175547.mount: Deactivated successfully. 
Jul 7 06:02:04.324780 containerd[1605]: time="2025-07-07T06:02:04.324716890Z" level=info msg="Container 176ef634c3f99e5f0f066bf5af62f8248c0eabaf88d033de8fd55b445e839fe2: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:02:04.326679 containerd[1605]: time="2025-07-07T06:02:04.326626945Z" level=info msg="Container 8a4faf6e455de7e67718f03d97c69c64ae283759d3b33d7c71d2ca99831f0250: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:02:04.351288 containerd[1605]: 2025-07-07 06:02:03.714 [INFO][4143] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:02:04.351288 containerd[1605]: 2025-07-07 06:02:03.738 [INFO][4143] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--68db888878--2s6q7-eth0 whisker-68db888878- calico-system 83e35745-6665-4eae-aa28-c1501cc0e2bc 1028 0 2025-07-07 06:02:03 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:68db888878 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-68db888878-2s6q7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali83f6d27a189 [] [] }} ContainerID="9a25be4653871730c387a95d650ffc951297a622e9cdee666322e036e41888d5" Namespace="calico-system" Pod="whisker-68db888878-2s6q7" WorkloadEndpoint="localhost-k8s-whisker--68db888878--2s6q7-" Jul 7 06:02:04.351288 containerd[1605]: 2025-07-07 06:02:03.738 [INFO][4143] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9a25be4653871730c387a95d650ffc951297a622e9cdee666322e036e41888d5" Namespace="calico-system" Pod="whisker-68db888878-2s6q7" WorkloadEndpoint="localhost-k8s-whisker--68db888878--2s6q7-eth0" Jul 7 06:02:04.351288 containerd[1605]: 2025-07-07 06:02:03.780 [INFO][4160] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="9a25be4653871730c387a95d650ffc951297a622e9cdee666322e036e41888d5" HandleID="k8s-pod-network.9a25be4653871730c387a95d650ffc951297a622e9cdee666322e036e41888d5" Workload="localhost-k8s-whisker--68db888878--2s6q7-eth0" Jul 7 06:02:04.351288 containerd[1605]: 2025-07-07 06:02:03.781 [INFO][4160] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9a25be4653871730c387a95d650ffc951297a622e9cdee666322e036e41888d5" HandleID="k8s-pod-network.9a25be4653871730c387a95d650ffc951297a622e9cdee666322e036e41888d5" Workload="localhost-k8s-whisker--68db888878--2s6q7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fc20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-68db888878-2s6q7", "timestamp":"2025-07-07 06:02:03.780817038 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:02:04.351288 containerd[1605]: 2025-07-07 06:02:03.781 [INFO][4160] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:02:04.351288 containerd[1605]: 2025-07-07 06:02:03.957 [INFO][4160] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:02:04.351288 containerd[1605]: 2025-07-07 06:02:03.958 [INFO][4160] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:02:04.351288 containerd[1605]: 2025-07-07 06:02:04.000 [INFO][4160] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9a25be4653871730c387a95d650ffc951297a622e9cdee666322e036e41888d5" host="localhost" Jul 7 06:02:04.351288 containerd[1605]: 2025-07-07 06:02:04.058 [INFO][4160] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:02:04.351288 containerd[1605]: 2025-07-07 06:02:04.095 [INFO][4160] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:02:04.351288 containerd[1605]: 2025-07-07 06:02:04.116 [INFO][4160] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:02:04.351288 containerd[1605]: 2025-07-07 06:02:04.144 [INFO][4160] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:02:04.351288 containerd[1605]: 2025-07-07 06:02:04.144 [INFO][4160] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9a25be4653871730c387a95d650ffc951297a622e9cdee666322e036e41888d5" host="localhost" Jul 7 06:02:04.351288 containerd[1605]: 2025-07-07 06:02:04.164 [INFO][4160] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9a25be4653871730c387a95d650ffc951297a622e9cdee666322e036e41888d5 Jul 7 06:02:04.351288 containerd[1605]: 2025-07-07 06:02:04.191 [INFO][4160] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9a25be4653871730c387a95d650ffc951297a622e9cdee666322e036e41888d5" host="localhost" Jul 7 06:02:04.351288 containerd[1605]: 2025-07-07 06:02:04.229 [INFO][4160] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.9a25be4653871730c387a95d650ffc951297a622e9cdee666322e036e41888d5" host="localhost" Jul 7 06:02:04.351288 containerd[1605]: 2025-07-07 06:02:04.236 [INFO][4160] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.9a25be4653871730c387a95d650ffc951297a622e9cdee666322e036e41888d5" host="localhost" Jul 7 06:02:04.351288 containerd[1605]: 2025-07-07 06:02:04.238 [INFO][4160] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:02:04.351288 containerd[1605]: 2025-07-07 06:02:04.239 [INFO][4160] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="9a25be4653871730c387a95d650ffc951297a622e9cdee666322e036e41888d5" HandleID="k8s-pod-network.9a25be4653871730c387a95d650ffc951297a622e9cdee666322e036e41888d5" Workload="localhost-k8s-whisker--68db888878--2s6q7-eth0" Jul 7 06:02:04.352198 containerd[1605]: 2025-07-07 06:02:04.269 [INFO][4143] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a25be4653871730c387a95d650ffc951297a622e9cdee666322e036e41888d5" Namespace="calico-system" Pod="whisker-68db888878-2s6q7" WorkloadEndpoint="localhost-k8s-whisker--68db888878--2s6q7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--68db888878--2s6q7-eth0", GenerateName:"whisker-68db888878-", Namespace:"calico-system", SelfLink:"", UID:"83e35745-6665-4eae-aa28-c1501cc0e2bc", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 2, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"68db888878", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-68db888878-2s6q7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali83f6d27a189", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:02:04.352198 containerd[1605]: 2025-07-07 06:02:04.273 [INFO][4143] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="9a25be4653871730c387a95d650ffc951297a622e9cdee666322e036e41888d5" Namespace="calico-system" Pod="whisker-68db888878-2s6q7" WorkloadEndpoint="localhost-k8s-whisker--68db888878--2s6q7-eth0" Jul 7 06:02:04.352198 containerd[1605]: 2025-07-07 06:02:04.273 [INFO][4143] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali83f6d27a189 ContainerID="9a25be4653871730c387a95d650ffc951297a622e9cdee666322e036e41888d5" Namespace="calico-system" Pod="whisker-68db888878-2s6q7" WorkloadEndpoint="localhost-k8s-whisker--68db888878--2s6q7-eth0" Jul 7 06:02:04.352198 containerd[1605]: 2025-07-07 06:02:04.322 [INFO][4143] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a25be4653871730c387a95d650ffc951297a622e9cdee666322e036e41888d5" Namespace="calico-system" Pod="whisker-68db888878-2s6q7" WorkloadEndpoint="localhost-k8s-whisker--68db888878--2s6q7-eth0" Jul 7 06:02:04.352198 containerd[1605]: 2025-07-07 06:02:04.325 [INFO][4143] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9a25be4653871730c387a95d650ffc951297a622e9cdee666322e036e41888d5" Namespace="calico-system" Pod="whisker-68db888878-2s6q7" 
WorkloadEndpoint="localhost-k8s-whisker--68db888878--2s6q7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--68db888878--2s6q7-eth0", GenerateName:"whisker-68db888878-", Namespace:"calico-system", SelfLink:"", UID:"83e35745-6665-4eae-aa28-c1501cc0e2bc", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 2, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"68db888878", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9a25be4653871730c387a95d650ffc951297a622e9cdee666322e036e41888d5", Pod:"whisker-68db888878-2s6q7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali83f6d27a189", MAC:"be:3e:8d:92:4e:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:02:04.352198 containerd[1605]: 2025-07-07 06:02:04.344 [INFO][4143] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9a25be4653871730c387a95d650ffc951297a622e9cdee666322e036e41888d5" Namespace="calico-system" Pod="whisker-68db888878-2s6q7" WorkloadEndpoint="localhost-k8s-whisker--68db888878--2s6q7-eth0" Jul 7 06:02:04.365834 containerd[1605]: time="2025-07-07T06:02:04.360894898Z" level=info msg="CreateContainer within sandbox 
\"0f4b15268586023c9c3c32ab07b300ac869e55ec9e904b5001712ae47b96381f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8a4faf6e455de7e67718f03d97c69c64ae283759d3b33d7c71d2ca99831f0250\"" Jul 7 06:02:04.365834 containerd[1605]: time="2025-07-07T06:02:04.363372338Z" level=info msg="connecting to shim df1876eefdbe5cfe454570ab418758d44a642192d99396a02d71b7e1f5e79bf5" address="unix:///run/containerd/s/ac3061e94d6b3fbf8383c5aece3fdb46ac6d80e47683427207adff5eee1344d0" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:02:04.369246 containerd[1605]: time="2025-07-07T06:02:04.369190840Z" level=info msg="StartContainer for \"8a4faf6e455de7e67718f03d97c69c64ae283759d3b33d7c71d2ca99831f0250\"" Jul 7 06:02:04.370489 containerd[1605]: time="2025-07-07T06:02:04.370138108Z" level=info msg="connecting to shim 8a4faf6e455de7e67718f03d97c69c64ae283759d3b33d7c71d2ca99831f0250" address="unix:///run/containerd/s/9e889edbc3add61e7fb2f0d143df14d0509a5ccb6f83045a01fdbd8b548e5dbf" protocol=ttrpc version=3 Jul 7 06:02:04.373192 containerd[1605]: time="2025-07-07T06:02:04.373149761Z" level=info msg="CreateContainer within sandbox \"b0161cad6526f9cf1fd82260221502fee0af7a6193e1400fdfad00dd6150095b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"176ef634c3f99e5f0f066bf5af62f8248c0eabaf88d033de8fd55b445e839fe2\"" Jul 7 06:02:04.375990 containerd[1605]: time="2025-07-07T06:02:04.375939578Z" level=info msg="StartContainer for \"176ef634c3f99e5f0f066bf5af62f8248c0eabaf88d033de8fd55b445e839fe2\"" Jul 7 06:02:04.387014 containerd[1605]: time="2025-07-07T06:02:04.386488239Z" level=info msg="connecting to shim 176ef634c3f99e5f0f066bf5af62f8248c0eabaf88d033de8fd55b445e839fe2" address="unix:///run/containerd/s/d5ac991316da1ad9d4263588fbb2cb134eaf0f746e9c08641ea46899b498ab47" protocol=ttrpc version=3 Jul 7 06:02:04.462282 systemd[1]: Started cri-containerd-df1876eefdbe5cfe454570ab418758d44a642192d99396a02d71b7e1f5e79bf5.scope - libcontainer container 
df1876eefdbe5cfe454570ab418758d44a642192d99396a02d71b7e1f5e79bf5. Jul 7 06:02:04.481261 systemd[1]: Started cri-containerd-8a4faf6e455de7e67718f03d97c69c64ae283759d3b33d7c71d2ca99831f0250.scope - libcontainer container 8a4faf6e455de7e67718f03d97c69c64ae283759d3b33d7c71d2ca99831f0250. Jul 7 06:02:04.514875 systemd[1]: Started cri-containerd-176ef634c3f99e5f0f066bf5af62f8248c0eabaf88d033de8fd55b445e839fe2.scope - libcontainer container 176ef634c3f99e5f0f066bf5af62f8248c0eabaf88d033de8fd55b445e839fe2. Jul 7 06:02:04.518700 containerd[1605]: time="2025-07-07T06:02:04.518647721Z" level=info msg="connecting to shim 9a25be4653871730c387a95d650ffc951297a622e9cdee666322e036e41888d5" address="unix:///run/containerd/s/689e473565d07175aed119be25e4d6607f8fcc1ea45f1dcd5d38307d4065d212" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:02:04.554387 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:02:04.568273 systemd[1]: Started cri-containerd-9a25be4653871730c387a95d650ffc951297a622e9cdee666322e036e41888d5.scope - libcontainer container 9a25be4653871730c387a95d650ffc951297a622e9cdee666322e036e41888d5. 
Jul 7 06:02:04.618106 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:02:04.892277 systemd-networkd[1504]: calic0ad238aa15: Gained IPv6LL Jul 7 06:02:04.945964 systemd-networkd[1504]: cali593f3d795ee: Link UP Jul 7 06:02:04.946744 systemd-networkd[1504]: cali593f3d795ee: Gained carrier Jul 7 06:02:05.073032 containerd[1605]: time="2025-07-07T06:02:05.072976776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q2hxc,Uid:76bbbc29-f4c2-467c-acfb-9338c934762b,Namespace:calico-system,Attempt:0,} returns sandbox id \"df1876eefdbe5cfe454570ab418758d44a642192d99396a02d71b7e1f5e79bf5\"" Jul 7 06:02:05.073392 containerd[1605]: time="2025-07-07T06:02:05.073321844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68db888878-2s6q7,Uid:83e35745-6665-4eae-aa28-c1501cc0e2bc,Namespace:calico-system,Attempt:0,} returns sandbox id \"9a25be4653871730c387a95d650ffc951297a622e9cdee666322e036e41888d5\"" Jul 7 06:02:05.074545 containerd[1605]: time="2025-07-07T06:02:05.074515494Z" level=info msg="StartContainer for \"176ef634c3f99e5f0f066bf5af62f8248c0eabaf88d033de8fd55b445e839fe2\" returns successfully" Jul 7 06:02:05.075222 containerd[1605]: time="2025-07-07T06:02:05.075018298Z" level=info msg="StartContainer for \"8a4faf6e455de7e67718f03d97c69c64ae283759d3b33d7c71d2ca99831f0250\" returns successfully" Jul 7 06:02:05.076704 containerd[1605]: time="2025-07-07T06:02:05.076675779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 06:02:05.083744 kubelet[2717]: E0707 06:02:05.083644 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:02:05.247434 containerd[1605]: 2025-07-07 06:02:04.254 [INFO][4364] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:02:05.247434 containerd[1605]: 
2025-07-07 06:02:04.322 [INFO][4364] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--fh2n2-eth0 goldmane-58fd7646b9- calico-system 2274c4d9-26b1-4d81-aa7e-f0fcf833e24f 888 0 2025-07-07 06:01:31 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-fh2n2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali593f3d795ee [] [] }} ContainerID="d5f3fc5dd6919c2a093d67e9f675f0f7bee3ee36b4a2e4edcaaed640151d96b1" Namespace="calico-system" Pod="goldmane-58fd7646b9-fh2n2" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--fh2n2-" Jul 7 06:02:05.247434 containerd[1605]: 2025-07-07 06:02:04.322 [INFO][4364] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d5f3fc5dd6919c2a093d67e9f675f0f7bee3ee36b4a2e4edcaaed640151d96b1" Namespace="calico-system" Pod="goldmane-58fd7646b9-fh2n2" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--fh2n2-eth0" Jul 7 06:02:05.247434 containerd[1605]: 2025-07-07 06:02:04.427 [INFO][4421] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d5f3fc5dd6919c2a093d67e9f675f0f7bee3ee36b4a2e4edcaaed640151d96b1" HandleID="k8s-pod-network.d5f3fc5dd6919c2a093d67e9f675f0f7bee3ee36b4a2e4edcaaed640151d96b1" Workload="localhost-k8s-goldmane--58fd7646b9--fh2n2-eth0" Jul 7 06:02:05.247434 containerd[1605]: 2025-07-07 06:02:04.428 [INFO][4421] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d5f3fc5dd6919c2a093d67e9f675f0f7bee3ee36b4a2e4edcaaed640151d96b1" HandleID="k8s-pod-network.d5f3fc5dd6919c2a093d67e9f675f0f7bee3ee36b4a2e4edcaaed640151d96b1" Workload="localhost-k8s-goldmane--58fd7646b9--fh2n2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000188920), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-fh2n2", "timestamp":"2025-07-07 06:02:04.427789795 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:02:05.247434 containerd[1605]: 2025-07-07 06:02:04.428 [INFO][4421] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:02:05.247434 containerd[1605]: 2025-07-07 06:02:04.434 [INFO][4421] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:02:05.247434 containerd[1605]: 2025-07-07 06:02:04.435 [INFO][4421] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:02:05.247434 containerd[1605]: 2025-07-07 06:02:04.465 [INFO][4421] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d5f3fc5dd6919c2a093d67e9f675f0f7bee3ee36b4a2e4edcaaed640151d96b1" host="localhost" Jul 7 06:02:05.247434 containerd[1605]: 2025-07-07 06:02:04.495 [INFO][4421] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:02:05.247434 containerd[1605]: 2025-07-07 06:02:04.518 [INFO][4421] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:02:05.247434 containerd[1605]: 2025-07-07 06:02:04.527 [INFO][4421] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:02:05.247434 containerd[1605]: 2025-07-07 06:02:04.534 [INFO][4421] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:02:05.247434 containerd[1605]: 2025-07-07 06:02:04.534 [INFO][4421] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d5f3fc5dd6919c2a093d67e9f675f0f7bee3ee36b4a2e4edcaaed640151d96b1" host="localhost" Jul 7 06:02:05.247434 
containerd[1605]: 2025-07-07 06:02:04.539 [INFO][4421] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d5f3fc5dd6919c2a093d67e9f675f0f7bee3ee36b4a2e4edcaaed640151d96b1 Jul 7 06:02:05.247434 containerd[1605]: 2025-07-07 06:02:04.594 [INFO][4421] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d5f3fc5dd6919c2a093d67e9f675f0f7bee3ee36b4a2e4edcaaed640151d96b1" host="localhost" Jul 7 06:02:05.247434 containerd[1605]: 2025-07-07 06:02:04.937 [INFO][4421] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.d5f3fc5dd6919c2a093d67e9f675f0f7bee3ee36b4a2e4edcaaed640151d96b1" host="localhost" Jul 7 06:02:05.247434 containerd[1605]: 2025-07-07 06:02:04.937 [INFO][4421] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d5f3fc5dd6919c2a093d67e9f675f0f7bee3ee36b4a2e4edcaaed640151d96b1" host="localhost" Jul 7 06:02:05.247434 containerd[1605]: 2025-07-07 06:02:04.937 [INFO][4421] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:02:05.247434 containerd[1605]: 2025-07-07 06:02:04.937 [INFO][4421] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d5f3fc5dd6919c2a093d67e9f675f0f7bee3ee36b4a2e4edcaaed640151d96b1" HandleID="k8s-pod-network.d5f3fc5dd6919c2a093d67e9f675f0f7bee3ee36b4a2e4edcaaed640151d96b1" Workload="localhost-k8s-goldmane--58fd7646b9--fh2n2-eth0" Jul 7 06:02:05.250738 containerd[1605]: 2025-07-07 06:02:04.943 [INFO][4364] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d5f3fc5dd6919c2a093d67e9f675f0f7bee3ee36b4a2e4edcaaed640151d96b1" Namespace="calico-system" Pod="goldmane-58fd7646b9-fh2n2" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--fh2n2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--fh2n2-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"2274c4d9-26b1-4d81-aa7e-f0fcf833e24f", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 1, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-fh2n2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali593f3d795ee", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:02:05.250738 containerd[1605]: 2025-07-07 06:02:04.943 [INFO][4364] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d5f3fc5dd6919c2a093d67e9f675f0f7bee3ee36b4a2e4edcaaed640151d96b1" Namespace="calico-system" Pod="goldmane-58fd7646b9-fh2n2" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--fh2n2-eth0" Jul 7 06:02:05.250738 containerd[1605]: 2025-07-07 06:02:04.943 [INFO][4364] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali593f3d795ee ContainerID="d5f3fc5dd6919c2a093d67e9f675f0f7bee3ee36b4a2e4edcaaed640151d96b1" Namespace="calico-system" Pod="goldmane-58fd7646b9-fh2n2" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--fh2n2-eth0" Jul 7 06:02:05.250738 containerd[1605]: 2025-07-07 06:02:04.946 [INFO][4364] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d5f3fc5dd6919c2a093d67e9f675f0f7bee3ee36b4a2e4edcaaed640151d96b1" Namespace="calico-system" Pod="goldmane-58fd7646b9-fh2n2" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--fh2n2-eth0" Jul 7 06:02:05.250738 containerd[1605]: 2025-07-07 06:02:04.947 [INFO][4364] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d5f3fc5dd6919c2a093d67e9f675f0f7bee3ee36b4a2e4edcaaed640151d96b1" Namespace="calico-system" Pod="goldmane-58fd7646b9-fh2n2" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--fh2n2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--fh2n2-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"2274c4d9-26b1-4d81-aa7e-f0fcf833e24f", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 1, 31, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d5f3fc5dd6919c2a093d67e9f675f0f7bee3ee36b4a2e4edcaaed640151d96b1", Pod:"goldmane-58fd7646b9-fh2n2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali593f3d795ee", MAC:"3a:6f:04:b4:88:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:02:05.250738 containerd[1605]: 2025-07-07 06:02:05.238 [INFO][4364] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d5f3fc5dd6919c2a093d67e9f675f0f7bee3ee36b4a2e4edcaaed640151d96b1" Namespace="calico-system" Pod="goldmane-58fd7646b9-fh2n2" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--fh2n2-eth0" Jul 7 06:02:05.276185 systemd-networkd[1504]: calib62f4156d64: Gained IPv6LL Jul 7 06:02:05.468157 systemd-networkd[1504]: cali83f6d27a189: Gained IPv6LL Jul 7 06:02:05.916096 systemd-networkd[1504]: cali2ecef49bd11: Gained IPv6LL Jul 7 06:02:06.087902 kubelet[2717]: E0707 06:02:06.087861 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:02:06.089006 kubelet[2717]: E0707 06:02:06.088987 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:02:06.196886 systemd-networkd[1504]: vxlan.calico: Link UP Jul 7 06:02:06.196895 systemd-networkd[1504]: vxlan.calico: Gained carrier Jul 7 06:02:06.399536 systemd-networkd[1504]: califb41991d068: Link UP Jul 7 06:02:06.400268 systemd-networkd[1504]: califb41991d068: Gained carrier Jul 7 06:02:06.432250 systemd[1]: Started sshd@8-10.0.0.17:22-10.0.0.1:34524.service - OpenSSH per-connection server daemon (10.0.0.1:34524). Jul 7 06:02:06.557078 systemd-networkd[1504]: cali593f3d795ee: Gained IPv6LL Jul 7 06:02:06.649625 kubelet[2717]: I0707 06:02:06.649536 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-c9k89" podStartSLOduration=49.649510622 podStartE2EDuration="49.649510622s" podCreationTimestamp="2025-07-07 06:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:02:05.467046629 +0000 UTC m=+53.548294629" watchObservedRunningTime="2025-07-07 06:02:06.649510622 +0000 UTC m=+54.730758622" Jul 7 06:02:06.704001 kubelet[2717]: I0707 06:02:06.703536 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-kgrgn" podStartSLOduration=49.703509541 podStartE2EDuration="49.703509541s" podCreationTimestamp="2025-07-07 06:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:02:06.650484049 +0000 UTC m=+54.731732049" watchObservedRunningTime="2025-07-07 06:02:06.703509541 +0000 UTC m=+54.784757541" Jul 7 06:02:06.709437 containerd[1605]: 2025-07-07 06:02:04.223 [INFO][4383] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:02:06.709437 containerd[1605]: 2025-07-07 06:02:04.248 [INFO][4383] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--f8b465d78--fb445-eth0 calico-apiserver-f8b465d78- calico-apiserver 8a00075b-b572-4b57-8401-3b70beb6e654 878 0 2025-07-07 06:01:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f8b465d78 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-f8b465d78-fb445 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califb41991d068 [] [] }} ContainerID="20a9c164586bd81347f383bc8f34414b9f5a4ffc7542cd9d72a8a6f2424bb93c" Namespace="calico-apiserver" Pod="calico-apiserver-f8b465d78-fb445" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8b465d78--fb445-" Jul 7 06:02:06.709437 containerd[1605]: 2025-07-07 06:02:04.248 [INFO][4383] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="20a9c164586bd81347f383bc8f34414b9f5a4ffc7542cd9d72a8a6f2424bb93c" Namespace="calico-apiserver" Pod="calico-apiserver-f8b465d78-fb445" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8b465d78--fb445-eth0" Jul 7 06:02:06.709437 containerd[1605]: 2025-07-07 06:02:04.437 [INFO][4407] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="20a9c164586bd81347f383bc8f34414b9f5a4ffc7542cd9d72a8a6f2424bb93c" HandleID="k8s-pod-network.20a9c164586bd81347f383bc8f34414b9f5a4ffc7542cd9d72a8a6f2424bb93c" Workload="localhost-k8s-calico--apiserver--f8b465d78--fb445-eth0" Jul 7 06:02:06.709437 containerd[1605]: 2025-07-07 06:02:04.438 [INFO][4407] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="20a9c164586bd81347f383bc8f34414b9f5a4ffc7542cd9d72a8a6f2424bb93c" HandleID="k8s-pod-network.20a9c164586bd81347f383bc8f34414b9f5a4ffc7542cd9d72a8a6f2424bb93c" Workload="localhost-k8s-calico--apiserver--f8b465d78--fb445-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc000119bc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-f8b465d78-fb445", "timestamp":"2025-07-07 06:02:04.437659671 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:02:06.709437 containerd[1605]: 2025-07-07 06:02:04.439 [INFO][4407] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:02:06.709437 containerd[1605]: 2025-07-07 06:02:04.937 [INFO][4407] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:02:06.709437 containerd[1605]: 2025-07-07 06:02:04.937 [INFO][4407] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:02:06.709437 containerd[1605]: 2025-07-07 06:02:05.245 [INFO][4407] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.20a9c164586bd81347f383bc8f34414b9f5a4ffc7542cd9d72a8a6f2424bb93c" host="localhost" Jul 7 06:02:06.709437 containerd[1605]: 2025-07-07 06:02:05.455 [INFO][4407] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:02:06.709437 containerd[1605]: 2025-07-07 06:02:05.716 [INFO][4407] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:02:06.709437 containerd[1605]: 2025-07-07 06:02:05.718 [INFO][4407] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:02:06.709437 containerd[1605]: 2025-07-07 06:02:05.723 [INFO][4407] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:02:06.709437 containerd[1605]: 2025-07-07 06:02:05.723 [INFO][4407] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.20a9c164586bd81347f383bc8f34414b9f5a4ffc7542cd9d72a8a6f2424bb93c" 
host="localhost" Jul 7 06:02:06.709437 containerd[1605]: 2025-07-07 06:02:05.741 [INFO][4407] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.20a9c164586bd81347f383bc8f34414b9f5a4ffc7542cd9d72a8a6f2424bb93c Jul 7 06:02:06.709437 containerd[1605]: 2025-07-07 06:02:06.112 [INFO][4407] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.20a9c164586bd81347f383bc8f34414b9f5a4ffc7542cd9d72a8a6f2424bb93c" host="localhost" Jul 7 06:02:06.709437 containerd[1605]: 2025-07-07 06:02:06.392 [INFO][4407] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.20a9c164586bd81347f383bc8f34414b9f5a4ffc7542cd9d72a8a6f2424bb93c" host="localhost" Jul 7 06:02:06.709437 containerd[1605]: 2025-07-07 06:02:06.392 [INFO][4407] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.20a9c164586bd81347f383bc8f34414b9f5a4ffc7542cd9d72a8a6f2424bb93c" host="localhost" Jul 7 06:02:06.709437 containerd[1605]: 2025-07-07 06:02:06.392 [INFO][4407] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:02:06.709437 containerd[1605]: 2025-07-07 06:02:06.392 [INFO][4407] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="20a9c164586bd81347f383bc8f34414b9f5a4ffc7542cd9d72a8a6f2424bb93c" HandleID="k8s-pod-network.20a9c164586bd81347f383bc8f34414b9f5a4ffc7542cd9d72a8a6f2424bb93c" Workload="localhost-k8s-calico--apiserver--f8b465d78--fb445-eth0" Jul 7 06:02:06.711044 containerd[1605]: 2025-07-07 06:02:06.396 [INFO][4383] cni-plugin/k8s.go 418: Populated endpoint ContainerID="20a9c164586bd81347f383bc8f34414b9f5a4ffc7542cd9d72a8a6f2424bb93c" Namespace="calico-apiserver" Pod="calico-apiserver-f8b465d78-fb445" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8b465d78--fb445-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f8b465d78--fb445-eth0", GenerateName:"calico-apiserver-f8b465d78-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a00075b-b572-4b57-8401-3b70beb6e654", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 1, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f8b465d78", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-f8b465d78-fb445", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb41991d068", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:02:06.711044 containerd[1605]: 2025-07-07 06:02:06.396 [INFO][4383] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="20a9c164586bd81347f383bc8f34414b9f5a4ffc7542cd9d72a8a6f2424bb93c" Namespace="calico-apiserver" Pod="calico-apiserver-f8b465d78-fb445" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8b465d78--fb445-eth0" Jul 7 06:02:06.711044 containerd[1605]: 2025-07-07 06:02:06.396 [INFO][4383] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb41991d068 ContainerID="20a9c164586bd81347f383bc8f34414b9f5a4ffc7542cd9d72a8a6f2424bb93c" Namespace="calico-apiserver" Pod="calico-apiserver-f8b465d78-fb445" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8b465d78--fb445-eth0" Jul 7 06:02:06.711044 containerd[1605]: 2025-07-07 06:02:06.401 [INFO][4383] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="20a9c164586bd81347f383bc8f34414b9f5a4ffc7542cd9d72a8a6f2424bb93c" Namespace="calico-apiserver" Pod="calico-apiserver-f8b465d78-fb445" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8b465d78--fb445-eth0" Jul 7 06:02:06.711044 containerd[1605]: 2025-07-07 06:02:06.401 [INFO][4383] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="20a9c164586bd81347f383bc8f34414b9f5a4ffc7542cd9d72a8a6f2424bb93c" Namespace="calico-apiserver" Pod="calico-apiserver-f8b465d78-fb445" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8b465d78--fb445-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f8b465d78--fb445-eth0", GenerateName:"calico-apiserver-f8b465d78-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"8a00075b-b572-4b57-8401-3b70beb6e654", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 1, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f8b465d78", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"20a9c164586bd81347f383bc8f34414b9f5a4ffc7542cd9d72a8a6f2424bb93c", Pod:"calico-apiserver-f8b465d78-fb445", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb41991d068", MAC:"6e:04:ab:09:57:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:02:06.711044 containerd[1605]: 2025-07-07 06:02:06.704 [INFO][4383] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="20a9c164586bd81347f383bc8f34414b9f5a4ffc7542cd9d72a8a6f2424bb93c" Namespace="calico-apiserver" Pod="calico-apiserver-f8b465d78-fb445" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8b465d78--fb445-eth0" Jul 7 06:02:06.711230 sshd[4678]: Accepted publickey for core from 10.0.0.1 port 34524 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0 Jul 7 06:02:06.714589 sshd-session[4678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:02:06.728268 systemd-logind[1584]: New 
session 9 of user core. Jul 7 06:02:06.738092 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 06:02:06.935966 sshd[4725]: Connection closed by 10.0.0.1 port 34524 Jul 7 06:02:06.937249 sshd-session[4678]: pam_unix(sshd:session): session closed for user core Jul 7 06:02:06.942247 systemd[1]: sshd@8-10.0.0.17:22-10.0.0.1:34524.service: Deactivated successfully. Jul 7 06:02:06.946150 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 06:02:06.949473 systemd-logind[1584]: Session 9 logged out. Waiting for processes to exit. Jul 7 06:02:06.951878 systemd-logind[1584]: Removed session 9. Jul 7 06:02:07.090691 kubelet[2717]: E0707 06:02:07.090397 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:02:07.344979 containerd[1605]: time="2025-07-07T06:02:07.344907793Z" level=info msg="connecting to shim d5f3fc5dd6919c2a093d67e9f675f0f7bee3ee36b4a2e4edcaaed640151d96b1" address="unix:///run/containerd/s/08890097243be00770b0395b4fdb6293d5ea657fd541f9e25b4bb8f08e876b1c" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:02:07.372132 systemd[1]: Started cri-containerd-d5f3fc5dd6919c2a093d67e9f675f0f7bee3ee36b4a2e4edcaaed640151d96b1.scope - libcontainer container d5f3fc5dd6919c2a093d67e9f675f0f7bee3ee36b4a2e4edcaaed640151d96b1. 
Jul 7 06:02:07.389223 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:02:07.580171 systemd-networkd[1504]: califb41991d068: Gained IPv6LL Jul 7 06:02:07.886389 containerd[1605]: time="2025-07-07T06:02:07.886324857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-fh2n2,Uid:2274c4d9-26b1-4d81-aa7e-f0fcf833e24f,Namespace:calico-system,Attempt:0,} returns sandbox id \"d5f3fc5dd6919c2a093d67e9f675f0f7bee3ee36b4a2e4edcaaed640151d96b1\"" Jul 7 06:02:07.964226 systemd-networkd[1504]: vxlan.calico: Gained IPv6LL Jul 7 06:02:08.092280 kubelet[2717]: E0707 06:02:08.092241 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:02:08.276227 containerd[1605]: time="2025-07-07T06:02:08.276137209Z" level=info msg="connecting to shim 20a9c164586bd81347f383bc8f34414b9f5a4ffc7542cd9d72a8a6f2424bb93c" address="unix:///run/containerd/s/2e1db6319283b2a054a736ac81ab2e7795d507f3b8c357e542f1d62e3f4c14eb" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:02:08.318392 systemd[1]: Started cri-containerd-20a9c164586bd81347f383bc8f34414b9f5a4ffc7542cd9d72a8a6f2424bb93c.scope - libcontainer container 20a9c164586bd81347f383bc8f34414b9f5a4ffc7542cd9d72a8a6f2424bb93c. 
Jul 7 06:02:08.335730 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:02:08.416796 containerd[1605]: time="2025-07-07T06:02:08.416701282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8b465d78-fb445,Uid:8a00075b-b572-4b57-8401-3b70beb6e654,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"20a9c164586bd81347f383bc8f34414b9f5a4ffc7542cd9d72a8a6f2424bb93c\"" Jul 7 06:02:09.095484 kubelet[2717]: E0707 06:02:09.095419 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:02:09.752539 containerd[1605]: time="2025-07-07T06:02:09.751796594Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:02:09.774044 containerd[1605]: time="2025-07-07T06:02:09.763074028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 7 06:02:09.799489 containerd[1605]: time="2025-07-07T06:02:09.799383646Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:02:09.865755 containerd[1605]: time="2025-07-07T06:02:09.865672814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:02:09.866670 containerd[1605]: time="2025-07-07T06:02:09.866606826Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 4.789892205s" Jul 7 06:02:09.866809 containerd[1605]: time="2025-07-07T06:02:09.866669444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 7 06:02:09.867772 containerd[1605]: time="2025-07-07T06:02:09.867724635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 06:02:09.869189 containerd[1605]: time="2025-07-07T06:02:09.869154106Z" level=info msg="CreateContainer within sandbox \"9a25be4653871730c387a95d650ffc951297a622e9cdee666322e036e41888d5\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 06:02:10.184111 kubelet[2717]: E0707 06:02:10.184069 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:02:10.354890 containerd[1605]: time="2025-07-07T06:02:10.354801553Z" level=info msg="Container 7eb19c25a7d56168c8f22f141d2adf6a0f64b366137c8f6694ec16c2957820fe: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:02:11.202115 containerd[1605]: time="2025-07-07T06:02:11.202028105Z" level=info msg="CreateContainer within sandbox \"9a25be4653871730c387a95d650ffc951297a622e9cdee666322e036e41888d5\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"7eb19c25a7d56168c8f22f141d2adf6a0f64b366137c8f6694ec16c2957820fe\"" Jul 7 06:02:11.202741 containerd[1605]: time="2025-07-07T06:02:11.202715264Z" level=info msg="StartContainer for \"7eb19c25a7d56168c8f22f141d2adf6a0f64b366137c8f6694ec16c2957820fe\"" Jul 7 06:02:11.204050 containerd[1605]: time="2025-07-07T06:02:11.204022026Z" level=info msg="connecting to shim 7eb19c25a7d56168c8f22f141d2adf6a0f64b366137c8f6694ec16c2957820fe" 
address="unix:///run/containerd/s/689e473565d07175aed119be25e4d6607f8fcc1ea45f1dcd5d38307d4065d212" protocol=ttrpc version=3 Jul 7 06:02:11.229232 systemd[1]: Started cri-containerd-7eb19c25a7d56168c8f22f141d2adf6a0f64b366137c8f6694ec16c2957820fe.scope - libcontainer container 7eb19c25a7d56168c8f22f141d2adf6a0f64b366137c8f6694ec16c2957820fe. Jul 7 06:02:11.309906 containerd[1605]: time="2025-07-07T06:02:11.309850878Z" level=info msg="StartContainer for \"7eb19c25a7d56168c8f22f141d2adf6a0f64b366137c8f6694ec16c2957820fe\" returns successfully" Jul 7 06:02:11.947985 systemd[1]: Started sshd@9-10.0.0.17:22-10.0.0.1:33572.service - OpenSSH per-connection server daemon (10.0.0.1:33572). Jul 7 06:02:11.951244 containerd[1605]: time="2025-07-07T06:02:11.951188012Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:02:11.952073 containerd[1605]: time="2025-07-07T06:02:11.952030783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 7 06:02:11.953440 containerd[1605]: time="2025-07-07T06:02:11.953403409Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:02:11.955819 containerd[1605]: time="2025-07-07T06:02:11.955748439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:02:11.956493 containerd[1605]: time="2025-07-07T06:02:11.956447211Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.088687561s" Jul 7 06:02:11.956493 containerd[1605]: time="2025-07-07T06:02:11.956483949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 7 06:02:11.957656 containerd[1605]: time="2025-07-07T06:02:11.957629409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 06:02:11.959256 containerd[1605]: time="2025-07-07T06:02:11.959212700Z" level=info msg="CreateContainer within sandbox \"df1876eefdbe5cfe454570ab418758d44a642192d99396a02d71b7e1f5e79bf5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 06:02:11.977667 containerd[1605]: time="2025-07-07T06:02:11.977593651Z" level=info msg="Container 322087992b9c50bc02f4e07699e530621bdda87fc7603c3cd8491804d67650b6: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:02:12.001220 containerd[1605]: time="2025-07-07T06:02:12.001148580Z" level=info msg="CreateContainer within sandbox \"df1876eefdbe5cfe454570ab418758d44a642192d99396a02d71b7e1f5e79bf5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"322087992b9c50bc02f4e07699e530621bdda87fc7603c3cd8491804d67650b6\"" Jul 7 06:02:12.001934 containerd[1605]: time="2025-07-07T06:02:12.001834738Z" level=info msg="StartContainer for \"322087992b9c50bc02f4e07699e530621bdda87fc7603c3cd8491804d67650b6\"" Jul 7 06:02:12.004358 containerd[1605]: time="2025-07-07T06:02:12.004326223Z" level=info msg="connecting to shim 322087992b9c50bc02f4e07699e530621bdda87fc7603c3cd8491804d67650b6" address="unix:///run/containerd/s/ac3061e94d6b3fbf8383c5aece3fdb46ac6d80e47683427207adff5eee1344d0" protocol=ttrpc version=3 Jul 7 06:02:12.017477 sshd[4898]: Accepted publickey for core from 10.0.0.1 port 33572 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0 Jul 7 06:02:12.021374 
sshd-session[4898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:02:12.034121 systemd[1]: Started cri-containerd-322087992b9c50bc02f4e07699e530621bdda87fc7603c3cd8491804d67650b6.scope - libcontainer container 322087992b9c50bc02f4e07699e530621bdda87fc7603c3cd8491804d67650b6. Jul 7 06:02:12.037259 systemd-logind[1584]: New session 10 of user core. Jul 7 06:02:12.049208 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 06:02:12.087585 containerd[1605]: time="2025-07-07T06:02:12.087495770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8b465d78-km6sq,Uid:5d4134cc-058b-4bbe-98ee-e7795fa91c73,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:02:12.088744 containerd[1605]: time="2025-07-07T06:02:12.088719827Z" level=info msg="StartContainer for \"322087992b9c50bc02f4e07699e530621bdda87fc7603c3cd8491804d67650b6\" returns successfully" Jul 7 06:02:12.215841 sshd[4918]: Connection closed by 10.0.0.1 port 33572 Jul 7 06:02:12.216181 sshd-session[4898]: pam_unix(sshd:session): session closed for user core Jul 7 06:02:12.224542 systemd-logind[1584]: Session 10 logged out. Waiting for processes to exit. Jul 7 06:02:12.224981 systemd[1]: sshd@9-10.0.0.17:22-10.0.0.1:33572.service: Deactivated successfully. Jul 7 06:02:12.228158 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 06:02:12.230586 systemd-logind[1584]: Removed session 10. 
Jul 7 06:02:12.241757 systemd-networkd[1504]: cali41f54cfb01a: Link UP Jul 7 06:02:12.242626 systemd-networkd[1504]: cali41f54cfb01a: Gained carrier Jul 7 06:02:12.264127 containerd[1605]: 2025-07-07 06:02:12.153 [INFO][4933] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--f8b465d78--km6sq-eth0 calico-apiserver-f8b465d78- calico-apiserver 5d4134cc-058b-4bbe-98ee-e7795fa91c73 885 0 2025-07-07 06:01:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f8b465d78 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-f8b465d78-km6sq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali41f54cfb01a [] [] }} ContainerID="3629ec030045ac1aec8c47a68cb3e1268d6aed5060b6ffd29f71bf25450e9627" Namespace="calico-apiserver" Pod="calico-apiserver-f8b465d78-km6sq" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8b465d78--km6sq-" Jul 7 06:02:12.264127 containerd[1605]: 2025-07-07 06:02:12.153 [INFO][4933] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3629ec030045ac1aec8c47a68cb3e1268d6aed5060b6ffd29f71bf25450e9627" Namespace="calico-apiserver" Pod="calico-apiserver-f8b465d78-km6sq" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8b465d78--km6sq-eth0" Jul 7 06:02:12.264127 containerd[1605]: 2025-07-07 06:02:12.191 [INFO][4957] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3629ec030045ac1aec8c47a68cb3e1268d6aed5060b6ffd29f71bf25450e9627" HandleID="k8s-pod-network.3629ec030045ac1aec8c47a68cb3e1268d6aed5060b6ffd29f71bf25450e9627" Workload="localhost-k8s-calico--apiserver--f8b465d78--km6sq-eth0" Jul 7 06:02:12.264127 containerd[1605]: 2025-07-07 06:02:12.191 [INFO][4957] ipam/ipam_plugin.go 265: Auto assigning 
IP ContainerID="3629ec030045ac1aec8c47a68cb3e1268d6aed5060b6ffd29f71bf25450e9627" HandleID="k8s-pod-network.3629ec030045ac1aec8c47a68cb3e1268d6aed5060b6ffd29f71bf25450e9627" Workload="localhost-k8s-calico--apiserver--f8b465d78--km6sq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d7100), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-f8b465d78-km6sq", "timestamp":"2025-07-07 06:02:12.191264369 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:02:12.264127 containerd[1605]: 2025-07-07 06:02:12.191 [INFO][4957] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:02:12.264127 containerd[1605]: 2025-07-07 06:02:12.191 [INFO][4957] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:02:12.264127 containerd[1605]: 2025-07-07 06:02:12.191 [INFO][4957] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:02:12.264127 containerd[1605]: 2025-07-07 06:02:12.201 [INFO][4957] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3629ec030045ac1aec8c47a68cb3e1268d6aed5060b6ffd29f71bf25450e9627" host="localhost" Jul 7 06:02:12.264127 containerd[1605]: 2025-07-07 06:02:12.209 [INFO][4957] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:02:12.264127 containerd[1605]: 2025-07-07 06:02:12.213 [INFO][4957] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:02:12.264127 containerd[1605]: 2025-07-07 06:02:12.214 [INFO][4957] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:02:12.264127 containerd[1605]: 2025-07-07 06:02:12.219 [INFO][4957] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Jul 7 06:02:12.264127 containerd[1605]: 2025-07-07 06:02:12.219 [INFO][4957] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3629ec030045ac1aec8c47a68cb3e1268d6aed5060b6ffd29f71bf25450e9627" host="localhost" Jul 7 06:02:12.264127 containerd[1605]: 2025-07-07 06:02:12.221 [INFO][4957] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3629ec030045ac1aec8c47a68cb3e1268d6aed5060b6ffd29f71bf25450e9627 Jul 7 06:02:12.264127 containerd[1605]: 2025-07-07 06:02:12.227 [INFO][4957] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3629ec030045ac1aec8c47a68cb3e1268d6aed5060b6ffd29f71bf25450e9627" host="localhost" Jul 7 06:02:12.264127 containerd[1605]: 2025-07-07 06:02:12.234 [INFO][4957] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.3629ec030045ac1aec8c47a68cb3e1268d6aed5060b6ffd29f71bf25450e9627" host="localhost" Jul 7 06:02:12.264127 containerd[1605]: 2025-07-07 06:02:12.234 [INFO][4957] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.3629ec030045ac1aec8c47a68cb3e1268d6aed5060b6ffd29f71bf25450e9627" host="localhost" Jul 7 06:02:12.264127 containerd[1605]: 2025-07-07 06:02:12.234 [INFO][4957] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:02:12.264127 containerd[1605]: 2025-07-07 06:02:12.234 [INFO][4957] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="3629ec030045ac1aec8c47a68cb3e1268d6aed5060b6ffd29f71bf25450e9627" HandleID="k8s-pod-network.3629ec030045ac1aec8c47a68cb3e1268d6aed5060b6ffd29f71bf25450e9627" Workload="localhost-k8s-calico--apiserver--f8b465d78--km6sq-eth0" Jul 7 06:02:12.265002 containerd[1605]: 2025-07-07 06:02:12.238 [INFO][4933] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3629ec030045ac1aec8c47a68cb3e1268d6aed5060b6ffd29f71bf25450e9627" Namespace="calico-apiserver" Pod="calico-apiserver-f8b465d78-km6sq" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8b465d78--km6sq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f8b465d78--km6sq-eth0", GenerateName:"calico-apiserver-f8b465d78-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d4134cc-058b-4bbe-98ee-e7795fa91c73", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 1, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f8b465d78", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-f8b465d78-km6sq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali41f54cfb01a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:02:12.265002 containerd[1605]: 2025-07-07 06:02:12.238 [INFO][4933] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="3629ec030045ac1aec8c47a68cb3e1268d6aed5060b6ffd29f71bf25450e9627" Namespace="calico-apiserver" Pod="calico-apiserver-f8b465d78-km6sq" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8b465d78--km6sq-eth0" Jul 7 06:02:12.265002 containerd[1605]: 2025-07-07 06:02:12.238 [INFO][4933] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali41f54cfb01a ContainerID="3629ec030045ac1aec8c47a68cb3e1268d6aed5060b6ffd29f71bf25450e9627" Namespace="calico-apiserver" Pod="calico-apiserver-f8b465d78-km6sq" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8b465d78--km6sq-eth0" Jul 7 06:02:12.265002 containerd[1605]: 2025-07-07 06:02:12.242 [INFO][4933] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3629ec030045ac1aec8c47a68cb3e1268d6aed5060b6ffd29f71bf25450e9627" Namespace="calico-apiserver" Pod="calico-apiserver-f8b465d78-km6sq" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8b465d78--km6sq-eth0" Jul 7 06:02:12.265002 containerd[1605]: 2025-07-07 06:02:12.242 [INFO][4933] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3629ec030045ac1aec8c47a68cb3e1268d6aed5060b6ffd29f71bf25450e9627" Namespace="calico-apiserver" Pod="calico-apiserver-f8b465d78-km6sq" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8b465d78--km6sq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f8b465d78--km6sq-eth0", GenerateName:"calico-apiserver-f8b465d78-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"5d4134cc-058b-4bbe-98ee-e7795fa91c73", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 1, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f8b465d78", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3629ec030045ac1aec8c47a68cb3e1268d6aed5060b6ffd29f71bf25450e9627", Pod:"calico-apiserver-f8b465d78-km6sq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali41f54cfb01a", MAC:"96:b2:5f:5f:c9:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:02:12.265002 containerd[1605]: 2025-07-07 06:02:12.257 [INFO][4933] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3629ec030045ac1aec8c47a68cb3e1268d6aed5060b6ffd29f71bf25450e9627" Namespace="calico-apiserver" Pod="calico-apiserver-f8b465d78-km6sq" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8b465d78--km6sq-eth0" Jul 7 06:02:12.290771 containerd[1605]: time="2025-07-07T06:02:12.290709416Z" level=info msg="connecting to shim 3629ec030045ac1aec8c47a68cb3e1268d6aed5060b6ffd29f71bf25450e9627" address="unix:///run/containerd/s/e459b4d674d68587062028c5396df267fa2cba9aad7b62e469774577cfad294c" namespace=k8s.io protocol=ttrpc version=3 
Jul 7 06:02:12.321228 systemd[1]: Started cri-containerd-3629ec030045ac1aec8c47a68cb3e1268d6aed5060b6ffd29f71bf25450e9627.scope - libcontainer container 3629ec030045ac1aec8c47a68cb3e1268d6aed5060b6ffd29f71bf25450e9627. Jul 7 06:02:12.336277 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:02:12.372813 containerd[1605]: time="2025-07-07T06:02:12.372767688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8b465d78-km6sq,Uid:5d4134cc-058b-4bbe-98ee-e7795fa91c73,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3629ec030045ac1aec8c47a68cb3e1268d6aed5060b6ffd29f71bf25450e9627\"" Jul 7 06:02:13.532184 systemd-networkd[1504]: cali41f54cfb01a: Gained IPv6LL Jul 7 06:02:14.086767 containerd[1605]: time="2025-07-07T06:02:14.086690620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8485fd89d8-sc8rh,Uid:2d455e57-85ea-4ea4-a603-79ef8a7c208e,Namespace:calico-system,Attempt:0,}" Jul 7 06:02:15.031411 systemd-networkd[1504]: calic625b910f87: Link UP Jul 7 06:02:15.034069 systemd-networkd[1504]: calic625b910f87: Gained carrier Jul 7 06:02:15.053533 containerd[1605]: 2025-07-07 06:02:14.945 [INFO][5027] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--8485fd89d8--sc8rh-eth0 calico-kube-controllers-8485fd89d8- calico-system 2d455e57-85ea-4ea4-a603-79ef8a7c208e 889 0 2025-07-07 06:01:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8485fd89d8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-8485fd89d8-sc8rh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic625b910f87 [] [] }} 
ContainerID="c97b4c68111c5425c93188bffb77a5ed11ab39a80fcc9b133000a88d30e34f78" Namespace="calico-system" Pod="calico-kube-controllers-8485fd89d8-sc8rh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8485fd89d8--sc8rh-" Jul 7 06:02:15.053533 containerd[1605]: 2025-07-07 06:02:14.946 [INFO][5027] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c97b4c68111c5425c93188bffb77a5ed11ab39a80fcc9b133000a88d30e34f78" Namespace="calico-system" Pod="calico-kube-controllers-8485fd89d8-sc8rh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8485fd89d8--sc8rh-eth0" Jul 7 06:02:15.053533 containerd[1605]: 2025-07-07 06:02:14.980 [INFO][5044] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c97b4c68111c5425c93188bffb77a5ed11ab39a80fcc9b133000a88d30e34f78" HandleID="k8s-pod-network.c97b4c68111c5425c93188bffb77a5ed11ab39a80fcc9b133000a88d30e34f78" Workload="localhost-k8s-calico--kube--controllers--8485fd89d8--sc8rh-eth0" Jul 7 06:02:15.053533 containerd[1605]: 2025-07-07 06:02:14.980 [INFO][5044] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c97b4c68111c5425c93188bffb77a5ed11ab39a80fcc9b133000a88d30e34f78" HandleID="k8s-pod-network.c97b4c68111c5425c93188bffb77a5ed11ab39a80fcc9b133000a88d30e34f78" Workload="localhost-k8s-calico--kube--controllers--8485fd89d8--sc8rh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000345630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-8485fd89d8-sc8rh", "timestamp":"2025-07-07 06:02:14.980207515 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:02:15.053533 containerd[1605]: 2025-07-07 06:02:14.980 [INFO][5044] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 7 06:02:15.053533 containerd[1605]: 2025-07-07 06:02:14.980 [INFO][5044] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:02:15.053533 containerd[1605]: 2025-07-07 06:02:14.980 [INFO][5044] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:02:15.053533 containerd[1605]: 2025-07-07 06:02:14.988 [INFO][5044] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c97b4c68111c5425c93188bffb77a5ed11ab39a80fcc9b133000a88d30e34f78" host="localhost" Jul 7 06:02:15.053533 containerd[1605]: 2025-07-07 06:02:14.994 [INFO][5044] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:02:15.053533 containerd[1605]: 2025-07-07 06:02:14.999 [INFO][5044] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:02:15.053533 containerd[1605]: 2025-07-07 06:02:15.004 [INFO][5044] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:02:15.053533 containerd[1605]: 2025-07-07 06:02:15.007 [INFO][5044] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:02:15.053533 containerd[1605]: 2025-07-07 06:02:15.007 [INFO][5044] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c97b4c68111c5425c93188bffb77a5ed11ab39a80fcc9b133000a88d30e34f78" host="localhost" Jul 7 06:02:15.053533 containerd[1605]: 2025-07-07 06:02:15.009 [INFO][5044] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c97b4c68111c5425c93188bffb77a5ed11ab39a80fcc9b133000a88d30e34f78 Jul 7 06:02:15.053533 containerd[1605]: 2025-07-07 06:02:15.014 [INFO][5044] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c97b4c68111c5425c93188bffb77a5ed11ab39a80fcc9b133000a88d30e34f78" host="localhost" Jul 7 06:02:15.053533 containerd[1605]: 2025-07-07 06:02:15.024 [INFO][5044] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.c97b4c68111c5425c93188bffb77a5ed11ab39a80fcc9b133000a88d30e34f78" host="localhost" Jul 7 06:02:15.053533 containerd[1605]: 2025-07-07 06:02:15.024 [INFO][5044] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.c97b4c68111c5425c93188bffb77a5ed11ab39a80fcc9b133000a88d30e34f78" host="localhost" Jul 7 06:02:15.053533 containerd[1605]: 2025-07-07 06:02:15.024 [INFO][5044] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:02:15.053533 containerd[1605]: 2025-07-07 06:02:15.024 [INFO][5044] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="c97b4c68111c5425c93188bffb77a5ed11ab39a80fcc9b133000a88d30e34f78" HandleID="k8s-pod-network.c97b4c68111c5425c93188bffb77a5ed11ab39a80fcc9b133000a88d30e34f78" Workload="localhost-k8s-calico--kube--controllers--8485fd89d8--sc8rh-eth0" Jul 7 06:02:15.054254 containerd[1605]: 2025-07-07 06:02:15.028 [INFO][5027] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c97b4c68111c5425c93188bffb77a5ed11ab39a80fcc9b133000a88d30e34f78" Namespace="calico-system" Pod="calico-kube-controllers-8485fd89d8-sc8rh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8485fd89d8--sc8rh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8485fd89d8--sc8rh-eth0", GenerateName:"calico-kube-controllers-8485fd89d8-", Namespace:"calico-system", SelfLink:"", UID:"2d455e57-85ea-4ea4-a603-79ef8a7c208e", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 1, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8485fd89d8", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-8485fd89d8-sc8rh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic625b910f87", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:02:15.054254 containerd[1605]: 2025-07-07 06:02:15.028 [INFO][5027] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="c97b4c68111c5425c93188bffb77a5ed11ab39a80fcc9b133000a88d30e34f78" Namespace="calico-system" Pod="calico-kube-controllers-8485fd89d8-sc8rh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8485fd89d8--sc8rh-eth0" Jul 7 06:02:15.054254 containerd[1605]: 2025-07-07 06:02:15.028 [INFO][5027] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic625b910f87 ContainerID="c97b4c68111c5425c93188bffb77a5ed11ab39a80fcc9b133000a88d30e34f78" Namespace="calico-system" Pod="calico-kube-controllers-8485fd89d8-sc8rh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8485fd89d8--sc8rh-eth0" Jul 7 06:02:15.054254 containerd[1605]: 2025-07-07 06:02:15.032 [INFO][5027] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c97b4c68111c5425c93188bffb77a5ed11ab39a80fcc9b133000a88d30e34f78" Namespace="calico-system" Pod="calico-kube-controllers-8485fd89d8-sc8rh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8485fd89d8--sc8rh-eth0" Jul 7 
06:02:15.054254 containerd[1605]: 2025-07-07 06:02:15.032 [INFO][5027] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c97b4c68111c5425c93188bffb77a5ed11ab39a80fcc9b133000a88d30e34f78" Namespace="calico-system" Pod="calico-kube-controllers-8485fd89d8-sc8rh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8485fd89d8--sc8rh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8485fd89d8--sc8rh-eth0", GenerateName:"calico-kube-controllers-8485fd89d8-", Namespace:"calico-system", SelfLink:"", UID:"2d455e57-85ea-4ea4-a603-79ef8a7c208e", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 1, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8485fd89d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c97b4c68111c5425c93188bffb77a5ed11ab39a80fcc9b133000a88d30e34f78", Pod:"calico-kube-controllers-8485fd89d8-sc8rh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic625b910f87", MAC:"f6:90:92:11:f0:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 
06:02:15.054254 containerd[1605]: 2025-07-07 06:02:15.045 [INFO][5027] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c97b4c68111c5425c93188bffb77a5ed11ab39a80fcc9b133000a88d30e34f78" Namespace="calico-system" Pod="calico-kube-controllers-8485fd89d8-sc8rh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8485fd89d8--sc8rh-eth0" Jul 7 06:02:15.096350 containerd[1605]: time="2025-07-07T06:02:15.096274059Z" level=info msg="connecting to shim c97b4c68111c5425c93188bffb77a5ed11ab39a80fcc9b133000a88d30e34f78" address="unix:///run/containerd/s/3c0fbc7aa4b48473de2fd8f7cc2dca628719bacf45c8b332f76ccf8ecbb4aeba" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:02:15.128379 systemd[1]: Started cri-containerd-c97b4c68111c5425c93188bffb77a5ed11ab39a80fcc9b133000a88d30e34f78.scope - libcontainer container c97b4c68111c5425c93188bffb77a5ed11ab39a80fcc9b133000a88d30e34f78. Jul 7 06:02:15.148621 systemd-resolved[1414]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:02:15.279701 containerd[1605]: time="2025-07-07T06:02:15.279626575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8485fd89d8-sc8rh,Uid:2d455e57-85ea-4ea4-a603-79ef8a7c208e,Namespace:calico-system,Attempt:0,} returns sandbox id \"c97b4c68111c5425c93188bffb77a5ed11ab39a80fcc9b133000a88d30e34f78\"" Jul 7 06:02:15.351711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2276486149.mount: Deactivated successfully. 
Jul 7 06:02:16.715453 containerd[1605]: time="2025-07-07T06:02:16.715397331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:02:16.796363 systemd-networkd[1504]: calic625b910f87: Gained IPv6LL Jul 7 06:02:16.817240 containerd[1605]: time="2025-07-07T06:02:16.817180615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 7 06:02:16.892804 containerd[1605]: time="2025-07-07T06:02:16.892701079Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:02:16.923793 containerd[1605]: time="2025-07-07T06:02:16.923712490Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:02:16.924758 containerd[1605]: time="2025-07-07T06:02:16.924713742Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.967052454s" Jul 7 06:02:16.924758 containerd[1605]: time="2025-07-07T06:02:16.924748058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 7 06:02:16.926422 containerd[1605]: time="2025-07-07T06:02:16.926102812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 06:02:16.928276 containerd[1605]: time="2025-07-07T06:02:16.928205259Z" level=info msg="CreateContainer within 
sandbox \"d5f3fc5dd6919c2a093d67e9f675f0f7bee3ee36b4a2e4edcaaed640151d96b1\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 06:02:16.940649 containerd[1605]: time="2025-07-07T06:02:16.940599581Z" level=info msg="Container 9ebff2c2e9391c7b3f8ed914cff3c8658f9ddd180240e545aa9ff0407c712ded: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:02:16.950693 containerd[1605]: time="2025-07-07T06:02:16.950650924Z" level=info msg="CreateContainer within sandbox \"d5f3fc5dd6919c2a093d67e9f675f0f7bee3ee36b4a2e4edcaaed640151d96b1\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"9ebff2c2e9391c7b3f8ed914cff3c8658f9ddd180240e545aa9ff0407c712ded\"" Jul 7 06:02:16.951346 containerd[1605]: time="2025-07-07T06:02:16.951299866Z" level=info msg="StartContainer for \"9ebff2c2e9391c7b3f8ed914cff3c8658f9ddd180240e545aa9ff0407c712ded\"" Jul 7 06:02:16.953783 containerd[1605]: time="2025-07-07T06:02:16.953755123Z" level=info msg="connecting to shim 9ebff2c2e9391c7b3f8ed914cff3c8658f9ddd180240e545aa9ff0407c712ded" address="unix:///run/containerd/s/08890097243be00770b0395b4fdb6293d5ea657fd541f9e25b4bb8f08e876b1c" protocol=ttrpc version=3 Jul 7 06:02:16.984216 systemd[1]: Started cri-containerd-9ebff2c2e9391c7b3f8ed914cff3c8658f9ddd180240e545aa9ff0407c712ded.scope - libcontainer container 9ebff2c2e9391c7b3f8ed914cff3c8658f9ddd180240e545aa9ff0407c712ded. Jul 7 06:02:17.165896 containerd[1605]: time="2025-07-07T06:02:17.165833466Z" level=info msg="StartContainer for \"9ebff2c2e9391c7b3f8ed914cff3c8658f9ddd180240e545aa9ff0407c712ded\" returns successfully" Jul 7 06:02:17.242247 systemd[1]: Started sshd@10-10.0.0.17:22-10.0.0.1:33586.service - OpenSSH per-connection server daemon (10.0.0.1:33586). 
Jul 7 06:02:17.312233 sshd[5155]: Accepted publickey for core from 10.0.0.1 port 33586 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0 Jul 7 06:02:17.314189 sshd-session[5155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:02:17.319491 systemd-logind[1584]: New session 11 of user core. Jul 7 06:02:17.328343 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 06:02:17.562904 sshd[5157]: Connection closed by 10.0.0.1 port 33586 Jul 7 06:02:17.563331 sshd-session[5155]: pam_unix(sshd:session): session closed for user core Jul 7 06:02:17.569030 systemd[1]: sshd@10-10.0.0.17:22-10.0.0.1:33586.service: Deactivated successfully. Jul 7 06:02:17.571597 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 06:02:17.572566 systemd-logind[1584]: Session 11 logged out. Waiting for processes to exit. Jul 7 06:02:17.574699 systemd-logind[1584]: Removed session 11. Jul 7 06:02:18.257007 containerd[1605]: time="2025-07-07T06:02:18.256944523Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ebff2c2e9391c7b3f8ed914cff3c8658f9ddd180240e545aa9ff0407c712ded\" id:\"1380624355646f15aaefc95ce866ff46cf16f65f10a05cff108ab1003ec79b57\" pid:5183 exit_status:1 exited_at:{seconds:1751868138 nanos:256458377}" Jul 7 06:02:18.271177 kubelet[2717]: I0707 06:02:18.271096 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-fh2n2" podStartSLOduration=38.233055424 podStartE2EDuration="47.271069614s" podCreationTimestamp="2025-07-07 06:01:31 +0000 UTC" firstStartedPulling="2025-07-07 06:02:07.887789936 +0000 UTC m=+55.969037936" lastFinishedPulling="2025-07-07 06:02:16.925804126 +0000 UTC m=+65.007052126" observedRunningTime="2025-07-07 06:02:18.27001862 +0000 UTC m=+66.351266640" watchObservedRunningTime="2025-07-07 06:02:18.271069614 +0000 UTC m=+66.352317614" Jul 7 06:02:19.258630 containerd[1605]: time="2025-07-07T06:02:19.258555317Z" 
level=info msg="TaskExit event in podsandbox handler container_id:\"9ebff2c2e9391c7b3f8ed914cff3c8658f9ddd180240e545aa9ff0407c712ded\" id:\"a3770c8c54cb12d0124d52ebe0530c187c8b9082a2ee30a190cad82d4ca581c2\" pid:5212 exit_status:1 exited_at:{seconds:1751868139 nanos:258205975}" Jul 7 06:02:21.136606 containerd[1605]: time="2025-07-07T06:02:21.136548859Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ebff2c2e9391c7b3f8ed914cff3c8658f9ddd180240e545aa9ff0407c712ded\" id:\"f24d1288db3326b907f87c1de37e867d773297cf754ec41c32433d92a74be92f\" pid:5266 exit_status:1 exited_at:{seconds:1751868141 nanos:136202223}" Jul 7 06:02:21.162117 containerd[1605]: time="2025-07-07T06:02:21.162062324Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1317027ccacdd278382f22095b0fd136395ad8272a7d5b826ed3d1b7c13cb4c9\" id:\"9020457452b620343748c5f1d6e3fee62e8db754baae9c79eafe4b4cfbbffc71\" pid:5240 exited_at:{seconds:1751868141 nanos:161539298}" Jul 7 06:02:21.451269 containerd[1605]: time="2025-07-07T06:02:21.451069609Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:02:21.522647 containerd[1605]: time="2025-07-07T06:02:21.522578501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 7 06:02:21.577659 containerd[1605]: time="2025-07-07T06:02:21.577586411Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:02:21.592864 containerd[1605]: time="2025-07-07T06:02:21.592741694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:02:21.593593 containerd[1605]: 
time="2025-07-07T06:02:21.593488951Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 4.667351411s" Jul 7 06:02:21.593593 containerd[1605]: time="2025-07-07T06:02:21.593529148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 06:02:21.595001 containerd[1605]: time="2025-07-07T06:02:21.594725357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 06:02:21.596347 containerd[1605]: time="2025-07-07T06:02:21.596300525Z" level=info msg="CreateContainer within sandbox \"20a9c164586bd81347f383bc8f34414b9f5a4ffc7542cd9d72a8a6f2424bb93c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 06:02:21.697699 containerd[1605]: time="2025-07-07T06:02:21.697624289Z" level=info msg="Container 3e93fd2a20a2e0a58d02bbfeb22d223c6d727efc44f7bd2ed6b2f0d3819700de: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:02:21.790292 containerd[1605]: time="2025-07-07T06:02:21.790243180Z" level=info msg="CreateContainer within sandbox \"20a9c164586bd81347f383bc8f34414b9f5a4ffc7542cd9d72a8a6f2424bb93c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3e93fd2a20a2e0a58d02bbfeb22d223c6d727efc44f7bd2ed6b2f0d3819700de\"" Jul 7 06:02:21.790961 containerd[1605]: time="2025-07-07T06:02:21.790866688Z" level=info msg="StartContainer for \"3e93fd2a20a2e0a58d02bbfeb22d223c6d727efc44f7bd2ed6b2f0d3819700de\"" Jul 7 06:02:21.792323 containerd[1605]: time="2025-07-07T06:02:21.792295244Z" level=info msg="connecting to shim 
3e93fd2a20a2e0a58d02bbfeb22d223c6d727efc44f7bd2ed6b2f0d3819700de" address="unix:///run/containerd/s/2e1db6319283b2a054a736ac81ab2e7795d507f3b8c357e542f1d62e3f4c14eb" protocol=ttrpc version=3 Jul 7 06:02:21.817586 systemd[1]: Started cri-containerd-3e93fd2a20a2e0a58d02bbfeb22d223c6d727efc44f7bd2ed6b2f0d3819700de.scope - libcontainer container 3e93fd2a20a2e0a58d02bbfeb22d223c6d727efc44f7bd2ed6b2f0d3819700de. Jul 7 06:02:21.883346 containerd[1605]: time="2025-07-07T06:02:21.883300362Z" level=info msg="StartContainer for \"3e93fd2a20a2e0a58d02bbfeb22d223c6d727efc44f7bd2ed6b2f0d3819700de\" returns successfully" Jul 7 06:02:22.344088 kubelet[2717]: I0707 06:02:22.344011 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f8b465d78-fb445" podStartSLOduration=41.167928487 podStartE2EDuration="54.343974584s" podCreationTimestamp="2025-07-07 06:01:28 +0000 UTC" firstStartedPulling="2025-07-07 06:02:08.418343304 +0000 UTC m=+56.499591314" lastFinishedPulling="2025-07-07 06:02:21.594389391 +0000 UTC m=+69.675637411" observedRunningTime="2025-07-07 06:02:22.342878087 +0000 UTC m=+70.424126077" watchObservedRunningTime="2025-07-07 06:02:22.343974584 +0000 UTC m=+70.425222604" Jul 7 06:02:22.577742 systemd[1]: Started sshd@11-10.0.0.17:22-10.0.0.1:59716.service - OpenSSH per-connection server daemon (10.0.0.1:59716). Jul 7 06:02:22.660189 sshd[5322]: Accepted publickey for core from 10.0.0.1 port 59716 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0 Jul 7 06:02:22.662573 sshd-session[5322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:02:22.666750 systemd-logind[1584]: New session 12 of user core. Jul 7 06:02:22.678077 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jul 7 06:02:22.810526 sshd[5324]: Connection closed by 10.0.0.1 port 59716 Jul 7 06:02:22.810902 sshd-session[5322]: pam_unix(sshd:session): session closed for user core Jul 7 06:02:22.831593 systemd[1]: sshd@11-10.0.0.17:22-10.0.0.1:59716.service: Deactivated successfully. Jul 7 06:02:22.834475 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 06:02:22.836050 systemd-logind[1584]: Session 12 logged out. Waiting for processes to exit. Jul 7 06:02:22.840527 systemd[1]: Started sshd@12-10.0.0.17:22-10.0.0.1:59724.service - OpenSSH per-connection server daemon (10.0.0.1:59724). Jul 7 06:02:22.842169 systemd-logind[1584]: Removed session 12. Jul 7 06:02:22.892732 sshd[5339]: Accepted publickey for core from 10.0.0.1 port 59724 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0 Jul 7 06:02:22.894438 sshd-session[5339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:02:22.899514 systemd-logind[1584]: New session 13 of user core. Jul 7 06:02:22.909121 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 06:02:23.137520 sshd[5341]: Connection closed by 10.0.0.1 port 59724 Jul 7 06:02:23.138194 sshd-session[5339]: pam_unix(sshd:session): session closed for user core Jul 7 06:02:23.155384 systemd[1]: sshd@12-10.0.0.17:22-10.0.0.1:59724.service: Deactivated successfully. Jul 7 06:02:23.159799 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 06:02:23.161262 systemd-logind[1584]: Session 13 logged out. Waiting for processes to exit. Jul 7 06:02:23.170327 systemd[1]: Started sshd@13-10.0.0.17:22-10.0.0.1:59726.service - OpenSSH per-connection server daemon (10.0.0.1:59726). Jul 7 06:02:23.172959 systemd-logind[1584]: Removed session 13. 
Jul 7 06:02:23.220731 sshd[5352]: Accepted publickey for core from 10.0.0.1 port 59726 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0 Jul 7 06:02:23.222912 sshd-session[5352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:02:23.228233 systemd-logind[1584]: New session 14 of user core. Jul 7 06:02:23.236227 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 06:02:23.362579 sshd[5354]: Connection closed by 10.0.0.1 port 59726 Jul 7 06:02:23.363214 sshd-session[5352]: pam_unix(sshd:session): session closed for user core Jul 7 06:02:23.367644 systemd[1]: sshd@13-10.0.0.17:22-10.0.0.1:59726.service: Deactivated successfully. Jul 7 06:02:23.371338 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 06:02:23.372557 systemd-logind[1584]: Session 14 logged out. Waiting for processes to exit. Jul 7 06:02:23.375904 systemd-logind[1584]: Removed session 14. Jul 7 06:02:25.105731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount12708380.mount: Deactivated successfully. 
Jul 7 06:02:25.132570 containerd[1605]: time="2025-07-07T06:02:25.132483488Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:02:25.133233 containerd[1605]: time="2025-07-07T06:02:25.133199831Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 7 06:02:25.134715 containerd[1605]: time="2025-07-07T06:02:25.134676462Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:02:25.137182 containerd[1605]: time="2025-07-07T06:02:25.137108005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:02:25.137686 containerd[1605]: time="2025-07-07T06:02:25.137615117Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 3.542838131s" Jul 7 06:02:25.137686 containerd[1605]: time="2025-07-07T06:02:25.137664201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 7 06:02:25.139348 containerd[1605]: time="2025-07-07T06:02:25.139058685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 06:02:25.140480 containerd[1605]: time="2025-07-07T06:02:25.140426288Z" level=info msg="CreateContainer within sandbox 
\"9a25be4653871730c387a95d650ffc951297a622e9cdee666322e036e41888d5\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 06:02:25.160163 containerd[1605]: time="2025-07-07T06:02:25.160116381Z" level=info msg="Container 7fcae9cefcf8732ca41da94b97109c9b6dfb932d7295aeca42f80fb0e4562202: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:02:25.171248 containerd[1605]: time="2025-07-07T06:02:25.171171459Z" level=info msg="CreateContainer within sandbox \"9a25be4653871730c387a95d650ffc951297a622e9cdee666322e036e41888d5\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"7fcae9cefcf8732ca41da94b97109c9b6dfb932d7295aeca42f80fb0e4562202\"" Jul 7 06:02:25.171900 containerd[1605]: time="2025-07-07T06:02:25.171866010Z" level=info msg="StartContainer for \"7fcae9cefcf8732ca41da94b97109c9b6dfb932d7295aeca42f80fb0e4562202\"" Jul 7 06:02:25.173470 containerd[1605]: time="2025-07-07T06:02:25.173336961Z" level=info msg="connecting to shim 7fcae9cefcf8732ca41da94b97109c9b6dfb932d7295aeca42f80fb0e4562202" address="unix:///run/containerd/s/689e473565d07175aed119be25e4d6607f8fcc1ea45f1dcd5d38307d4065d212" protocol=ttrpc version=3 Jul 7 06:02:25.202089 systemd[1]: Started cri-containerd-7fcae9cefcf8732ca41da94b97109c9b6dfb932d7295aeca42f80fb0e4562202.scope - libcontainer container 7fcae9cefcf8732ca41da94b97109c9b6dfb932d7295aeca42f80fb0e4562202. 
Jul 7 06:02:25.345315 containerd[1605]: time="2025-07-07T06:02:25.345261360Z" level=info msg="StartContainer for \"7fcae9cefcf8732ca41da94b97109c9b6dfb932d7295aeca42f80fb0e4562202\" returns successfully" Jul 7 06:02:27.085690 kubelet[2717]: E0707 06:02:27.085632 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:02:27.388819 containerd[1605]: time="2025-07-07T06:02:27.388671265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:02:27.389705 containerd[1605]: time="2025-07-07T06:02:27.389678584Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 7 06:02:27.391069 containerd[1605]: time="2025-07-07T06:02:27.390997069Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:02:27.394193 containerd[1605]: time="2025-07-07T06:02:27.394148023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:02:27.394884 containerd[1605]: time="2025-07-07T06:02:27.394817305Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.255718895s" Jul 7 06:02:27.394884 containerd[1605]: 
time="2025-07-07T06:02:27.394870026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 7 06:02:27.395996 containerd[1605]: time="2025-07-07T06:02:27.395965435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 06:02:27.397321 containerd[1605]: time="2025-07-07T06:02:27.397218835Z" level=info msg="CreateContainer within sandbox \"df1876eefdbe5cfe454570ab418758d44a642192d99396a02d71b7e1f5e79bf5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 06:02:27.410305 containerd[1605]: time="2025-07-07T06:02:27.409560812Z" level=info msg="Container a6372d52582280364ae5197aa0183b01649d569b09e91a73b5883f3aaa17bb29: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:02:27.505161 containerd[1605]: time="2025-07-07T06:02:27.505087955Z" level=info msg="CreateContainer within sandbox \"df1876eefdbe5cfe454570ab418758d44a642192d99396a02d71b7e1f5e79bf5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a6372d52582280364ae5197aa0183b01649d569b09e91a73b5883f3aaa17bb29\"" Jul 7 06:02:27.506046 containerd[1605]: time="2025-07-07T06:02:27.505974432Z" level=info msg="StartContainer for \"a6372d52582280364ae5197aa0183b01649d569b09e91a73b5883f3aaa17bb29\"" Jul 7 06:02:27.508134 containerd[1605]: time="2025-07-07T06:02:27.508103310Z" level=info msg="connecting to shim a6372d52582280364ae5197aa0183b01649d569b09e91a73b5883f3aaa17bb29" address="unix:///run/containerd/s/ac3061e94d6b3fbf8383c5aece3fdb46ac6d80e47683427207adff5eee1344d0" protocol=ttrpc version=3 Jul 7 06:02:27.539269 systemd[1]: Started cri-containerd-a6372d52582280364ae5197aa0183b01649d569b09e91a73b5883f3aaa17bb29.scope - libcontainer container a6372d52582280364ae5197aa0183b01649d569b09e91a73b5883f3aaa17bb29. 
Jul 7 06:02:27.648341 containerd[1605]: time="2025-07-07T06:02:27.648214880Z" level=info msg="StartContainer for \"a6372d52582280364ae5197aa0183b01649d569b09e91a73b5883f3aaa17bb29\" returns successfully"
Jul 7 06:02:28.172637 kubelet[2717]: I0707 06:02:28.172588 2717 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 7 06:02:28.172637 kubelet[2717]: I0707 06:02:28.172632 2717 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 7 06:02:28.301459 containerd[1605]: time="2025-07-07T06:02:28.301364438Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:02:28.333253 containerd[1605]: time="2025-07-07T06:02:28.333117157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77"
Jul 7 06:02:28.335482 containerd[1605]: time="2025-07-07T06:02:28.335441917Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 939.442557ms"
Jul 7 06:02:28.335482 containerd[1605]: time="2025-07-07T06:02:28.335473698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\""
Jul 7 06:02:28.337138 containerd[1605]: time="2025-07-07T06:02:28.337108246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\""
Jul 7 06:02:28.339408 kubelet[2717]: I0707 06:02:28.338477 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-68db888878-2s6q7" podStartSLOduration=5.276265778 podStartE2EDuration="25.338455926s" podCreationTimestamp="2025-07-07 06:02:03 +0000 UTC" firstStartedPulling="2025-07-07 06:02:05.076458762 +0000 UTC m=+53.157706762" lastFinishedPulling="2025-07-07 06:02:25.13864891 +0000 UTC m=+73.219896910" observedRunningTime="2025-07-07 06:02:26.323094293 +0000 UTC m=+74.404342293" watchObservedRunningTime="2025-07-07 06:02:28.338455926 +0000 UTC m=+76.419703926"
Jul 7 06:02:28.339408 kubelet[2717]: I0707 06:02:28.339266 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-q2hxc" podStartSLOduration=35.019994875 podStartE2EDuration="57.339257441s" podCreationTimestamp="2025-07-07 06:01:31 +0000 UTC" firstStartedPulling="2025-07-07 06:02:05.076492986 +0000 UTC m=+53.157740986" lastFinishedPulling="2025-07-07 06:02:27.395755552 +0000 UTC m=+75.477003552" observedRunningTime="2025-07-07 06:02:28.339234827 +0000 UTC m=+76.420482817" watchObservedRunningTime="2025-07-07 06:02:28.339257441 +0000 UTC m=+76.420505441"
Jul 7 06:02:28.340904 containerd[1605]: time="2025-07-07T06:02:28.340827206Z" level=info msg="CreateContainer within sandbox \"3629ec030045ac1aec8c47a68cb3e1268d6aed5060b6ffd29f71bf25450e9627\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 7 06:02:28.378890 systemd[1]: Started sshd@14-10.0.0.17:22-10.0.0.1:59740.service - OpenSSH per-connection server daemon (10.0.0.1:59740).
Jul 7 06:02:28.512676 sshd[5458]: Accepted publickey for core from 10.0.0.1 port 59740 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:02:28.514546 sshd-session[5458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:02:28.519600 systemd-logind[1584]: New session 15 of user core.
Jul 7 06:02:28.534112 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 7 06:02:28.538260 containerd[1605]: time="2025-07-07T06:02:28.538213659Z" level=info msg="Container 95e3e4969fd4e0a6b038636458844da21b78dc99f67f14d8de25afdd927319aa: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:02:29.467538 sshd[5460]: Connection closed by 10.0.0.1 port 59740
Jul 7 06:02:29.467953 sshd-session[5458]: pam_unix(sshd:session): session closed for user core
Jul 7 06:02:29.472516 systemd[1]: sshd@14-10.0.0.17:22-10.0.0.1:59740.service: Deactivated successfully.
Jul 7 06:02:29.475510 systemd[1]: session-15.scope: Deactivated successfully.
Jul 7 06:02:29.477401 containerd[1605]: time="2025-07-07T06:02:29.477341026Z" level=info msg="CreateContainer within sandbox \"3629ec030045ac1aec8c47a68cb3e1268d6aed5060b6ffd29f71bf25450e9627\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"95e3e4969fd4e0a6b038636458844da21b78dc99f67f14d8de25afdd927319aa\""
Jul 7 06:02:29.481631 containerd[1605]: time="2025-07-07T06:02:29.479158904Z" level=info msg="StartContainer for \"95e3e4969fd4e0a6b038636458844da21b78dc99f67f14d8de25afdd927319aa\""
Jul 7 06:02:29.478304 systemd-logind[1584]: Session 15 logged out. Waiting for processes to exit.
Jul 7 06:02:29.481078 systemd-logind[1584]: Removed session 15.
Jul 7 06:02:29.482333 containerd[1605]: time="2025-07-07T06:02:29.482297689Z" level=info msg="connecting to shim 95e3e4969fd4e0a6b038636458844da21b78dc99f67f14d8de25afdd927319aa" address="unix:///run/containerd/s/e459b4d674d68587062028c5396df267fa2cba9aad7b62e469774577cfad294c" protocol=ttrpc version=3
Jul 7 06:02:29.511166 systemd[1]: Started cri-containerd-95e3e4969fd4e0a6b038636458844da21b78dc99f67f14d8de25afdd927319aa.scope - libcontainer container 95e3e4969fd4e0a6b038636458844da21b78dc99f67f14d8de25afdd927319aa.
Jul 7 06:02:29.716601 containerd[1605]: time="2025-07-07T06:02:29.716553035Z" level=info msg="StartContainer for \"95e3e4969fd4e0a6b038636458844da21b78dc99f67f14d8de25afdd927319aa\" returns successfully"
Jul 7 06:02:30.254064 kubelet[2717]: I0707 06:02:30.253761 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f8b465d78-km6sq" podStartSLOduration=46.290361883 podStartE2EDuration="1m2.252470674s" podCreationTimestamp="2025-07-07 06:01:28 +0000 UTC" firstStartedPulling="2025-07-07 06:02:12.374209543 +0000 UTC m=+60.455457543" lastFinishedPulling="2025-07-07 06:02:28.336318334 +0000 UTC m=+76.417566334" observedRunningTime="2025-07-07 06:02:30.252388767 +0000 UTC m=+78.333636768" watchObservedRunningTime="2025-07-07 06:02:30.252470674 +0000 UTC m=+78.333718674"
Jul 7 06:02:31.213137 kubelet[2717]: I0707 06:02:31.213053 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 06:02:32.994783 containerd[1605]: time="2025-07-07T06:02:32.994708359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:02:32.996018 containerd[1605]: time="2025-07-07T06:02:32.995979989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688"
Jul 7 06:02:32.997643 containerd[1605]: time="2025-07-07T06:02:32.997601105Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:02:33.000485 containerd[1605]: time="2025-07-07T06:02:33.000439528Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:02:33.000943 containerd[1605]: time="2025-07-07T06:02:33.000879408Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 4.663732376s"
Jul 7 06:02:33.000943 containerd[1605]: time="2025-07-07T06:02:33.000945464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\""
Jul 7 06:02:33.018905 containerd[1605]: time="2025-07-07T06:02:33.018800646Z" level=info msg="CreateContainer within sandbox \"c97b4c68111c5425c93188bffb77a5ed11ab39a80fcc9b133000a88d30e34f78\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jul 7 06:02:33.029953 containerd[1605]: time="2025-07-07T06:02:33.027812652Z" level=info msg="Container a9da0435acbb7f4a106fcb26b54e6b55d708dce110cac5c3e4f305017e6ed7c4: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:02:33.041569 containerd[1605]: time="2025-07-07T06:02:33.041494575Z" level=info msg="CreateContainer within sandbox \"c97b4c68111c5425c93188bffb77a5ed11ab39a80fcc9b133000a88d30e34f78\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a9da0435acbb7f4a106fcb26b54e6b55d708dce110cac5c3e4f305017e6ed7c4\""
Jul 7 06:02:33.042460 containerd[1605]: time="2025-07-07T06:02:33.042398261Z" level=info msg="StartContainer for \"a9da0435acbb7f4a106fcb26b54e6b55d708dce110cac5c3e4f305017e6ed7c4\""
Jul 7 06:02:33.044944 containerd[1605]: time="2025-07-07T06:02:33.043985952Z" level=info msg="connecting to shim a9da0435acbb7f4a106fcb26b54e6b55d708dce110cac5c3e4f305017e6ed7c4" address="unix:///run/containerd/s/3c0fbc7aa4b48473de2fd8f7cc2dca628719bacf45c8b332f76ccf8ecbb4aeba" protocol=ttrpc version=3
Jul 7 06:02:33.078332 systemd[1]: Started cri-containerd-a9da0435acbb7f4a106fcb26b54e6b55d708dce110cac5c3e4f305017e6ed7c4.scope - libcontainer container a9da0435acbb7f4a106fcb26b54e6b55d708dce110cac5c3e4f305017e6ed7c4.
Jul 7 06:02:33.238273 containerd[1605]: time="2025-07-07T06:02:33.238210249Z" level=info msg="StartContainer for \"a9da0435acbb7f4a106fcb26b54e6b55d708dce110cac5c3e4f305017e6ed7c4\" returns successfully"
Jul 7 06:02:34.301894 containerd[1605]: time="2025-07-07T06:02:34.301839927Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9da0435acbb7f4a106fcb26b54e6b55d708dce110cac5c3e4f305017e6ed7c4\" id:\"6f9b8facbf27ed37c791d51e17ba14f01d1035bedc811e13f0d11f6909ee2d0b\" pid:5579 exited_at:{seconds:1751868154 nanos:301129321}"
Jul 7 06:02:34.317547 kubelet[2717]: I0707 06:02:34.317401 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8485fd89d8-sc8rh" podStartSLOduration=45.596170509 podStartE2EDuration="1m3.317374484s" podCreationTimestamp="2025-07-07 06:01:31 +0000 UTC" firstStartedPulling="2025-07-07 06:02:15.280901204 +0000 UTC m=+63.362149204" lastFinishedPulling="2025-07-07 06:02:33.002105179 +0000 UTC m=+81.083353179" observedRunningTime="2025-07-07 06:02:34.261834208 +0000 UTC m=+82.343082208" watchObservedRunningTime="2025-07-07 06:02:34.317374484 +0000 UTC m=+82.398622494"
Jul 7 06:02:34.488174 systemd[1]: Started sshd@15-10.0.0.17:22-10.0.0.1:40446.service - OpenSSH per-connection server daemon (10.0.0.1:40446).
Jul 7 06:02:34.565459 sshd[5590]: Accepted publickey for core from 10.0.0.1 port 40446 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:02:34.567672 sshd-session[5590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:02:34.575521 systemd-logind[1584]: New session 16 of user core.
Jul 7 06:02:34.579092 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 7 06:02:35.045458 sshd[5593]: Connection closed by 10.0.0.1 port 40446
Jul 7 06:02:35.045841 sshd-session[5590]: pam_unix(sshd:session): session closed for user core
Jul 7 06:02:35.054039 systemd[1]: sshd@15-10.0.0.17:22-10.0.0.1:40446.service: Deactivated successfully.
Jul 7 06:02:35.055112 systemd-logind[1584]: Session 16 logged out. Waiting for processes to exit.
Jul 7 06:02:35.056581 systemd[1]: session-16.scope: Deactivated successfully.
Jul 7 06:02:35.058430 systemd-logind[1584]: Removed session 16.
Jul 7 06:02:36.086287 kubelet[2717]: E0707 06:02:36.086190 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:02:39.896679 kubelet[2717]: I0707 06:02:39.896617 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 06:02:40.063430 systemd[1]: Started sshd@16-10.0.0.17:22-10.0.0.1:38750.service - OpenSSH per-connection server daemon (10.0.0.1:38750).
Jul 7 06:02:40.116815 sshd[5611]: Accepted publickey for core from 10.0.0.1 port 38750 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:02:40.119028 sshd-session[5611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:02:40.124235 systemd-logind[1584]: New session 17 of user core.
Jul 7 06:02:40.135225 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 7 06:02:40.264303 sshd[5613]: Connection closed by 10.0.0.1 port 38750
Jul 7 06:02:40.264601 sshd-session[5611]: pam_unix(sshd:session): session closed for user core
Jul 7 06:02:40.270265 systemd[1]: sshd@16-10.0.0.17:22-10.0.0.1:38750.service: Deactivated successfully.
Jul 7 06:02:40.272810 systemd[1]: session-17.scope: Deactivated successfully.
Jul 7 06:02:40.273882 systemd-logind[1584]: Session 17 logged out. Waiting for processes to exit.
Jul 7 06:02:40.275565 systemd-logind[1584]: Removed session 17.
Jul 7 06:02:45.284797 systemd[1]: Started sshd@17-10.0.0.17:22-10.0.0.1:38762.service - OpenSSH per-connection server daemon (10.0.0.1:38762).
Jul 7 06:02:45.342060 sshd[5626]: Accepted publickey for core from 10.0.0.1 port 38762 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:02:45.344250 sshd-session[5626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:02:45.349444 systemd-logind[1584]: New session 18 of user core.
Jul 7 06:02:45.358168 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 7 06:02:45.488883 sshd[5628]: Connection closed by 10.0.0.1 port 38762
Jul 7 06:02:45.489238 sshd-session[5626]: pam_unix(sshd:session): session closed for user core
Jul 7 06:02:45.503421 systemd[1]: sshd@17-10.0.0.17:22-10.0.0.1:38762.service: Deactivated successfully.
Jul 7 06:02:45.505656 systemd[1]: session-18.scope: Deactivated successfully.
Jul 7 06:02:45.506737 systemd-logind[1584]: Session 18 logged out. Waiting for processes to exit.
Jul 7 06:02:45.510988 systemd[1]: Started sshd@18-10.0.0.17:22-10.0.0.1:38772.service - OpenSSH per-connection server daemon (10.0.0.1:38772).
Jul 7 06:02:45.511770 systemd-logind[1584]: Removed session 18.
Jul 7 06:02:45.558869 sshd[5642]: Accepted publickey for core from 10.0.0.1 port 38772 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:02:45.560682 sshd-session[5642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:02:45.566963 systemd-logind[1584]: New session 19 of user core.
Jul 7 06:02:45.578306 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 7 06:02:46.042322 sshd[5644]: Connection closed by 10.0.0.1 port 38772
Jul 7 06:02:46.042753 sshd-session[5642]: pam_unix(sshd:session): session closed for user core
Jul 7 06:02:46.057643 systemd[1]: sshd@18-10.0.0.17:22-10.0.0.1:38772.service: Deactivated successfully.
Jul 7 06:02:46.061001 systemd[1]: session-19.scope: Deactivated successfully.
Jul 7 06:02:46.062067 systemd-logind[1584]: Session 19 logged out. Waiting for processes to exit.
Jul 7 06:02:46.067790 systemd[1]: Started sshd@19-10.0.0.17:22-10.0.0.1:38786.service - OpenSSH per-connection server daemon (10.0.0.1:38786).
Jul 7 06:02:46.069160 systemd-logind[1584]: Removed session 19.
Jul 7 06:02:46.129007 sshd[5655]: Accepted publickey for core from 10.0.0.1 port 38786 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:02:46.131504 sshd-session[5655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:02:46.137757 systemd-logind[1584]: New session 20 of user core.
Jul 7 06:02:46.146391 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 7 06:02:48.378116 sshd[5657]: Connection closed by 10.0.0.1 port 38786
Jul 7 06:02:48.380818 sshd-session[5655]: pam_unix(sshd:session): session closed for user core
Jul 7 06:02:48.392442 systemd[1]: sshd@19-10.0.0.17:22-10.0.0.1:38786.service: Deactivated successfully.
Jul 7 06:02:48.398726 systemd[1]: session-20.scope: Deactivated successfully.
Jul 7 06:02:48.401031 systemd[1]: session-20.scope: Consumed 644ms CPU time, 73.6M memory peak.
Jul 7 06:02:48.403757 systemd-logind[1584]: Session 20 logged out. Waiting for processes to exit.
Jul 7 06:02:48.413665 systemd[1]: Started sshd@20-10.0.0.17:22-10.0.0.1:38792.service - OpenSSH per-connection server daemon (10.0.0.1:38792).
Jul 7 06:02:48.415992 systemd-logind[1584]: Removed session 20.
Jul 7 06:02:48.497292 sshd[5686]: Accepted publickey for core from 10.0.0.1 port 38792 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:02:48.499911 sshd-session[5686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:02:48.506262 systemd-logind[1584]: New session 21 of user core.
Jul 7 06:02:48.521308 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 7 06:02:49.088763 kubelet[2717]: E0707 06:02:49.088464 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:02:49.322637 sshd[5688]: Connection closed by 10.0.0.1 port 38792
Jul 7 06:02:49.322984 sshd-session[5686]: pam_unix(sshd:session): session closed for user core
Jul 7 06:02:49.333358 systemd[1]: sshd@20-10.0.0.17:22-10.0.0.1:38792.service: Deactivated successfully.
Jul 7 06:02:49.335453 systemd[1]: session-21.scope: Deactivated successfully.
Jul 7 06:02:49.336254 systemd-logind[1584]: Session 21 logged out. Waiting for processes to exit.
Jul 7 06:02:49.340106 systemd[1]: Started sshd@21-10.0.0.17:22-10.0.0.1:38808.service - OpenSSH per-connection server daemon (10.0.0.1:38808).
Jul 7 06:02:49.340948 systemd-logind[1584]: Removed session 21.
Jul 7 06:02:49.390278 sshd[5702]: Accepted publickey for core from 10.0.0.1 port 38808 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:02:49.392714 sshd-session[5702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:02:49.399414 systemd-logind[1584]: New session 22 of user core.
Jul 7 06:02:49.406144 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 7 06:02:49.552795 sshd[5704]: Connection closed by 10.0.0.1 port 38808
Jul 7 06:02:49.553136 sshd-session[5702]: pam_unix(sshd:session): session closed for user core
Jul 7 06:02:49.556707 systemd[1]: sshd@21-10.0.0.17:22-10.0.0.1:38808.service: Deactivated successfully.
Jul 7 06:02:49.559583 systemd[1]: session-22.scope: Deactivated successfully.
Jul 7 06:02:49.562755 systemd-logind[1584]: Session 22 logged out. Waiting for processes to exit.
Jul 7 06:02:49.565043 systemd-logind[1584]: Removed session 22.
Jul 7 06:02:49.829216 containerd[1605]: time="2025-07-07T06:02:49.829150840Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1317027ccacdd278382f22095b0fd136395ad8272a7d5b826ed3d1b7c13cb4c9\" id:\"942d99d320d630517ec10736beda8550935deefcc4cd82e3d4ee91bf25c83368\" pid:5731 exited_at:{seconds:1751868169 nanos:828711757}"
Jul 7 06:02:50.271904 containerd[1605]: time="2025-07-07T06:02:50.271746776Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9da0435acbb7f4a106fcb26b54e6b55d708dce110cac5c3e4f305017e6ed7c4\" id:\"452dc5c770ee4916097d8a38b0b6c29838eb7754aa01bb2ddbc8bb87f454891a\" pid:5775 exited_at:{seconds:1751868170 nanos:241337669}"
Jul 7 06:02:50.319671 containerd[1605]: time="2025-07-07T06:02:50.319623098Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ebff2c2e9391c7b3f8ed914cff3c8658f9ddd180240e545aa9ff0407c712ded\" id:\"2c20e29d33d8ed49ee33d328a8fbf967d915ab03518f66fbba1ba47105db6ef6\" pid:5771 exited_at:{seconds:1751868170 nanos:319277453}"
Jul 7 06:02:51.085489 kubelet[2717]: E0707 06:02:51.085420 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:02:54.574011 systemd[1]: Started sshd@22-10.0.0.17:22-10.0.0.1:59510.service - OpenSSH per-connection server daemon (10.0.0.1:59510).
Jul 7 06:02:54.670420 sshd[5799]: Accepted publickey for core from 10.0.0.1 port 59510 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:02:54.673854 sshd-session[5799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:02:54.681234 systemd-logind[1584]: New session 23 of user core.
Jul 7 06:02:54.687234 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 7 06:02:54.920283 sshd[5801]: Connection closed by 10.0.0.1 port 59510
Jul 7 06:02:54.922646 sshd-session[5799]: pam_unix(sshd:session): session closed for user core
Jul 7 06:02:54.928996 systemd-logind[1584]: Session 23 logged out. Waiting for processes to exit.
Jul 7 06:02:54.930855 systemd[1]: sshd@22-10.0.0.17:22-10.0.0.1:59510.service: Deactivated successfully.
Jul 7 06:02:54.935309 systemd[1]: session-23.scope: Deactivated successfully.
Jul 7 06:02:54.939347 systemd-logind[1584]: Removed session 23.
Jul 7 06:02:59.937553 systemd[1]: Started sshd@23-10.0.0.17:22-10.0.0.1:58462.service - OpenSSH per-connection server daemon (10.0.0.1:58462).
Jul 7 06:02:59.996963 sshd[5817]: Accepted publickey for core from 10.0.0.1 port 58462 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:02:59.998963 sshd-session[5817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:03:00.007622 systemd-logind[1584]: New session 24 of user core.
Jul 7 06:03:00.022684 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 7 06:03:00.178643 sshd[5819]: Connection closed by 10.0.0.1 port 58462
Jul 7 06:03:00.179050 sshd-session[5817]: pam_unix(sshd:session): session closed for user core
Jul 7 06:03:00.184254 systemd[1]: sshd@23-10.0.0.17:22-10.0.0.1:58462.service: Deactivated successfully.
Jul 7 06:03:00.187093 systemd[1]: session-24.scope: Deactivated successfully.
Jul 7 06:03:00.189037 systemd-logind[1584]: Session 24 logged out. Waiting for processes to exit.
Jul 7 06:03:00.190693 systemd-logind[1584]: Removed session 24.
Jul 7 06:03:05.195157 systemd[1]: Started sshd@24-10.0.0.17:22-10.0.0.1:58472.service - OpenSSH per-connection server daemon (10.0.0.1:58472).
Jul 7 06:03:05.255297 sshd[5833]: Accepted publickey for core from 10.0.0.1 port 58472 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:03:05.258159 sshd-session[5833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:03:05.267830 systemd-logind[1584]: New session 25 of user core.
Jul 7 06:03:05.272246 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 7 06:03:05.467954 sshd[5835]: Connection closed by 10.0.0.1 port 58472
Jul 7 06:03:05.467324 sshd-session[5833]: pam_unix(sshd:session): session closed for user core
Jul 7 06:03:05.473601 systemd[1]: sshd@24-10.0.0.17:22-10.0.0.1:58472.service: Deactivated successfully.
Jul 7 06:03:05.476429 systemd[1]: session-25.scope: Deactivated successfully.
Jul 7 06:03:05.477421 systemd-logind[1584]: Session 25 logged out. Waiting for processes to exit.
Jul 7 06:03:05.479251 systemd-logind[1584]: Removed session 25.
Jul 7 06:03:10.484487 systemd[1]: Started sshd@25-10.0.0.17:22-10.0.0.1:48608.service - OpenSSH per-connection server daemon (10.0.0.1:48608).
Jul 7 06:03:10.536513 sshd[5849]: Accepted publickey for core from 10.0.0.1 port 48608 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:03:10.538451 sshd-session[5849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:03:10.544173 systemd-logind[1584]: New session 26 of user core.
Jul 7 06:03:10.551259 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 7 06:03:10.692248 sshd[5851]: Connection closed by 10.0.0.1 port 48608
Jul 7 06:03:10.693396 sshd-session[5849]: pam_unix(sshd:session): session closed for user core
Jul 7 06:03:10.700101 systemd[1]: sshd@25-10.0.0.17:22-10.0.0.1:48608.service: Deactivated successfully.
Jul 7 06:03:10.703042 systemd[1]: session-26.scope: Deactivated successfully.
Jul 7 06:03:10.705631 systemd-logind[1584]: Session 26 logged out. Waiting for processes to exit.
Jul 7 06:03:10.709623 systemd-logind[1584]: Removed session 26.
Jul 7 06:03:11.046970 containerd[1605]: time="2025-07-07T06:03:11.046899542Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a9da0435acbb7f4a106fcb26b54e6b55d708dce110cac5c3e4f305017e6ed7c4\" id:\"503522f378f699272c2ad2493e5f6281ff817294969fb8e18fb21c1d3381ec0f\" pid:5875 exited_at:{seconds:1751868191 nanos:46069723}"