Jul 9 13:11:15.828085 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Jul 9 08:38:39 -00 2025
Jul 9 13:11:15.828108 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f85d3be94c634d7d72fbcd0e670073ce56ae2e0cc763f83b329300b7cea5203d
Jul 9 13:11:15.828119 kernel: BIOS-provided physical RAM map:
Jul 9 13:11:15.828125 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 9 13:11:15.828132 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 9 13:11:15.828138 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 9 13:11:15.828146 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 9 13:11:15.828152 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 9 13:11:15.828164 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jul 9 13:11:15.828170 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jul 9 13:11:15.828177 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jul 9 13:11:15.828184 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jul 9 13:11:15.828192 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jul 9 13:11:15.828201 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jul 9 13:11:15.828214 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jul 9 13:11:15.828223 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 9 13:11:15.828232 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jul 9 13:11:15.828241 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jul 9 13:11:15.828250 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jul 9 13:11:15.828258 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jul 9 13:11:15.828265 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jul 9 13:11:15.828272 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 9 13:11:15.828279 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 9 13:11:15.828285 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 9 13:11:15.828292 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jul 9 13:11:15.828302 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 9 13:11:15.828309 kernel: NX (Execute Disable) protection: active
Jul 9 13:11:15.828315 kernel: APIC: Static calls initialized
Jul 9 13:11:15.828322 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jul 9 13:11:15.828329 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jul 9 13:11:15.828336 kernel: extended physical RAM map:
Jul 9 13:11:15.828343 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 9 13:11:15.828350 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 9 13:11:15.828357 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 9 13:11:15.828364 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 9 13:11:15.828371 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 9 13:11:15.828380 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jul 9 13:11:15.828387 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jul 9 13:11:15.828393 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jul 9 13:11:15.828401 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jul 9 13:11:15.828411 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jul 9 13:11:15.828418 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jul 9 13:11:15.828427 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jul 9 13:11:15.828434 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jul 9 13:11:15.828442 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jul 9 13:11:15.828449 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jul 9 13:11:15.828456 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jul 9 13:11:15.828463 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 9 13:11:15.828470 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jul 9 13:11:15.828478 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jul 9 13:11:15.828485 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jul 9 13:11:15.828492 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jul 9 13:11:15.828501 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jul 9 13:11:15.828509 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 9 13:11:15.828516 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 9 13:11:15.828523 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 9 13:11:15.828530 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jul 9 13:11:15.828537 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 9 13:11:15.828547 kernel: efi: EFI v2.7 by EDK II
Jul 9 13:11:15.828555 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jul 9 13:11:15.828562 kernel: random: crng init done
Jul 9 13:11:15.828569 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jul 9 13:11:15.828577 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jul 9 13:11:15.828586 kernel: secureboot: Secure boot disabled
Jul 9 13:11:15.828593 kernel: SMBIOS 2.8 present.
Jul 9 13:11:15.828600 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jul 9 13:11:15.828607 kernel: DMI: Memory slots populated: 1/1
Jul 9 13:11:15.828614 kernel: Hypervisor detected: KVM
Jul 9 13:11:15.828622 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 9 13:11:15.828629 kernel: kvm-clock: using sched offset of 4911027427 cycles
Jul 9 13:11:15.828636 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 9 13:11:15.828644 kernel: tsc: Detected 2794.750 MHz processor
Jul 9 13:11:15.828652 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 9 13:11:15.828659 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 9 13:11:15.828668 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jul 9 13:11:15.828676 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 9 13:11:15.828683 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 9 13:11:15.828691 kernel: Using GB pages for direct mapping
Jul 9 13:11:15.828698 kernel: ACPI: Early table checksum verification disabled
Jul 9 13:11:15.828705 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jul 9 13:11:15.828713 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jul 9 13:11:15.828720 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 13:11:15.828728 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 13:11:15.828737 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jul 9 13:11:15.828745 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 13:11:15.828752 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 13:11:15.828759 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 13:11:15.828767 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 13:11:15.828782 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 9 13:11:15.828789 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jul 9 13:11:15.828796 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jul 9 13:11:15.828806 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jul 9 13:11:15.828813 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jul 9 13:11:15.828821 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jul 9 13:11:15.828828 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jul 9 13:11:15.828835 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jul 9 13:11:15.828843 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jul 9 13:11:15.828850 kernel: No NUMA configuration found
Jul 9 13:11:15.828857 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jul 9 13:11:15.828865 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jul 9 13:11:15.828873 kernel: Zone ranges:
Jul 9 13:11:15.828928 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 9 13:11:15.828937 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jul 9 13:11:15.828944 kernel: Normal empty
Jul 9 13:11:15.828960 kernel: Device empty
Jul 9 13:11:15.828976 kernel: Movable zone start for each node
Jul 9 13:11:15.828998 kernel: Early memory node ranges
Jul 9 13:11:15.829007 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 9 13:11:15.829014 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jul 9 13:11:15.829022 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jul 9 13:11:15.829032 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jul 9 13:11:15.829040 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jul 9 13:11:15.829054 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jul 9 13:11:15.829069 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jul 9 13:11:15.829078 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jul 9 13:11:15.829100 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jul 9 13:11:15.829116 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 9 13:11:15.829127 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 9 13:11:15.829144 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jul 9 13:11:15.829152 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 9 13:11:15.829159 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jul 9 13:11:15.829167 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jul 9 13:11:15.829177 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jul 9 13:11:15.829185 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jul 9 13:11:15.829192 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jul 9 13:11:15.829200 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 9 13:11:15.829207 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 9 13:11:15.829218 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 9 13:11:15.829225 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 9 13:11:15.829233 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 9 13:11:15.829241 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 9 13:11:15.829248 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 9 13:11:15.829256 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 9 13:11:15.829264 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 9 13:11:15.829271 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 9 13:11:15.829279 kernel: TSC deadline timer available
Jul 9 13:11:15.829289 kernel: CPU topo: Max. logical packages: 1
Jul 9 13:11:15.829296 kernel: CPU topo: Max. logical dies: 1
Jul 9 13:11:15.829304 kernel: CPU topo: Max. dies per package: 1
Jul 9 13:11:15.829311 kernel: CPU topo: Max. threads per core: 1
Jul 9 13:11:15.829319 kernel: CPU topo: Num. cores per package: 4
Jul 9 13:11:15.829326 kernel: CPU topo: Num. threads per package: 4
Jul 9 13:11:15.829334 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jul 9 13:11:15.829341 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 9 13:11:15.829349 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 9 13:11:15.829359 kernel: kvm-guest: setup PV sched yield
Jul 9 13:11:15.829366 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jul 9 13:11:15.829374 kernel: Booting paravirtualized kernel on KVM
Jul 9 13:11:15.829382 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 9 13:11:15.829389 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 9 13:11:15.829397 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jul 9 13:11:15.829405 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jul 9 13:11:15.829412 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 9 13:11:15.829419 kernel: kvm-guest: PV spinlocks enabled
Jul 9 13:11:15.829429 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 9 13:11:15.829438 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f85d3be94c634d7d72fbcd0e670073ce56ae2e0cc763f83b329300b7cea5203d
Jul 9 13:11:15.829446 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 9 13:11:15.829454 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 9 13:11:15.829461 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 9 13:11:15.829469 kernel: Fallback order for Node 0: 0
Jul 9 13:11:15.829477 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jul 9 13:11:15.829484 kernel: Policy zone: DMA32
Jul 9 13:11:15.829492 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 9 13:11:15.829501 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 9 13:11:15.829509 kernel: ftrace: allocating 40097 entries in 157 pages
Jul 9 13:11:15.829516 kernel: ftrace: allocated 157 pages with 5 groups
Jul 9 13:11:15.829524 kernel: Dynamic Preempt: voluntary
Jul 9 13:11:15.829531 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 9 13:11:15.829540 kernel: rcu: RCU event tracing is enabled.
Jul 9 13:11:15.829548 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 9 13:11:15.829555 kernel: Trampoline variant of Tasks RCU enabled.
Jul 9 13:11:15.829563 kernel: Rude variant of Tasks RCU enabled.
Jul 9 13:11:15.829573 kernel: Tracing variant of Tasks RCU enabled.
Jul 9 13:11:15.829581 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 9 13:11:15.829591 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 9 13:11:15.829598 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 9 13:11:15.829606 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 9 13:11:15.829614 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 9 13:11:15.829622 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 9 13:11:15.829629 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 9 13:11:15.829637 kernel: Console: colour dummy device 80x25
Jul 9 13:11:15.829646 kernel: printk: legacy console [ttyS0] enabled
Jul 9 13:11:15.829654 kernel: ACPI: Core revision 20240827
Jul 9 13:11:15.829661 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 9 13:11:15.829669 kernel: APIC: Switch to symmetric I/O mode setup
Jul 9 13:11:15.829677 kernel: x2apic enabled
Jul 9 13:11:15.829684 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 9 13:11:15.829692 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 9 13:11:15.829700 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 9 13:11:15.829707 kernel: kvm-guest: setup PV IPIs
Jul 9 13:11:15.829717 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 9 13:11:15.829725 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Jul 9 13:11:15.829732 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jul 9 13:11:15.829740 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 9 13:11:15.829748 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 9 13:11:15.829755 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 9 13:11:15.829763 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 9 13:11:15.829771 kernel: Spectre V2 : Mitigation: Retpolines
Jul 9 13:11:15.829786 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 9 13:11:15.829795 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 9 13:11:15.829803 kernel: RETBleed: Mitigation: untrained return thunk
Jul 9 13:11:15.829811 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 9 13:11:15.829819 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 9 13:11:15.829827 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 9 13:11:15.829835 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 9 13:11:15.829843 kernel: x86/bugs: return thunk changed
Jul 9 13:11:15.829850 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 9 13:11:15.829860 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 9 13:11:15.829867 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 9 13:11:15.829890 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 9 13:11:15.829898 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 9 13:11:15.829906 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 9 13:11:15.829914 kernel: Freeing SMP alternatives memory: 32K
Jul 9 13:11:15.829921 kernel: pid_max: default: 32768 minimum: 301
Jul 9 13:11:15.829929 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 9 13:11:15.829936 kernel: landlock: Up and running.
Jul 9 13:11:15.829946 kernel: SELinux: Initializing.
Jul 9 13:11:15.829954 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 9 13:11:15.829962 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 9 13:11:15.829969 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 9 13:11:15.829977 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 9 13:11:15.829985 kernel: ... version: 0
Jul 9 13:11:15.829992 kernel: ... bit width: 48
Jul 9 13:11:15.830000 kernel: ... generic registers: 6
Jul 9 13:11:15.830008 kernel: ... value mask: 0000ffffffffffff
Jul 9 13:11:15.830017 kernel: ... max period: 00007fffffffffff
Jul 9 13:11:15.830025 kernel: ... fixed-purpose events: 0
Jul 9 13:11:15.830032 kernel: ... event mask: 000000000000003f
Jul 9 13:11:15.830040 kernel: signal: max sigframe size: 1776
Jul 9 13:11:15.830047 kernel: rcu: Hierarchical SRCU implementation.
Jul 9 13:11:15.830055 kernel: rcu: Max phase no-delay instances is 400.
Jul 9 13:11:15.830065 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 9 13:11:15.830073 kernel: smp: Bringing up secondary CPUs ...
Jul 9 13:11:15.830081 kernel: smpboot: x86: Booting SMP configuration:
Jul 9 13:11:15.830090 kernel: .... node #0, CPUs: #1 #2 #3
Jul 9 13:11:15.830098 kernel: smp: Brought up 1 node, 4 CPUs
Jul 9 13:11:15.830111 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jul 9 13:11:15.830122 kernel: Memory: 2422668K/2565800K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54568K init, 2400K bss, 137196K reserved, 0K cma-reserved)
Jul 9 13:11:15.830135 kernel: devtmpfs: initialized
Jul 9 13:11:15.830148 kernel: x86/mm: Memory block size: 128MB
Jul 9 13:11:15.830162 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jul 9 13:11:15.830170 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jul 9 13:11:15.830178 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jul 9 13:11:15.830187 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jul 9 13:11:15.830195 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jul 9 13:11:15.830203 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jul 9 13:11:15.830210 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 9 13:11:15.830218 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 9 13:11:15.830225 kernel: pinctrl core: initialized pinctrl subsystem
Jul 9 13:11:15.830233 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 9 13:11:15.830241 kernel: audit: initializing netlink subsys (disabled)
Jul 9 13:11:15.830249 kernel: audit: type=2000 audit(1752066672.722:1): state=initialized audit_enabled=0 res=1
Jul 9 13:11:15.830260 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 9 13:11:15.830269 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 9 13:11:15.830278 kernel: cpuidle: using governor menu
Jul 9 13:11:15.830285 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 9 13:11:15.830293 kernel: dca service started, version 1.12.1
Jul 9 13:11:15.830301 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jul 9 13:11:15.830308 kernel: PCI: Using configuration type 1 for base access
Jul 9 13:11:15.830316 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 9 13:11:15.830324 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 9 13:11:15.830333 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 9 13:11:15.830341 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 9 13:11:15.830348 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 9 13:11:15.830356 kernel: ACPI: Added _OSI(Module Device)
Jul 9 13:11:15.830363 kernel: ACPI: Added _OSI(Processor Device)
Jul 9 13:11:15.830371 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 9 13:11:15.830379 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 9 13:11:15.830396 kernel: ACPI: Interpreter enabled
Jul 9 13:11:15.830412 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 9 13:11:15.830423 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 9 13:11:15.830431 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 9 13:11:15.830438 kernel: PCI: Using E820 reservations for host bridge windows
Jul 9 13:11:15.830446 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 9 13:11:15.830454 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 9 13:11:15.830674 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 9 13:11:15.830810 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 9 13:11:15.830964 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 9 13:11:15.830975 kernel: PCI host bridge to bus 0000:00
Jul 9 13:11:15.831111 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 9 13:11:15.831222 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 9 13:11:15.831336 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 9 13:11:15.831443 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jul 9 13:11:15.831562 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jul 9 13:11:15.831702 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jul 9 13:11:15.831867 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 9 13:11:15.832058 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 9 13:11:15.832196 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 9 13:11:15.832317 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jul 9 13:11:15.832435 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jul 9 13:11:15.832558 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jul 9 13:11:15.832705 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 9 13:11:15.832859 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 9 13:11:15.833016 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jul 9 13:11:15.833136 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jul 9 13:11:15.833255 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jul 9 13:11:15.833392 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 9 13:11:15.833518 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jul 9 13:11:15.833638 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jul 9 13:11:15.833757 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jul 9 13:11:15.833936 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 9 13:11:15.834063 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jul 9 13:11:15.834182 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jul 9 13:11:15.834301 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jul 9 13:11:15.834424 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jul 9 13:11:15.834558 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 9 13:11:15.834780 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 9 13:11:15.834938 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 9 13:11:15.835059 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jul 9 13:11:15.835179 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jul 9 13:11:15.835319 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 9 13:11:15.835443 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jul 9 13:11:15.835453 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 9 13:11:15.835461 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 9 13:11:15.835469 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 9 13:11:15.835476 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 9 13:11:15.835484 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 9 13:11:15.835492 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 9 13:11:15.835499 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 9 13:11:15.835509 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 9 13:11:15.835517 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 9 13:11:15.835524 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 9 13:11:15.835532 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 9 13:11:15.835540 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 9 13:11:15.835547 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 9 13:11:15.835555 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 9 13:11:15.835562 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 9 13:11:15.835570 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 9 13:11:15.835580 kernel: iommu: Default domain type: Translated
Jul 9 13:11:15.835587 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 9 13:11:15.835595 kernel: efivars: Registered efivars operations
Jul 9 13:11:15.835602 kernel: PCI: Using ACPI for IRQ routing
Jul 9 13:11:15.835610 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 9 13:11:15.835618 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jul 9 13:11:15.835625 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jul 9 13:11:15.835633 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jul 9 13:11:15.835640 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jul 9 13:11:15.835650 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jul 9 13:11:15.835657 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jul 9 13:11:15.835665 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jul 9 13:11:15.835672 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jul 9 13:11:15.835801 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 9 13:11:15.835937 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 9 13:11:15.836057 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 9 13:11:15.836067 kernel: vgaarb: loaded
Jul 9 13:11:15.836078 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 9 13:11:15.836086 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 9 13:11:15.836094 kernel: clocksource: Switched to clocksource kvm-clock
Jul 9 13:11:15.836102 kernel: VFS: Disk quotas dquot_6.6.0
Jul 9 13:11:15.836110 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 9 13:11:15.836117 kernel: pnp: PnP ACPI init
Jul 9 13:11:15.836285 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jul 9 13:11:15.836312 kernel: pnp: PnP ACPI: found 6 devices
Jul 9 13:11:15.836324 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 9 13:11:15.836332 kernel: NET: Registered PF_INET protocol family
Jul 9 13:11:15.836340 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 9 13:11:15.836348 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 9 13:11:15.836356 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 9 13:11:15.836364 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 9 13:11:15.836372 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 9 13:11:15.836380 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 9 13:11:15.836388 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 9 13:11:15.836397 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 9 13:11:15.836408 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 9 13:11:15.836419 kernel: NET: Registered PF_XDP protocol family
Jul 9 13:11:15.836556 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jul 9 13:11:15.836679 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jul 9 13:11:15.836802 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 9 13:11:15.837305 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 9 13:11:15.837420 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 9 13:11:15.837540 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jul 9 13:11:15.837672 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jul 9 13:11:15.837814 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jul 9 13:11:15.837825 kernel: PCI: CLS 0 bytes, default 64
Jul 9 13:11:15.837833 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Jul 9 13:11:15.837842 kernel: Initialise system trusted keyrings
Jul 9 13:11:15.837851 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 9 13:11:15.837859 kernel: Key type asymmetric registered
Jul 9 13:11:15.837870 kernel: Asymmetric key parser 'x509' registered
Jul 9 13:11:15.838062 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 9 13:11:15.838072 kernel: io scheduler mq-deadline registered
Jul 9 13:11:15.838083 kernel: io scheduler kyber registered
Jul 9 13:11:15.838091 kernel: io scheduler bfq registered
Jul 9 13:11:15.838099 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 9 13:11:15.838110 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 9 13:11:15.838118 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 9 13:11:15.838127 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 9 13:11:15.838138 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 9 13:11:15.838149 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 9 13:11:15.838160 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 9 13:11:15.838168 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 9 13:11:15.838176 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 9 13:11:15.838326 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 9 13:11:15.838444 kernel: rtc_cmos 00:04: registered as rtc0
Jul 9 13:11:15.838455 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 9 13:11:15.838564 kernel: rtc_cmos 00:04: setting system clock to 2025-07-09T13:11:15 UTC (1752066675)
Jul 9 13:11:15.838674 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 9 13:11:15.838685 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 9 13:11:15.838692 kernel: efifb: probing for efifb
Jul 9 13:11:15.838701 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jul 9 13:11:15.838708 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jul 9 13:11:15.838719 kernel: efifb: scrolling: redraw
Jul 9 13:11:15.838727 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 9 13:11:15.838736 kernel: Console: switching to colour frame buffer device 160x50
Jul 9 13:11:15.838744 kernel: fb0: EFI VGA frame buffer device
Jul 9 13:11:15.838752 kernel: pstore: Using crash dump compression: deflate
Jul 9 13:11:15.838760 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 9 13:11:15.838768 kernel: NET: Registered PF_INET6 protocol family
Jul 9 13:11:15.838785 kernel: Segment Routing with IPv6
Jul 9 13:11:15.838793 kernel: In-situ OAM (IOAM) with IPv6
Jul 9 13:11:15.838803 kernel: NET: Registered PF_PACKET protocol family
Jul 9 13:11:15.838811 kernel: Key type dns_resolver registered
Jul 9 13:11:15.838818 kernel: IPI shorthand broadcast: enabled
Jul 9 13:11:15.838826 kernel: sched_clock: Marking stable (3863002418, 163045951)->(4087428697, -61380328)
Jul 9 13:11:15.838834 kernel: registered taskstats version 1
Jul 9 13:11:15.838842 kernel: Loading compiled-in X.509 certificates
Jul 9 13:11:15.838850 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 8ba3d283fde4a005aa35ab9394afe8122b8a3878'
Jul 9 13:11:15.838858 kernel: Demotion targets for Node 0: null
Jul 9 13:11:15.838866 kernel: Key type .fscrypt registered
Jul 9 13:11:15.838888 kernel: Key type fscrypt-provisioning registered
Jul 9 13:11:15.838897 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 9 13:11:15.838905 kernel: ima: Allocated hash algorithm: sha1
Jul 9 13:11:15.838913 kernel: ima: No architecture policies found
Jul 9 13:11:15.838920 kernel: clk: Disabling unused clocks
Jul 9 13:11:15.838928 kernel: Warning: unable to open an initial console.
Jul 9 13:11:15.838937 kernel: Freeing unused kernel image (initmem) memory: 54568K
Jul 9 13:11:15.838945 kernel: Write protecting the kernel read-only data: 24576k
Jul 9 13:11:15.838953 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 9 13:11:15.838963 kernel: Run /init as init process
Jul 9 13:11:15.838971 kernel: with arguments:
Jul 9 13:11:15.838979 kernel: /init
Jul 9 13:11:15.838986 kernel: with environment:
Jul 9 13:11:15.838994 kernel: HOME=/
Jul 9 13:11:15.839002 kernel: TERM=linux
Jul 9 13:11:15.839010 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 9 13:11:15.839019 systemd[1]: Successfully made /usr/ read-only.
Jul 9 13:11:15.839032 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 9 13:11:15.839042 systemd[1]: Detected virtualization kvm.
Jul 9 13:11:15.839050 systemd[1]: Detected architecture x86-64.
Jul 9 13:11:15.839058 systemd[1]: Running in initrd.
Jul 9 13:11:15.839068 systemd[1]: No hostname configured, using default hostname.
Jul 9 13:11:15.839077 systemd[1]: Hostname set to .
Jul 9 13:11:15.839085 systemd[1]: Initializing machine ID from VM UUID.
Jul 9 13:11:15.839093 systemd[1]: Queued start job for default target initrd.target.
Jul 9 13:11:15.839104 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 13:11:15.839112 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 13:11:15.839121 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 9 13:11:15.839130 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 9 13:11:15.839138 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 9 13:11:15.839147 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 9 13:11:15.839157 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 9 13:11:15.839168 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 9 13:11:15.839176 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 13:11:15.839185 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 9 13:11:15.839193 systemd[1]: Reached target paths.target - Path Units.
Jul 9 13:11:15.839201 systemd[1]: Reached target slices.target - Slice Units.
Jul 9 13:11:15.839210 systemd[1]: Reached target swap.target - Swaps.
Jul 9 13:11:15.839218 systemd[1]: Reached target timers.target - Timer Units.
Jul 9 13:11:15.839226 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 9 13:11:15.839237 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 9 13:11:15.839245 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 9 13:11:15.839254 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 9 13:11:15.839262 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 13:11:15.839271 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 9 13:11:15.839279 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 13:11:15.839288 systemd[1]: Reached target sockets.target - Socket Units.
Jul 9 13:11:15.839296 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 9 13:11:15.839305 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 9 13:11:15.839315 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 9 13:11:15.839324 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 9 13:11:15.839332 systemd[1]: Starting systemd-fsck-usr.service...
Jul 9 13:11:15.839340 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 9 13:11:15.839349 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 9 13:11:15.839357 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 13:11:15.839366 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 9 13:11:15.839377 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 13:11:15.839385 systemd[1]: Finished systemd-fsck-usr.service.
Jul 9 13:11:15.839419 systemd-journald[220]: Collecting audit messages is disabled.
Jul 9 13:11:15.839442 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 9 13:11:15.839453 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 13:11:15.839461 systemd-journald[220]: Journal started
Jul 9 13:11:15.839481 systemd-journald[220]: Runtime Journal (/run/log/journal/39937b50828e475880f941cc3ba1c5f5) is 6M, max 48.5M, 42.4M free.
Jul 9 13:11:15.830454 systemd-modules-load[222]: Inserted module 'overlay'
Jul 9 13:11:15.841623 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 9 13:11:15.846986 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 9 13:11:15.849578 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 9 13:11:15.858899 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 9 13:11:15.860742 systemd-modules-load[222]: Inserted module 'br_netfilter'
Jul 9 13:11:15.861646 kernel: Bridge firewalling registered
Jul 9 13:11:15.861741 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 9 13:11:15.862789 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 9 13:11:15.864996 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 9 13:11:15.866266 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 9 13:11:15.876116 systemd-tmpfiles[239]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 9 13:11:15.879514 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 9 13:11:15.879906 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 13:11:15.882632 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 13:11:15.885997 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 9 13:11:15.888963 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 9 13:11:15.891912 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 9 13:11:15.914184 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f85d3be94c634d7d72fbcd0e670073ce56ae2e0cc763f83b329300b7cea5203d
Jul 9 13:11:15.934623 systemd-resolved[258]: Positive Trust Anchors:
Jul 9 13:11:15.934907 systemd-resolved[258]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 9 13:11:15.934936 systemd-resolved[258]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 9 13:11:15.937834 systemd-resolved[258]: Defaulting to hostname 'linux'.
Jul 9 13:11:15.939190 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 9 13:11:15.944674 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 9 13:11:16.060083 kernel: SCSI subsystem initialized
Jul 9 13:11:16.068900 kernel: Loading iSCSI transport class v2.0-870.
Jul 9 13:11:16.079910 kernel: iscsi: registered transport (tcp)
Jul 9 13:11:16.102978 kernel: iscsi: registered transport (qla4xxx)
Jul 9 13:11:16.103021 kernel: QLogic iSCSI HBA Driver
Jul 9 13:11:16.125374 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 9 13:11:16.156593 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 9 13:11:16.157693 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 9 13:11:16.219634 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 9 13:11:16.222187 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 9 13:11:16.283936 kernel: raid6: avx2x4 gen() 30149 MB/s
Jul 9 13:11:16.300912 kernel: raid6: avx2x2 gen() 30309 MB/s
Jul 9 13:11:16.317952 kernel: raid6: avx2x1 gen() 25800 MB/s
Jul 9 13:11:16.317977 kernel: raid6: using algorithm avx2x2 gen() 30309 MB/s
Jul 9 13:11:16.335955 kernel: raid6: .... xor() 19833 MB/s, rmw enabled
Jul 9 13:11:16.335989 kernel: raid6: using avx2x2 recovery algorithm
Jul 9 13:11:16.356909 kernel: xor: automatically using best checksumming function avx
Jul 9 13:11:16.527938 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 9 13:11:16.537782 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 9 13:11:16.540949 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 13:11:16.577386 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Jul 9 13:11:16.583409 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 13:11:16.585473 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 9 13:11:16.609447 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
Jul 9 13:11:16.642165 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 9 13:11:16.645776 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 9 13:11:16.725700 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 9 13:11:16.729075 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 9 13:11:16.767900 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 9 13:11:16.774455 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 9 13:11:16.774682 kernel: cryptd: max_cpu_qlen set to 1000
Jul 9 13:11:16.778272 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 9 13:11:16.778305 kernel: GPT:9289727 != 19775487
Jul 9 13:11:16.778316 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 9 13:11:16.778326 kernel: GPT:9289727 != 19775487
Jul 9 13:11:16.779200 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 9 13:11:16.779222 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 9 13:11:16.787896 kernel: AES CTR mode by8 optimization enabled
Jul 9 13:11:16.800009 kernel: libata version 3.00 loaded.
Jul 9 13:11:16.806357 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 9 13:11:16.806484 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 13:11:16.811359 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 13:11:16.815334 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 13:11:16.818644 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 9 13:11:16.827897 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jul 9 13:11:16.831912 kernel: ahci 0000:00:1f.2: version 3.0
Jul 9 13:11:16.834714 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 9 13:11:16.834739 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jul 9 13:11:16.834935 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jul 9 13:11:16.835077 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 9 13:11:16.837463 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 9 13:11:16.840445 kernel: scsi host0: ahci
Jul 9 13:11:16.840657 kernel: scsi host1: ahci
Jul 9 13:11:16.838501 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 13:11:16.844695 kernel: scsi host2: ahci
Jul 9 13:11:16.844940 kernel: scsi host3: ahci
Jul 9 13:11:16.847237 kernel: scsi host4: ahci
Jul 9 13:11:16.845439 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 9 13:11:16.854332 kernel: scsi host5: ahci
Jul 9 13:11:16.854537 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0
Jul 9 13:11:16.854550 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0
Jul 9 13:11:16.856090 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0
Jul 9 13:11:16.856112 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0
Jul 9 13:11:16.858673 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0
Jul 9 13:11:16.858696 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0
Jul 9 13:11:16.874854 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 9 13:11:16.884223 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 9 13:11:16.893685 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 9 13:11:16.901483 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 9 13:11:16.901562 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 9 13:11:16.907062 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 9 13:11:16.907761 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 13:11:16.938132 disk-uuid[632]: Primary Header is updated.
Jul 9 13:11:16.938132 disk-uuid[632]: Secondary Entries is updated.
Jul 9 13:11:16.938132 disk-uuid[632]: Secondary Header is updated.
Jul 9 13:11:16.942078 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 9 13:11:16.946912 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 9 13:11:16.950163 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 13:11:17.165112 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 9 13:11:17.165166 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 9 13:11:17.165179 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 9 13:11:17.165917 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 9 13:11:17.166902 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 9 13:11:17.167907 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 9 13:11:17.169047 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 9 13:11:17.169090 kernel: ata3.00: applying bridge limits
Jul 9 13:11:17.169102 kernel: ata3.00: configured for UDMA/100
Jul 9 13:11:17.170943 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 9 13:11:17.231480 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 9 13:11:17.231799 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 9 13:11:17.251907 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 9 13:11:17.675076 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 9 13:11:17.675828 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 9 13:11:17.678435 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 9 13:11:17.678655 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 9 13:11:17.680128 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 9 13:11:17.710171 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 9 13:11:17.949123 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 9 13:11:17.949203 disk-uuid[635]: The operation has completed successfully.
Jul 9 13:11:17.980030 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 9 13:11:17.980153 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 9 13:11:18.011858 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 9 13:11:18.042465 sh[666]: Success
Jul 9 13:11:18.063850 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 9 13:11:18.063918 kernel: device-mapper: uevent: version 1.0.3
Jul 9 13:11:18.063932 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 9 13:11:18.115904 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jul 9 13:11:18.148999 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 9 13:11:18.151274 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 9 13:11:18.175724 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 9 13:11:18.182653 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 9 13:11:18.182680 kernel: BTRFS: device fsid 082bcfbc-2c86-46fe-87f4-85dea5450235 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (678)
Jul 9 13:11:18.185460 kernel: BTRFS info (device dm-0): first mount of filesystem 082bcfbc-2c86-46fe-87f4-85dea5450235
Jul 9 13:11:18.185482 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 9 13:11:18.185494 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 9 13:11:18.189981 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 9 13:11:18.190501 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 9 13:11:18.191688 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 9 13:11:18.195394 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 9 13:11:18.197809 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 9 13:11:18.239914 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (709)
Jul 9 13:11:18.242437 kernel: BTRFS info (device vda6): first mount of filesystem 87056a6c-ee99-487a-9330-f1335025b841
Jul 9 13:11:18.242487 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 9 13:11:18.242508 kernel: BTRFS info (device vda6): using free-space-tree
Jul 9 13:11:18.250909 kernel: BTRFS info (device vda6): last unmount of filesystem 87056a6c-ee99-487a-9330-f1335025b841
Jul 9 13:11:18.251614 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 9 13:11:18.255227 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 9 13:11:18.478589 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 9 13:11:18.483221 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 9 13:11:18.495275 ignition[750]: Ignition 2.21.0
Jul 9 13:11:18.495294 ignition[750]: Stage: fetch-offline
Jul 9 13:11:18.495330 ignition[750]: no configs at "/usr/lib/ignition/base.d"
Jul 9 13:11:18.495340 ignition[750]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 9 13:11:18.495449 ignition[750]: parsed url from cmdline: ""
Jul 9 13:11:18.495453 ignition[750]: no config URL provided
Jul 9 13:11:18.495458 ignition[750]: reading system config file "/usr/lib/ignition/user.ign"
Jul 9 13:11:18.495467 ignition[750]: no config at "/usr/lib/ignition/user.ign"
Jul 9 13:11:18.495500 ignition[750]: op(1): [started] loading QEMU firmware config module
Jul 9 13:11:18.495505 ignition[750]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 9 13:11:18.504535 ignition[750]: op(1): [finished] loading QEMU firmware config module
Jul 9 13:11:18.534132 systemd-networkd[853]: lo: Link UP
Jul 9 13:11:18.534143 systemd-networkd[853]: lo: Gained carrier
Jul 9 13:11:18.535935 systemd-networkd[853]: Enumeration completed
Jul 9 13:11:18.536453 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 13:11:18.536457 systemd-networkd[853]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 9 13:11:18.537582 systemd-networkd[853]: eth0: Link UP
Jul 9 13:11:18.537587 systemd-networkd[853]: eth0: Gained carrier
Jul 9 13:11:18.537594 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 13:11:18.537755 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 9 13:11:18.540817 systemd[1]: Reached target network.target - Network.
Jul 9 13:11:18.551949 systemd-networkd[853]: eth0: DHCPv4 address 10.0.0.120/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 9 13:11:18.556611 ignition[750]: parsing config with SHA512: 2ef0ba9f0d8578d915be421e5c719a389c9b90511c2cb17985f810df7521dd77f34ca78926e76c1bc94189f38d18114f42b6bf3f686cc85184f81af244a299a2
Jul 9 13:11:18.592371 unknown[750]: fetched base config from "system"
Jul 9 13:11:18.592384 unknown[750]: fetched user config from "qemu"
Jul 9 13:11:18.592749 ignition[750]: fetch-offline: fetch-offline passed
Jul 9 13:11:18.592803 ignition[750]: Ignition finished successfully
Jul 9 13:11:18.596129 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 9 13:11:18.598609 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 9 13:11:18.599608 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 9 13:11:18.655974 ignition[862]: Ignition 2.21.0
Jul 9 13:11:18.655990 ignition[862]: Stage: kargs
Jul 9 13:11:18.656133 ignition[862]: no configs at "/usr/lib/ignition/base.d"
Jul 9 13:11:18.656144 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 9 13:11:18.660240 ignition[862]: kargs: kargs passed
Jul 9 13:11:18.660308 ignition[862]: Ignition finished successfully
Jul 9 13:11:18.665070 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 9 13:11:18.668163 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 9 13:11:18.717576 ignition[870]: Ignition 2.21.0
Jul 9 13:11:18.717592 ignition[870]: Stage: disks
Jul 9 13:11:18.717760 ignition[870]: no configs at "/usr/lib/ignition/base.d"
Jul 9 13:11:18.717771 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 9 13:11:18.722769 ignition[870]: disks: disks passed
Jul 9 13:11:18.722891 ignition[870]: Ignition finished successfully
Jul 9 13:11:18.726345 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 9 13:11:18.728555 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 9 13:11:18.729671 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 9 13:11:18.731826 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 9 13:11:18.733959 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 9 13:11:18.735840 systemd[1]: Reached target basic.target - Basic System.
Jul 9 13:11:18.738727 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 9 13:11:18.768379 systemd-fsck[880]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 9 13:11:18.777495 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 9 13:11:18.779123 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 9 13:11:18.914912 kernel: EXT4-fs (vda9): mounted filesystem b08a603c-44fa-43af-af80-90bed9b8770a r/w with ordered data mode. Quota mode: none.
Jul 9 13:11:18.915351 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 9 13:11:18.917701 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 9 13:11:18.921051 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 9 13:11:18.923513 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 9 13:11:18.925437 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 9 13:11:18.925484 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 9 13:11:18.925506 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 9 13:11:18.937426 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 9 13:11:18.938961 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 9 13:11:18.944720 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (888)
Jul 9 13:11:18.944754 kernel: BTRFS info (device vda6): first mount of filesystem 87056a6c-ee99-487a-9330-f1335025b841
Jul 9 13:11:18.944766 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 9 13:11:18.945555 kernel: BTRFS info (device vda6): using free-space-tree
Jul 9 13:11:18.950454 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 9 13:11:18.980079 initrd-setup-root[912]: cut: /sysroot/etc/passwd: No such file or directory
Jul 9 13:11:18.985308 initrd-setup-root[919]: cut: /sysroot/etc/group: No such file or directory
Jul 9 13:11:18.990224 initrd-setup-root[926]: cut: /sysroot/etc/shadow: No such file or directory
Jul 9 13:11:18.995028 initrd-setup-root[933]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 9 13:11:19.087629 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 9 13:11:19.090024 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 9 13:11:19.090764 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 9 13:11:19.119926 kernel: BTRFS info (device vda6): last unmount of filesystem 87056a6c-ee99-487a-9330-f1335025b841
Jul 9 13:11:19.141095 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 9 13:11:19.163872 ignition[1002]: INFO : Ignition 2.21.0
Jul 9 13:11:19.163872 ignition[1002]: INFO : Stage: mount
Jul 9 13:11:19.165526 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 9 13:11:19.165526 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 9 13:11:19.168046 ignition[1002]: INFO : mount: mount passed
Jul 9 13:11:19.168795 ignition[1002]: INFO : Ignition finished successfully
Jul 9 13:11:19.172336 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 9 13:11:19.175357 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 9 13:11:19.182099 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 9 13:11:19.207890 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 9 13:11:19.233906 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1014)
Jul 9 13:11:19.236399 kernel: BTRFS info (device vda6): first mount of filesystem 87056a6c-ee99-487a-9330-f1335025b841
Jul 9 13:11:19.236417 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 9 13:11:19.236428 kernel: BTRFS info (device vda6): using free-space-tree
Jul 9 13:11:19.240075 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 9 13:11:19.276408 ignition[1031]: INFO : Ignition 2.21.0
Jul 9 13:11:19.276408 ignition[1031]: INFO : Stage: files
Jul 9 13:11:19.278313 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 9 13:11:19.278313 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 9 13:11:19.280659 ignition[1031]: DEBUG : files: compiled without relabeling support, skipping
Jul 9 13:11:19.281767 ignition[1031]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 9 13:11:19.281767 ignition[1031]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 9 13:11:19.284894 ignition[1031]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 9 13:11:19.284894 ignition[1031]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 9 13:11:19.284894 ignition[1031]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 9 13:11:19.284276 unknown[1031]: wrote ssh authorized keys file for user: core
Jul 9 13:11:19.290701 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 9 13:11:19.290701 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jul 9 13:11:19.383417 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 9 13:11:19.951024 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jul 9 13:11:19.951024 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 9 13:11:19.955073 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 9 13:11:19.955073 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 9 13:11:19.955073 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 9 13:11:19.955073 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 9 13:11:19.955073 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 9 13:11:19.955073 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 9 13:11:19.955073 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 9 13:11:19.967283 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 9 13:11:19.967283 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 9 13:11:19.967283 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 9 13:11:19.967283 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 9 13:11:19.967283 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 9 13:11:19.967283 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jul 9 13:11:20.398168 systemd-networkd[853]: eth0: Gained IPv6LL
Jul 9 13:11:20.699277 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 9 13:11:21.059501 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jul 9 13:11:21.059501 ignition[1031]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 9 13:11:21.063265 ignition[1031]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 9 13:11:21.068946 ignition[1031]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 9 13:11:21.068946 ignition[1031]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 9 13:11:21.068946 ignition[1031]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 9 13:11:21.073410 ignition[1031]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 9 13:11:21.075288 ignition[1031]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 9 13:11:21.075288 ignition[1031]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 9 13:11:21.078272 ignition[1031]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 9 13:11:21.096254 ignition[1031]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 9 13:11:21.101462 ignition[1031]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 9 13:11:21.103164 ignition[1031]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 9 13:11:21.103164 ignition[1031]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 9 13:11:21.105829 ignition[1031]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 9 13:11:21.105829 ignition[1031]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 9 13:11:21.105829 ignition[1031]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 9 13:11:21.105829 ignition[1031]: INFO : files: files passed
Jul 9 13:11:21.105829 ignition[1031]: INFO : Ignition finished successfully
Jul 9 13:11:21.111732 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 9 13:11:21.114418 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 9 13:11:21.117299 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 9 13:11:21.130723 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 9 13:11:21.130992 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 9 13:11:21.134081 initrd-setup-root-after-ignition[1060]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 9 13:11:21.137855 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 9 13:11:21.137855 initrd-setup-root-after-ignition[1062]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 9 13:11:21.141083 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 9 13:11:21.142900 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 9 13:11:21.144598 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 9 13:11:21.146271 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 9 13:11:21.201643 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 9 13:11:21.201780 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 9 13:11:21.205104 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 9 13:11:21.205183 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 9 13:11:21.208859 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 9 13:11:21.210458 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 9 13:11:21.247936 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 9 13:11:21.250612 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 9 13:11:21.279209 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 9 13:11:21.279361 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 9 13:11:21.282541 systemd[1]: Stopped target timers.target - Timer Units.
Jul 9 13:11:21.283654 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 9 13:11:21.283779 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 9 13:11:21.288240 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 9 13:11:21.288377 systemd[1]: Stopped target basic.target - Basic System.
Jul 9 13:11:21.290242 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 9 13:11:21.290508 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 9 13:11:21.290833 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 9 13:11:21.291308 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 9 13:11:21.291631 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 9 13:11:21.291960 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 9 13:11:21.292425 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 9 13:11:21.292761 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 9 13:11:21.293213 systemd[1]: Stopped target swap.target - Swaps.
Jul 9 13:11:21.293498 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 9 13:11:21.293605 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 9 13:11:21.309477 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 9 13:11:21.310554 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 13:11:21.310842 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 9 13:11:21.311293 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 13:11:21.314441 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 9 13:11:21.314548 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 9 13:11:21.316717 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 9 13:11:21.316839 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 9 13:11:21.319434 systemd[1]: Stopped target paths.target - Path Units.
Jul 9 13:11:21.319664 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 9 13:11:21.325980 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 13:11:21.328650 systemd[1]: Stopped target slices.target - Slice Units.
Jul 9 13:11:21.328792 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 9 13:11:21.330410 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 9 13:11:21.330506 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 9 13:11:21.332961 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 9 13:11:21.333048 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 9 13:11:21.333990 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 9 13:11:21.334109 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 9 13:11:21.334429 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 9 13:11:21.334528 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 9 13:11:21.340090 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 9 13:11:21.344583 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 9 13:11:21.345474 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 9 13:11:21.345593 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 9 13:11:21.347541 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 9 13:11:21.347648 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 9 13:11:21.353530 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 9 13:11:21.355062 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 9 13:11:21.369108 ignition[1087]: INFO : Ignition 2.21.0
Jul 9 13:11:21.369108 ignition[1087]: INFO : Stage: umount
Jul 9 13:11:21.371288 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 9 13:11:21.371288 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 9 13:11:21.373411 ignition[1087]: INFO : umount: umount passed
Jul 9 13:11:21.373411 ignition[1087]: INFO : Ignition finished successfully
Jul 9 13:11:21.377408 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 9 13:11:21.378092 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 9 13:11:21.378210 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 9 13:11:21.380155 systemd[1]: Stopped target network.target - Network.
Jul 9 13:11:21.381908 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 9 13:11:21.381988 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 9 13:11:21.382740 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 9 13:11:21.382788 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 9 13:11:21.383193 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 9 13:11:21.383241 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 9 13:11:21.383852 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 9 13:11:21.383908 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 9 13:11:21.384482 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 9 13:11:21.389999 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 9 13:11:21.397499 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 9 13:11:21.397676 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 9 13:11:21.402703 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 9 13:11:21.403013 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 9 13:11:21.403139 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 9 13:11:21.406794 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 9 13:11:21.407586 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 9 13:11:21.410159 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 9 13:11:21.410224 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 13:11:21.413301 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 9 13:11:21.413374 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 9 13:11:21.413425 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 9 13:11:21.413768 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 9 13:11:21.413823 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 9 13:11:21.421570 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 9 13:11:21.421651 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 9 13:11:21.423798 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 9 13:11:21.423847 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 13:11:21.426928 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 13:11:21.431544 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 9 13:11:21.431627 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 9 13:11:21.439431 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 9 13:11:21.439598 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 9 13:11:21.443840 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 9 13:11:21.444066 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 13:11:21.445118 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 9 13:11:21.445167 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 9 13:11:21.447104 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 9 13:11:21.447139 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 13:11:21.447390 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 9 13:11:21.447434 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 9 13:11:21.448170 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 9 13:11:21.448219 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 9 13:11:21.448824 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 9 13:11:21.448869 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 9 13:11:21.458529 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 9 13:11:21.459473 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 9 13:11:21.459527 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 9 13:11:21.464681 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 9 13:11:21.464737 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 13:11:21.469142 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 9 13:11:21.469195 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 9 13:11:21.472653 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 9 13:11:21.472704 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 13:11:21.475799 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 9 13:11:21.475854 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 13:11:21.479990 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 9 13:11:21.480055 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jul 9 13:11:21.480107 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 9 13:11:21.480156 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 9 13:11:21.496051 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 9 13:11:21.496170 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 9 13:11:21.606162 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 9 13:11:21.606309 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 9 13:11:21.607595 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 9 13:11:21.609919 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 9 13:11:21.609982 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 9 13:11:21.613858 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 9 13:11:21.635281 systemd[1]: Switching root.
Jul 9 13:11:21.673439 systemd-journald[220]: Journal stopped
Jul 9 13:11:22.895540 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Jul 9 13:11:22.895620 kernel: SELinux: policy capability network_peer_controls=1
Jul 9 13:11:22.895635 kernel: SELinux: policy capability open_perms=1
Jul 9 13:11:22.895646 kernel: SELinux: policy capability extended_socket_class=1
Jul 9 13:11:22.895663 kernel: SELinux: policy capability always_check_network=0
Jul 9 13:11:22.895674 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 9 13:11:22.895716 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 9 13:11:22.895734 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 9 13:11:22.895749 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 9 13:11:22.895762 kernel: SELinux: policy capability userspace_initial_context=0
Jul 9 13:11:22.895775 kernel: audit: type=1403 audit(1752066682.125:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 9 13:11:22.895791 systemd[1]: Successfully loaded SELinux policy in 59.344ms.
Jul 9 13:11:22.895818 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.288ms.
Jul 9 13:11:22.895835 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 9 13:11:22.895851 systemd[1]: Detected virtualization kvm.
Jul 9 13:11:22.895869 systemd[1]: Detected architecture x86-64.
Jul 9 13:11:22.895904 systemd[1]: Detected first boot.
Jul 9 13:11:22.895922 systemd[1]: Initializing machine ID from VM UUID.
Jul 9 13:11:22.895935 zram_generator::config[1134]: No configuration found.
Jul 9 13:11:22.895958 kernel: Guest personality initialized and is inactive
Jul 9 13:11:22.895976 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 9 13:11:22.895991 kernel: Initialized host personality
Jul 9 13:11:22.896005 kernel: NET: Registered PF_VSOCK protocol family
Jul 9 13:11:22.896026 systemd[1]: Populated /etc with preset unit settings.
Jul 9 13:11:22.896043 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 9 13:11:22.896059 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 9 13:11:22.896077 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 9 13:11:22.896093 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 9 13:11:22.896111 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 9 13:11:22.896128 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 9 13:11:22.896144 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 9 13:11:22.896160 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 9 13:11:22.896177 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 9 13:11:22.896189 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 9 13:11:22.896202 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 9 13:11:22.896214 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 9 13:11:22.896226 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 13:11:22.896247 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 13:11:22.896263 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 9 13:11:22.896279 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 9 13:11:22.896300 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 9 13:11:22.896318 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 9 13:11:22.896334 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 9 13:11:22.896349 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 13:11:22.896365 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 9 13:11:22.896381 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 9 13:11:22.896396 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 9 13:11:22.896411 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 9 13:11:22.896436 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 9 13:11:22.896454 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 9 13:11:22.896471 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 9 13:11:22.896487 systemd[1]: Reached target slices.target - Slice Units.
Jul 9 13:11:22.896502 systemd[1]: Reached target swap.target - Swaps.
Jul 9 13:11:22.896518 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 9 13:11:22.896534 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 9 13:11:22.896550 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 9 13:11:22.896566 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 13:11:22.896595 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 9 13:11:22.896612 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 13:11:22.896627 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 9 13:11:22.896643 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 9 13:11:22.896658 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 9 13:11:22.896673 systemd[1]: Mounting media.mount - External Media Directory...
Jul 9 13:11:22.896689 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 13:11:22.896704 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 9 13:11:22.896719 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 9 13:11:22.896737 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 9 13:11:22.896752 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 9 13:11:22.896768 systemd[1]: Reached target machines.target - Containers.
Jul 9 13:11:22.896783 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 9 13:11:22.896802 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 13:11:22.896817 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 9 13:11:22.896832 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 9 13:11:22.896848 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 13:11:22.896863 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 9 13:11:22.896899 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 13:11:22.896915 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 9 13:11:22.896931 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 13:11:22.896968 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 9 13:11:22.896983 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 9 13:11:22.896999 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 9 13:11:22.897015 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 9 13:11:22.897031 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 9 13:11:22.897050 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 13:11:22.897065 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 9 13:11:22.897081 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 9 13:11:22.897096 kernel: loop: module loaded
Jul 9 13:11:22.897111 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 9 13:11:22.897126 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 9 13:11:22.897142 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 9 13:11:22.897161 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 9 13:11:22.897179 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 9 13:11:22.897194 systemd[1]: Stopped verity-setup.service.
Jul 9 13:11:22.897212 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 13:11:22.897226 kernel: fuse: init (API version 7.41)
Jul 9 13:11:22.897242 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 9 13:11:22.897258 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 9 13:11:22.897302 systemd-journald[1205]: Collecting audit messages is disabled.
Jul 9 13:11:22.897330 systemd[1]: Mounted media.mount - External Media Directory.
Jul 9 13:11:22.897346 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 9 13:11:22.897361 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 9 13:11:22.897381 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 9 13:11:22.897398 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 13:11:22.897415 systemd-journald[1205]: Journal started
Jul 9 13:11:22.897445 systemd-journald[1205]: Runtime Journal (/run/log/journal/39937b50828e475880f941cc3ba1c5f5) is 6M, max 48.5M, 42.4M free.
Jul 9 13:11:22.656815 systemd[1]: Queued start job for default target multi-user.target.
Jul 9 13:11:22.678026 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 9 13:11:22.678501 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 9 13:11:22.898956 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 9 13:11:22.898980 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 9 13:11:22.901759 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 9 13:11:22.903203 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 13:11:22.903486 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 13:11:22.904124 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 13:11:22.904365 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 13:11:22.905376 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 9 13:11:22.905670 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 9 13:11:22.906569 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 13:11:22.906792 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 13:11:22.907425 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 9 13:11:22.908030 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 9 13:11:22.908654 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 9 13:11:22.909619 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 9 13:11:22.914913 kernel: ACPI: bus type drm_connector registered
Jul 9 13:11:22.916639 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 9 13:11:22.918630 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 9 13:11:22.926234 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 9 13:11:22.932424 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 9 13:11:22.935114 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 9 13:11:22.937445 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 9 13:11:22.938616 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 9 13:11:22.938651 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 9 13:11:22.940615 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 9 13:11:22.948265 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 9 13:11:22.950543 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 13:11:22.952002 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 9 13:11:22.953976 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 9 13:11:22.955210 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 9 13:11:22.956698 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 9 13:11:22.957817 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 9 13:11:22.960194 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 9 13:11:22.963013 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 9 13:11:22.969471 systemd-journald[1205]: Time spent on flushing to /var/log/journal/39937b50828e475880f941cc3ba1c5f5 is 22.139ms for 1068 entries.
Jul 9 13:11:22.969471 systemd-journald[1205]: System Journal (/var/log/journal/39937b50828e475880f941cc3ba1c5f5) is 8M, max 195.6M, 187.6M free.
Jul 9 13:11:23.005791 systemd-journald[1205]: Received client request to flush runtime journal.
Jul 9 13:11:23.005934 kernel: loop0: detected capacity change from 0 to 229808
Jul 9 13:11:22.968166 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 9 13:11:22.971141 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 9 13:11:22.973053 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 9 13:11:22.977971 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 9 13:11:22.980296 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 9 13:11:22.983744 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 9 13:11:23.000451 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 9 13:11:23.009214 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 9 13:11:23.016164 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Jul 9 13:11:23.016181 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Jul 9 13:11:23.021104 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 9 13:11:23.028080 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 9 13:11:23.029551 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 13:11:23.031289 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 9 13:11:23.038909 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 9 13:11:23.061799 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 9 13:11:23.064435 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 9 13:11:23.067905 kernel: loop1: detected capacity change from 0 to 146480 Jul 9 13:11:23.085552 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Jul 9 13:11:23.085583 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Jul 9 13:11:23.090710 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 9 13:11:23.104921 kernel: loop2: detected capacity change from 0 to 114008 Jul 9 13:11:23.137920 kernel: loop3: detected capacity change from 0 to 229808 Jul 9 13:11:23.147947 kernel: loop4: detected capacity change from 0 to 146480 Jul 9 13:11:23.164911 kernel: loop5: detected capacity change from 0 to 114008 Jul 9 13:11:23.175616 (sd-merge)[1279]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 9 13:11:23.176233 (sd-merge)[1279]: Merged extensions into '/usr'. 
Jul 9 13:11:23.180923 systemd[1]: Reload requested from client PID 1253 ('systemd-sysext') (unit systemd-sysext.service)... Jul 9 13:11:23.180940 systemd[1]: Reloading... Jul 9 13:11:23.236948 zram_generator::config[1305]: No configuration found. Jul 9 13:11:23.299543 ldconfig[1248]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 9 13:11:23.351296 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 13:11:23.446638 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 9 13:11:23.446758 systemd[1]: Reloading finished in 265 ms. Jul 9 13:11:23.603220 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 9 13:11:23.604832 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 9 13:11:23.623518 systemd[1]: Starting ensure-sysext.service... Jul 9 13:11:23.625579 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 9 13:11:23.638093 systemd[1]: Reload requested from client PID 1342 ('systemctl') (unit ensure-sysext.service)... Jul 9 13:11:23.638114 systemd[1]: Reloading... Jul 9 13:11:23.645344 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 9 13:11:23.645385 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 9 13:11:23.645742 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 9 13:11:23.646054 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Jul 9 13:11:23.647037 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 9 13:11:23.647343 systemd-tmpfiles[1343]: ACLs are not supported, ignoring. Jul 9 13:11:23.647419 systemd-tmpfiles[1343]: ACLs are not supported, ignoring. Jul 9 13:11:23.652081 systemd-tmpfiles[1343]: Detected autofs mount point /boot during canonicalization of boot. Jul 9 13:11:23.652097 systemd-tmpfiles[1343]: Skipping /boot Jul 9 13:11:23.662548 systemd-tmpfiles[1343]: Detected autofs mount point /boot during canonicalization of boot. Jul 9 13:11:23.662575 systemd-tmpfiles[1343]: Skipping /boot Jul 9 13:11:23.702992 zram_generator::config[1373]: No configuration found. Jul 9 13:11:23.811735 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 13:11:23.919930 systemd[1]: Reloading finished in 281 ms. Jul 9 13:11:23.951376 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 9 13:11:23.977001 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 9 13:11:23.986594 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 9 13:11:23.988889 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 9 13:11:23.991229 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 9 13:11:24.003436 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 9 13:11:24.006508 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 13:11:24.010099 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Jul 9 13:11:24.015137 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 9 13:11:24.015312 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 13:11:24.024376 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 9 13:11:24.028095 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 9 13:11:24.030629 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 9 13:11:24.031747 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 13:11:24.031847 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 13:11:24.036207 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 9 13:11:24.037294 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 9 13:11:24.039057 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 9 13:11:24.041493 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 9 13:11:24.041744 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 9 13:11:24.048562 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 9 13:11:24.049027 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 9 13:11:24.051111 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 9 13:11:24.051330 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jul 9 13:11:24.055734 systemd-udevd[1413]: Using default interface naming scheme 'v255'. Jul 9 13:11:24.058992 augenrules[1441]: No rules Jul 9 13:11:24.060101 systemd[1]: audit-rules.service: Deactivated successfully. Jul 9 13:11:24.060404 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 9 13:11:24.062293 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 9 13:11:24.067919 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 9 13:11:24.068153 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 13:11:24.069923 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 9 13:11:24.073068 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 9 13:11:24.078149 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 9 13:11:24.078307 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 13:11:24.078425 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 13:11:24.080013 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 9 13:11:24.080074 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 9 13:11:24.084023 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 9 13:11:24.088819 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jul 9 13:11:24.091155 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 13:11:24.094141 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 9 13:11:24.095362 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 13:11:24.095483 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 13:11:24.095632 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 9 13:11:24.097190 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 9 13:11:24.099134 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 13:11:24.101388 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 9 13:11:24.101639 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 9 13:11:24.103491 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 9 13:11:24.103721 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 9 13:11:24.105563 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 9 13:11:24.109237 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 9 13:11:24.110776 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 9 13:11:24.112433 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 9 13:11:24.115564 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 9 13:11:24.115806 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jul 9 13:11:24.127771 augenrules[1453]: /sbin/augenrules: No change Jul 9 13:11:24.129789 systemd[1]: Finished ensure-sysext.service. Jul 9 13:11:24.137704 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 9 13:11:24.138918 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 9 13:11:24.138994 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 9 13:11:24.141386 augenrules[1503]: No rules Jul 9 13:11:24.143090 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 9 13:11:24.144265 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 9 13:11:24.145982 systemd[1]: audit-rules.service: Deactivated successfully. Jul 9 13:11:24.147180 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 9 13:11:24.265164 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 9 13:11:24.339846 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 9 13:11:24.343751 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 9 13:11:24.349606 systemd-resolved[1412]: Positive Trust Anchors: Jul 9 13:11:24.349629 systemd-resolved[1412]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 9 13:11:24.349659 systemd-resolved[1412]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 9 13:11:24.356312 systemd-resolved[1412]: Defaulting to hostname 'linux'. Jul 9 13:11:24.357831 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 9 13:11:24.357944 kernel: mousedev: PS/2 mouse device common for all mice Jul 9 13:11:24.359136 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 9 13:11:24.362569 systemd-networkd[1499]: lo: Link UP Jul 9 13:11:24.362580 systemd-networkd[1499]: lo: Gained carrier Jul 9 13:11:24.368722 systemd-networkd[1499]: Enumeration completed Jul 9 13:11:24.368813 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 9 13:11:24.368983 systemd[1]: Reached target network.target - Network. Jul 9 13:11:24.372670 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 9 13:11:24.373323 systemd-networkd[1499]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 13:11:24.373333 systemd-networkd[1499]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 9 13:11:24.375203 systemd-networkd[1499]: eth0: Link UP Jul 9 13:11:24.375997 systemd-networkd[1499]: eth0: Gained carrier Jul 9 13:11:24.376018 systemd-networkd[1499]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 13:11:24.376669 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 9 13:11:24.378420 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 9 13:11:24.381044 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 9 13:11:24.382503 systemd[1]: Reached target sysinit.target - System Initialization. Jul 9 13:11:24.383957 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 9 13:11:24.385233 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 9 13:11:24.386484 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 9 13:11:24.386978 systemd-networkd[1499]: eth0: DHCPv4 address 10.0.0.120/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 9 13:11:24.388604 systemd-timesyncd[1504]: Network configuration changed, trying to establish connection. Jul 9 13:11:24.388902 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 9 13:11:24.388644 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 9 13:11:25.258478 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 9 13:11:25.258512 systemd[1]: Reached target paths.target - Path Units. Jul 9 13:11:25.258792 systemd-timesyncd[1504]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Jul 9 13:11:25.258854 systemd-timesyncd[1504]: Initial clock synchronization to Wed 2025-07-09 13:11:25.258357 UTC. Jul 9 13:11:25.258957 systemd-resolved[1412]: Clock change detected. Flushing caches. Jul 9 13:11:25.259431 systemd[1]: Reached target time-set.target - System Time Set. Jul 9 13:11:25.260797 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 9 13:11:25.262103 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 9 13:11:25.263282 kernel: ACPI: button: Power Button [PWRF] Jul 9 13:11:25.263834 systemd[1]: Reached target timers.target - Timer Units. Jul 9 13:11:25.266551 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 9 13:11:25.286360 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 9 13:11:25.290948 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 9 13:11:25.293691 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 9 13:11:25.295375 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 9 13:11:25.308830 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jul 9 13:11:25.309120 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 9 13:11:25.309313 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 9 13:11:25.308769 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 9 13:11:25.310429 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 9 13:11:25.312570 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 9 13:11:25.321264 systemd[1]: Reached target sockets.target - Socket Units. Jul 9 13:11:25.322225 systemd[1]: Reached target basic.target - Basic System. Jul 9 13:11:25.323193 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jul 9 13:11:25.323218 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 9 13:11:25.326367 systemd[1]: Starting containerd.service - containerd container runtime... Jul 9 13:11:25.328448 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 9 13:11:25.333373 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 9 13:11:25.343608 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 9 13:11:25.345854 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 9 13:11:25.347030 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 9 13:11:25.348497 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 9 13:11:25.350560 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 9 13:11:25.353611 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 9 13:11:25.357718 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 9 13:11:25.359198 jq[1554]: false Jul 9 13:11:25.363227 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 9 13:11:25.365882 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Refreshing passwd entry cache Jul 9 13:11:25.365869 oslogin_cache_refresh[1556]: Refreshing passwd entry cache Jul 9 13:11:25.372390 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 9 13:11:25.374460 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 9 13:11:25.374916 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jul 9 13:11:25.379106 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Failure getting users, quitting Jul 9 13:11:25.379106 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 9 13:11:25.379106 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Refreshing group entry cache Jul 9 13:11:25.378427 oslogin_cache_refresh[1556]: Failure getting users, quitting Jul 9 13:11:25.378460 oslogin_cache_refresh[1556]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 9 13:11:25.378535 oslogin_cache_refresh[1556]: Refreshing group entry cache Jul 9 13:11:25.380740 systemd[1]: Starting update-engine.service - Update Engine... Jul 9 13:11:25.387450 oslogin_cache_refresh[1556]: Failure getting groups, quitting Jul 9 13:11:25.391756 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Failure getting groups, quitting Jul 9 13:11:25.391756 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 9 13:11:25.382768 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 9 13:11:25.387461 oslogin_cache_refresh[1556]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 9 13:11:25.385599 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 9 13:11:25.387813 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 9 13:11:25.389402 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 9 13:11:25.389638 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 9 13:11:25.389947 systemd[1]: google-oslogin-cache.service: Deactivated successfully. 
Jul 9 13:11:25.390185 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 9 13:11:25.394226 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 9 13:11:25.402586 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 9 13:11:25.409660 systemd[1]: motdgen.service: Deactivated successfully. Jul 9 13:11:25.409983 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 9 13:11:25.432635 (ntainerd)[1582]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 9 13:11:25.440714 jq[1567]: true Jul 9 13:11:25.441022 extend-filesystems[1555]: Found /dev/vda6 Jul 9 13:11:25.454507 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 13:11:25.457846 update_engine[1563]: I20250709 13:11:25.457749 1563 main.cc:92] Flatcar Update Engine starting Jul 9 13:11:25.458796 tar[1573]: linux-amd64/LICENSE Jul 9 13:11:25.460381 tar[1573]: linux-amd64/helm Jul 9 13:11:25.472271 dbus-daemon[1552]: [system] SELinux support is enabled Jul 9 13:11:25.472413 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 9 13:11:25.476377 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 9 13:11:25.476402 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 9 13:11:25.477639 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 9 13:11:25.477657 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 9 13:11:25.496510 systemd[1]: Started update-engine.service - Update Engine. 
Jul 9 13:11:25.498744 update_engine[1563]: I20250709 13:11:25.496579 1563 update_check_scheduler.cc:74] Next update check in 10m27s Jul 9 13:11:25.680209 extend-filesystems[1555]: Found /dev/vda9 Jul 9 13:11:25.687735 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 9 13:11:25.694817 extend-filesystems[1555]: Checking size of /dev/vda9 Jul 9 13:11:25.694979 jq[1590]: true Jul 9 13:11:25.698013 systemd-logind[1562]: Watching system buttons on /dev/input/event2 (Power Button) Jul 9 13:11:25.698061 systemd-logind[1562]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 9 13:11:25.700756 systemd-logind[1562]: New seat seat0. Jul 9 13:11:25.703672 systemd[1]: Started systemd-logind.service - User Login Management. Jul 9 13:11:25.739271 extend-filesystems[1555]: Resized partition /dev/vda9 Jul 9 13:11:25.743608 extend-filesystems[1618]: resize2fs 1.47.2 (1-Jan-2025) Jul 9 13:11:25.751954 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 9 13:11:25.813946 kernel: kvm_amd: TSC scaling supported Jul 9 13:11:25.814056 kernel: kvm_amd: Nested Virtualization enabled Jul 9 13:11:25.814071 kernel: kvm_amd: Nested Paging enabled Jul 9 13:11:25.814084 kernel: kvm_amd: LBR virtualization supported Jul 9 13:11:25.815326 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 9 13:11:25.815353 kernel: kvm_amd: Virtual GIF supported Jul 9 13:11:25.845267 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 9 13:11:25.873402 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 13:11:25.888157 extend-filesystems[1618]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 9 13:11:25.888157 extend-filesystems[1618]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 9 13:11:25.888157 extend-filesystems[1618]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Jul 9 13:11:25.896397 extend-filesystems[1555]: Resized filesystem in /dev/vda9 Jul 9 13:11:25.901105 bash[1619]: Updated "/home/core/.ssh/authorized_keys" Jul 9 13:11:25.893702 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 9 13:11:25.895259 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 9 13:11:25.904310 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 9 13:11:25.947817 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 9 13:11:25.969289 kernel: EDAC MC: Ver: 3.0.0 Jul 9 13:11:25.996618 locksmithd[1594]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 9 13:11:26.109960 containerd[1582]: time="2025-07-09T13:11:26Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 9 13:11:26.110671 containerd[1582]: time="2025-07-09T13:11:26.110615392Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 9 13:11:26.124759 containerd[1582]: time="2025-07-09T13:11:26.124707274Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.638µs" Jul 9 13:11:26.124899 containerd[1582]: time="2025-07-09T13:11:26.124882623Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 9 13:11:26.124960 containerd[1582]: time="2025-07-09T13:11:26.124946843Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 9 13:11:26.125256 containerd[1582]: time="2025-07-09T13:11:26.125222871Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 9 13:11:26.125319 containerd[1582]: time="2025-07-09T13:11:26.125306718Z" 
level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 9 13:11:26.125432 containerd[1582]: time="2025-07-09T13:11:26.125418658Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 9 13:11:26.125619 containerd[1582]: time="2025-07-09T13:11:26.125593836Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 9 13:11:26.125684 containerd[1582]: time="2025-07-09T13:11:26.125667474Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 9 13:11:26.126139 containerd[1582]: time="2025-07-09T13:11:26.126113110Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 9 13:11:26.126213 containerd[1582]: time="2025-07-09T13:11:26.126195945Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 9 13:11:26.126319 containerd[1582]: time="2025-07-09T13:11:26.126298427Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 9 13:11:26.126401 containerd[1582]: time="2025-07-09T13:11:26.126384288Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 9 13:11:26.126623 containerd[1582]: time="2025-07-09T13:11:26.126598179Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 9 13:11:26.127039 containerd[1582]: time="2025-07-09T13:11:26.127014410Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jul 9 13:11:26.127445 containerd[1582]: time="2025-07-09T13:11:26.127403539Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 9 13:11:26.127518 containerd[1582]: time="2025-07-09T13:11:26.127500040Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 9 13:11:26.127630 containerd[1582]: time="2025-07-09T13:11:26.127608714Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 9 13:11:26.128145 containerd[1582]: time="2025-07-09T13:11:26.128075750Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 9 13:11:26.128325 containerd[1582]: time="2025-07-09T13:11:26.128292005Z" level=info msg="metadata content store policy set" policy=shared Jul 9 13:11:26.137405 containerd[1582]: time="2025-07-09T13:11:26.137357603Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 9 13:11:26.137455 containerd[1582]: time="2025-07-09T13:11:26.137418898Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 9 13:11:26.137455 containerd[1582]: time="2025-07-09T13:11:26.137437233Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 9 13:11:26.137455 containerd[1582]: time="2025-07-09T13:11:26.137449015Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 9 13:11:26.137509 containerd[1582]: time="2025-07-09T13:11:26.137462420Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 9 13:11:26.137509 containerd[1582]: time="2025-07-09T13:11:26.137474893Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 9 13:11:26.137509 containerd[1582]: time="2025-07-09T13:11:26.137490743Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 9 13:11:26.137509 containerd[1582]: time="2025-07-09T13:11:26.137505791Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 9 13:11:26.137593 containerd[1582]: time="2025-07-09T13:11:26.137519497Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 9 13:11:26.137593 containerd[1582]: time="2025-07-09T13:11:26.137529846Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 9 13:11:26.137593 containerd[1582]: time="2025-07-09T13:11:26.137539494Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 9 13:11:26.137593 containerd[1582]: time="2025-07-09T13:11:26.137553621Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 9 13:11:26.137970 containerd[1582]: time="2025-07-09T13:11:26.137903417Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 9 13:11:26.138070 containerd[1582]: time="2025-07-09T13:11:26.138042187Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 9 13:11:26.138126 containerd[1582]: time="2025-07-09T13:11:26.138095186Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 9 13:11:26.138174 containerd[1582]: time="2025-07-09T13:11:26.138126745Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 9 13:11:26.138174 containerd[1582]: time="2025-07-09T13:11:26.138150039Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events 
type=io.containerd.grpc.v1 Jul 9 13:11:26.138288 containerd[1582]: time="2025-07-09T13:11:26.138178372Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 9 13:11:26.138288 containerd[1582]: time="2025-07-09T13:11:26.138209921Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 9 13:11:26.138288 containerd[1582]: time="2025-07-09T13:11:26.138277979Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 9 13:11:26.138485 containerd[1582]: time="2025-07-09T13:11:26.138315780Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 9 13:11:26.138485 containerd[1582]: time="2025-07-09T13:11:26.138357799Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 9 13:11:26.138485 containerd[1582]: time="2025-07-09T13:11:26.138389598Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 9 13:11:26.138755 containerd[1582]: time="2025-07-09T13:11:26.138649285Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 9 13:11:26.138821 containerd[1582]: time="2025-07-09T13:11:26.138772857Z" level=info msg="Start snapshots syncer" Jul 9 13:11:26.138966 containerd[1582]: time="2025-07-09T13:11:26.138852967Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 9 13:11:26.139797 containerd[1582]: time="2025-07-09T13:11:26.139708291Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 9 13:11:26.140647 containerd[1582]: time="2025-07-09T13:11:26.139864353Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 9 13:11:26.140647 containerd[1582]: time="2025-07-09T13:11:26.140179324Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 9 13:11:26.140647 containerd[1582]: time="2025-07-09T13:11:26.140482783Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 9 13:11:26.140647 containerd[1582]: time="2025-07-09T13:11:26.140545430Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 9 13:11:26.140647 containerd[1582]: time="2025-07-09T13:11:26.140579124Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 9 13:11:26.140647 containerd[1582]: time="2025-07-09T13:11:26.140604070Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 9 13:11:26.140647 containerd[1582]: time="2025-07-09T13:11:26.140617896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 9 13:11:26.140912 containerd[1582]: time="2025-07-09T13:11:26.140667529Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 9 13:11:26.140912 containerd[1582]: time="2025-07-09T13:11:26.140699419Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 9 13:11:26.140912 containerd[1582]: time="2025-07-09T13:11:26.140767978Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 9 13:11:26.140912 containerd[1582]: time="2025-07-09T13:11:26.140795630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 9 13:11:26.140912 containerd[1582]: time="2025-07-09T13:11:26.140827780Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 9 13:11:26.140912 containerd[1582]: time="2025-07-09T13:11:26.140879737Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 9 13:11:26.141186 containerd[1582]: time="2025-07-09T13:11:26.140919011Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 9 13:11:26.141186 containerd[1582]: time="2025-07-09T13:11:26.140944609Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 9 13:11:26.141186 containerd[1582]: time="2025-07-09T13:11:26.140969285Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 9 13:11:26.141186 containerd[1582]: time="2025-07-09T13:11:26.140993891Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 9 13:11:26.141186 containerd[1582]: time="2025-07-09T13:11:26.141021092Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 9 13:11:26.141186 containerd[1582]: time="2025-07-09T13:11:26.141066808Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 9 13:11:26.141186 containerd[1582]: time="2025-07-09T13:11:26.141110981Z" level=info msg="runtime interface created" Jul 9 13:11:26.141186 containerd[1582]: time="2025-07-09T13:11:26.141126530Z" level=info msg="created NRI interface" Jul 9 13:11:26.141186 containerd[1582]: time="2025-07-09T13:11:26.141148982Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 9 13:11:26.141186 containerd[1582]: time="2025-07-09T13:11:26.141183567Z" level=info msg="Connect containerd service" Jul 9 13:11:26.141605 containerd[1582]: time="2025-07-09T13:11:26.141265781Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 9 13:11:26.143490 containerd[1582]: 
time="2025-07-09T13:11:26.143450417Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 9 13:11:26.283424 sshd_keygen[1571]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 9 13:11:26.335958 tar[1573]: linux-amd64/README.md Jul 9 13:11:26.359367 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 9 13:11:26.361449 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 9 13:11:26.365205 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 9 13:11:26.437072 systemd[1]: issuegen.service: Deactivated successfully. Jul 9 13:11:26.437412 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 9 13:11:26.441038 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 9 13:11:26.554735 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 9 13:11:26.558160 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 9 13:11:26.562541 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 9 13:11:26.563830 systemd[1]: Reached target getty.target - Login Prompts. Jul 9 13:11:26.571485 containerd[1582]: time="2025-07-09T13:11:26.571377340Z" level=info msg="Start subscribing containerd event" Jul 9 13:11:26.571751 containerd[1582]: time="2025-07-09T13:11:26.571530858Z" level=info msg="Start recovering state" Jul 9 13:11:26.571751 containerd[1582]: time="2025-07-09T13:11:26.571735161Z" level=info msg="Start event monitor" Jul 9 13:11:26.571826 containerd[1582]: time="2025-07-09T13:11:26.571758214Z" level=info msg="Start cni network conf syncer for default" Jul 9 13:11:26.571826 containerd[1582]: time="2025-07-09T13:11:26.571755149Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jul 9 13:11:26.571878 containerd[1582]: time="2025-07-09T13:11:26.571785325Z" level=info msg="Start streaming server" Jul 9 13:11:26.571878 containerd[1582]: time="2025-07-09T13:11:26.571851319Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 9 13:11:26.572255 containerd[1582]: time="2025-07-09T13:11:26.571866638Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 9 13:11:26.572255 containerd[1582]: time="2025-07-09T13:11:26.571936799Z" level=info msg="runtime interface starting up..." Jul 9 13:11:26.572255 containerd[1582]: time="2025-07-09T13:11:26.571944454Z" level=info msg="starting plugins..." Jul 9 13:11:26.572255 containerd[1582]: time="2025-07-09T13:11:26.571980281Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 9 13:11:26.572255 containerd[1582]: time="2025-07-09T13:11:26.572139700Z" level=info msg="containerd successfully booted in 0.462935s" Jul 9 13:11:26.572219 systemd[1]: Started containerd.service - containerd container runtime. Jul 9 13:11:26.706580 systemd-networkd[1499]: eth0: Gained IPv6LL Jul 9 13:11:26.710669 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 9 13:11:26.712929 systemd[1]: Reached target network-online.target - Network is Online. Jul 9 13:11:26.716394 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 9 13:11:26.719469 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 13:11:26.740148 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 9 13:11:26.765391 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 9 13:11:26.767308 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 9 13:11:26.767574 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Jul 9 13:11:26.771058 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 9 13:11:28.260341 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 13:11:28.262024 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 9 13:11:28.264156 systemd[1]: Startup finished in 3.923s (kernel) + 6.475s (initrd) + 5.327s (userspace) = 15.725s. Jul 9 13:11:28.271570 (kubelet)[1696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 13:11:28.937439 kubelet[1696]: E0709 13:11:28.937324 1696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 13:11:28.941670 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 13:11:28.941877 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 13:11:28.942293 systemd[1]: kubelet.service: Consumed 2.037s CPU time, 268.1M memory peak. Jul 9 13:11:29.699452 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 9 13:11:29.700681 systemd[1]: Started sshd@0-10.0.0.120:22-10.0.0.1:35410.service - OpenSSH per-connection server daemon (10.0.0.1:35410). Jul 9 13:11:29.777037 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 35410 ssh2: RSA SHA256:HAtV86zLPP9RIVggm/FTobjBgXFwFP5d+SRakJDDvWM Jul 9 13:11:29.779345 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:11:29.786011 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 9 13:11:29.787116 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Jul 9 13:11:29.793434 systemd-logind[1562]: New session 1 of user core. Jul 9 13:11:29.818230 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 9 13:11:29.821197 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 9 13:11:29.838513 (systemd)[1714]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 9 13:11:29.840809 systemd-logind[1562]: New session c1 of user core. Jul 9 13:11:30.015818 systemd[1714]: Queued start job for default target default.target. Jul 9 13:11:30.038289 systemd[1714]: Created slice app.slice - User Application Slice. Jul 9 13:11:30.038321 systemd[1714]: Reached target paths.target - Paths. Jul 9 13:11:30.038369 systemd[1714]: Reached target timers.target - Timers. Jul 9 13:11:30.041121 systemd[1714]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 9 13:11:30.054029 systemd[1714]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 9 13:11:30.054208 systemd[1714]: Reached target sockets.target - Sockets. Jul 9 13:11:30.054292 systemd[1714]: Reached target basic.target - Basic System. Jul 9 13:11:30.054347 systemd[1714]: Reached target default.target - Main User Target. Jul 9 13:11:30.054388 systemd[1714]: Startup finished in 207ms. Jul 9 13:11:30.054833 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 9 13:11:30.056779 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 9 13:11:30.126203 systemd[1]: Started sshd@1-10.0.0.120:22-10.0.0.1:35418.service - OpenSSH per-connection server daemon (10.0.0.1:35418). Jul 9 13:11:30.174514 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 35418 ssh2: RSA SHA256:HAtV86zLPP9RIVggm/FTobjBgXFwFP5d+SRakJDDvWM Jul 9 13:11:30.175845 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:11:30.179909 systemd-logind[1562]: New session 2 of user core. 
Jul 9 13:11:30.189364 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 9 13:11:30.241570 sshd[1728]: Connection closed by 10.0.0.1 port 35418 Jul 9 13:11:30.241984 sshd-session[1725]: pam_unix(sshd:session): session closed for user core Jul 9 13:11:30.254661 systemd[1]: sshd@1-10.0.0.120:22-10.0.0.1:35418.service: Deactivated successfully. Jul 9 13:11:30.256305 systemd[1]: session-2.scope: Deactivated successfully. Jul 9 13:11:30.257010 systemd-logind[1562]: Session 2 logged out. Waiting for processes to exit. Jul 9 13:11:30.259697 systemd[1]: Started sshd@2-10.0.0.120:22-10.0.0.1:35420.service - OpenSSH per-connection server daemon (10.0.0.1:35420). Jul 9 13:11:30.260254 systemd-logind[1562]: Removed session 2. Jul 9 13:11:30.305357 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 35420 ssh2: RSA SHA256:HAtV86zLPP9RIVggm/FTobjBgXFwFP5d+SRakJDDvWM Jul 9 13:11:30.306548 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:11:30.310607 systemd-logind[1562]: New session 3 of user core. Jul 9 13:11:30.321359 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 9 13:11:30.369732 sshd[1737]: Connection closed by 10.0.0.1 port 35420 Jul 9 13:11:30.369996 sshd-session[1734]: pam_unix(sshd:session): session closed for user core Jul 9 13:11:30.386865 systemd[1]: sshd@2-10.0.0.120:22-10.0.0.1:35420.service: Deactivated successfully. Jul 9 13:11:30.388669 systemd[1]: session-3.scope: Deactivated successfully. Jul 9 13:11:30.389372 systemd-logind[1562]: Session 3 logged out. Waiting for processes to exit. Jul 9 13:11:30.391817 systemd[1]: Started sshd@3-10.0.0.120:22-10.0.0.1:35422.service - OpenSSH per-connection server daemon (10.0.0.1:35422). Jul 9 13:11:30.392375 systemd-logind[1562]: Removed session 3. 
Jul 9 13:11:30.445346 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 35422 ssh2: RSA SHA256:HAtV86zLPP9RIVggm/FTobjBgXFwFP5d+SRakJDDvWM Jul 9 13:11:30.446650 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:11:30.450463 systemd-logind[1562]: New session 4 of user core. Jul 9 13:11:30.462340 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 9 13:11:30.514803 sshd[1747]: Connection closed by 10.0.0.1 port 35422 Jul 9 13:11:30.515134 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Jul 9 13:11:30.523666 systemd[1]: sshd@3-10.0.0.120:22-10.0.0.1:35422.service: Deactivated successfully. Jul 9 13:11:30.525263 systemd[1]: session-4.scope: Deactivated successfully. Jul 9 13:11:30.525943 systemd-logind[1562]: Session 4 logged out. Waiting for processes to exit. Jul 9 13:11:30.528325 systemd[1]: Started sshd@4-10.0.0.120:22-10.0.0.1:35424.service - OpenSSH per-connection server daemon (10.0.0.1:35424). Jul 9 13:11:30.528846 systemd-logind[1562]: Removed session 4. Jul 9 13:11:30.584147 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 35424 ssh2: RSA SHA256:HAtV86zLPP9RIVggm/FTobjBgXFwFP5d+SRakJDDvWM Jul 9 13:11:30.585540 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:11:30.589869 systemd-logind[1562]: New session 5 of user core. Jul 9 13:11:30.599359 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 9 13:11:30.661899 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 9 13:11:30.662385 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 13:11:30.687950 sudo[1757]: pam_unix(sudo:session): session closed for user root Jul 9 13:11:30.690046 sshd[1756]: Connection closed by 10.0.0.1 port 35424 Jul 9 13:11:30.690490 sshd-session[1753]: pam_unix(sshd:session): session closed for user core Jul 9 13:11:30.710559 systemd[1]: sshd@4-10.0.0.120:22-10.0.0.1:35424.service: Deactivated successfully. Jul 9 13:11:30.712386 systemd[1]: session-5.scope: Deactivated successfully. Jul 9 13:11:30.713158 systemd-logind[1562]: Session 5 logged out. Waiting for processes to exit. Jul 9 13:11:30.715884 systemd[1]: Started sshd@5-10.0.0.120:22-10.0.0.1:35432.service - OpenSSH per-connection server daemon (10.0.0.1:35432). Jul 9 13:11:30.716626 systemd-logind[1562]: Removed session 5. Jul 9 13:11:30.778424 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 35432 ssh2: RSA SHA256:HAtV86zLPP9RIVggm/FTobjBgXFwFP5d+SRakJDDvWM Jul 9 13:11:30.780146 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:11:30.784718 systemd-logind[1562]: New session 6 of user core. Jul 9 13:11:30.793370 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 9 13:11:30.847475 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 9 13:11:30.847861 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 13:11:31.116039 sudo[1768]: pam_unix(sudo:session): session closed for user root Jul 9 13:11:31.123039 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 9 13:11:31.123377 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 13:11:31.134351 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 9 13:11:31.181361 augenrules[1790]: No rules Jul 9 13:11:31.183140 systemd[1]: audit-rules.service: Deactivated successfully. Jul 9 13:11:31.183428 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 9 13:11:31.184501 sudo[1767]: pam_unix(sudo:session): session closed for user root Jul 9 13:11:31.186023 sshd[1766]: Connection closed by 10.0.0.1 port 35432 Jul 9 13:11:31.186413 sshd-session[1763]: pam_unix(sshd:session): session closed for user core Jul 9 13:11:31.203146 systemd[1]: sshd@5-10.0.0.120:22-10.0.0.1:35432.service: Deactivated successfully. Jul 9 13:11:31.205065 systemd[1]: session-6.scope: Deactivated successfully. Jul 9 13:11:31.205918 systemd-logind[1562]: Session 6 logged out. Waiting for processes to exit. Jul 9 13:11:31.208394 systemd[1]: Started sshd@6-10.0.0.120:22-10.0.0.1:35442.service - OpenSSH per-connection server daemon (10.0.0.1:35442). Jul 9 13:11:31.208977 systemd-logind[1562]: Removed session 6. Jul 9 13:11:31.271952 sshd[1799]: Accepted publickey for core from 10.0.0.1 port 35442 ssh2: RSA SHA256:HAtV86zLPP9RIVggm/FTobjBgXFwFP5d+SRakJDDvWM Jul 9 13:11:31.273163 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:11:31.277373 systemd-logind[1562]: New session 7 of user core. 
Jul 9 13:11:31.285385 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 9 13:11:31.338151 sudo[1803]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 9 13:11:31.338481 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 13:11:32.154139 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 9 13:11:32.179784 (dockerd)[1823]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 9 13:11:32.745826 dockerd[1823]: time="2025-07-09T13:11:32.745721774Z" level=info msg="Starting up" Jul 9 13:11:32.746736 dockerd[1823]: time="2025-07-09T13:11:32.746702914Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 9 13:11:32.761162 dockerd[1823]: time="2025-07-09T13:11:32.761121779Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 9 13:11:33.165925 dockerd[1823]: time="2025-07-09T13:11:33.165794751Z" level=info msg="Loading containers: start." Jul 9 13:11:33.176263 kernel: Initializing XFRM netlink socket Jul 9 13:11:33.503908 systemd-networkd[1499]: docker0: Link UP Jul 9 13:11:33.509644 dockerd[1823]: time="2025-07-09T13:11:33.509590869Z" level=info msg="Loading containers: done." Jul 9 13:11:33.525900 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck729223431-merged.mount: Deactivated successfully. 
Jul 9 13:11:33.527794 dockerd[1823]: time="2025-07-09T13:11:33.527739908Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 9 13:11:33.527878 dockerd[1823]: time="2025-07-09T13:11:33.527858340Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 9 13:11:33.527994 dockerd[1823]: time="2025-07-09T13:11:33.527970040Z" level=info msg="Initializing buildkit" Jul 9 13:11:33.556809 dockerd[1823]: time="2025-07-09T13:11:33.556770701Z" level=info msg="Completed buildkit initialization" Jul 9 13:11:33.563930 dockerd[1823]: time="2025-07-09T13:11:33.563869421Z" level=info msg="Daemon has completed initialization" Jul 9 13:11:33.564169 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 9 13:11:33.564376 dockerd[1823]: time="2025-07-09T13:11:33.564014774Z" level=info msg="API listen on /run/docker.sock" Jul 9 13:11:34.332883 containerd[1582]: time="2025-07-09T13:11:34.332803352Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 9 13:11:34.927699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2458996935.mount: Deactivated successfully. 
Jul 9 13:11:36.329893 containerd[1582]: time="2025-07-09T13:11:36.329831870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:11:36.330623 containerd[1582]: time="2025-07-09T13:11:36.330485656Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099"
Jul 9 13:11:36.331683 containerd[1582]: time="2025-07-09T13:11:36.331644368Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:11:36.334107 containerd[1582]: time="2025-07-09T13:11:36.334076699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:11:36.335042 containerd[1582]: time="2025-07-09T13:11:36.335012153Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 2.002152325s"
Jul 9 13:11:36.335090 containerd[1582]: time="2025-07-09T13:11:36.335044544Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\""
Jul 9 13:11:36.335703 containerd[1582]: time="2025-07-09T13:11:36.335669726Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jul 9 13:11:38.139658 containerd[1582]: time="2025-07-09T13:11:38.139591929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:11:38.140405 containerd[1582]: time="2025-07-09T13:11:38.140343489Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946"
Jul 9 13:11:38.141507 containerd[1582]: time="2025-07-09T13:11:38.141473367Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:11:38.144078 containerd[1582]: time="2025-07-09T13:11:38.144038457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:11:38.144974 containerd[1582]: time="2025-07-09T13:11:38.144949345Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.80925361s"
Jul 9 13:11:38.145020 containerd[1582]: time="2025-07-09T13:11:38.144977788Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\""
Jul 9 13:11:38.145556 containerd[1582]: time="2025-07-09T13:11:38.145522048Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jul 9 13:11:39.183662 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 9 13:11:39.185749 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 13:11:39.585137 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 13:11:39.650091 (kubelet)[2113]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 9 13:11:40.414201 containerd[1582]: time="2025-07-09T13:11:40.414133675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:11:40.415374 containerd[1582]: time="2025-07-09T13:11:40.415291195Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055"
Jul 9 13:11:40.416569 containerd[1582]: time="2025-07-09T13:11:40.416509610Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:11:40.419375 containerd[1582]: time="2025-07-09T13:11:40.419325760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:11:40.420931 containerd[1582]: time="2025-07-09T13:11:40.420172257Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 2.274615764s"
Jul 9 13:11:40.420931 containerd[1582]: time="2025-07-09T13:11:40.420205579Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\""
Jul 9 13:11:40.421148 containerd[1582]: time="2025-07-09T13:11:40.421111879Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jul 9 13:11:40.454401 kubelet[2113]: E0709 13:11:40.454329 2113 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 9 13:11:40.461877 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 9 13:11:40.462084 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 9 13:11:40.462534 systemd[1]: kubelet.service: Consumed 1.199s CPU time, 111.1M memory peak.
Jul 9 13:11:41.591960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1922801651.mount: Deactivated successfully.
Jul 9 13:11:42.599148 containerd[1582]: time="2025-07-09T13:11:42.599055613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:11:42.599937 containerd[1582]: time="2025-07-09T13:11:42.599899204Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746"
Jul 9 13:11:42.601219 containerd[1582]: time="2025-07-09T13:11:42.601145852Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:11:42.602980 containerd[1582]: time="2025-07-09T13:11:42.602939515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:11:42.603499 containerd[1582]: time="2025-07-09T13:11:42.603444572Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 2.182294622s"
Jul 9 13:11:42.603499 containerd[1582]: time="2025-07-09T13:11:42.603492372Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\""
Jul 9 13:11:42.604107 containerd[1582]: time="2025-07-09T13:11:42.604002909Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jul 9 13:11:43.112057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1216430953.mount: Deactivated successfully.
Jul 9 13:11:44.727762 containerd[1582]: time="2025-07-09T13:11:44.727698968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:11:44.728642 containerd[1582]: time="2025-07-09T13:11:44.728609596Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Jul 9 13:11:44.730121 containerd[1582]: time="2025-07-09T13:11:44.730091314Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:11:44.732963 containerd[1582]: time="2025-07-09T13:11:44.732905190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:11:44.733956 containerd[1582]: time="2025-07-09T13:11:44.733911517Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.12987787s"
Jul 9 13:11:44.733956 containerd[1582]: time="2025-07-09T13:11:44.733953826Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jul 9 13:11:44.734605 containerd[1582]: time="2025-07-09T13:11:44.734459835Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 9 13:11:45.217574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2608845391.mount: Deactivated successfully.
Jul 9 13:11:45.223377 containerd[1582]: time="2025-07-09T13:11:45.223308636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 9 13:11:45.224059 containerd[1582]: time="2025-07-09T13:11:45.224013468Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Jul 9 13:11:45.225231 containerd[1582]: time="2025-07-09T13:11:45.225190896Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 9 13:11:45.227548 containerd[1582]: time="2025-07-09T13:11:45.227501127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 9 13:11:45.228275 containerd[1582]: time="2025-07-09T13:11:45.228203094Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 493.714374ms"
Jul 9 13:11:45.228344 containerd[1582]: time="2025-07-09T13:11:45.228272584Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 9 13:11:45.228793 containerd[1582]: time="2025-07-09T13:11:45.228764095Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jul 9 13:11:45.769688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1006489307.mount: Deactivated successfully.
Jul 9 13:11:47.523548 containerd[1582]: time="2025-07-09T13:11:47.523476200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:11:47.524220 containerd[1582]: time="2025-07-09T13:11:47.524155985Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175"
Jul 9 13:11:47.525312 containerd[1582]: time="2025-07-09T13:11:47.525264884Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:11:47.528020 containerd[1582]: time="2025-07-09T13:11:47.527990304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:11:47.529103 containerd[1582]: time="2025-07-09T13:11:47.529047656Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.300257192s"
Jul 9 13:11:47.529103 containerd[1582]: time="2025-07-09T13:11:47.529098792Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Jul 9 13:11:50.683677 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 9 13:11:50.685432 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 13:11:50.851962 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 9 13:11:50.852083 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 9 13:11:50.852469 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 13:11:50.852714 systemd[1]: kubelet.service: Consumed 127ms CPU time, 87.6M memory peak.
Jul 9 13:11:50.855890 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 13:11:50.879947 systemd[1]: Reload requested from client PID 2275 ('systemctl') (unit session-7.scope)...
Jul 9 13:11:50.879964 systemd[1]: Reloading...
Jul 9 13:11:50.961718 zram_generator::config[2319]: No configuration found.
Jul 9 13:11:51.522212 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 13:11:51.641324 systemd[1]: Reloading finished in 760 ms.
Jul 9 13:11:51.705919 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 9 13:11:51.706016 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 9 13:11:51.706371 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 13:11:51.706413 systemd[1]: kubelet.service: Consumed 162ms CPU time, 98.3M memory peak.
Jul 9 13:11:51.708017 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 13:11:51.901104 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 13:11:51.913697 (kubelet)[2366]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 9 13:11:51.962195 kubelet[2366]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 9 13:11:51.962195 kubelet[2366]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 9 13:11:51.962195 kubelet[2366]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 9 13:11:51.962643 kubelet[2366]: I0709 13:11:51.962253 2366 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 9 13:11:52.470936 kubelet[2366]: I0709 13:11:52.470863 2366 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 9 13:11:52.470936 kubelet[2366]: I0709 13:11:52.470910 2366 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 9 13:11:52.471293 kubelet[2366]: I0709 13:11:52.471262 2366 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 9 13:11:52.507895 kubelet[2366]: E0709 13:11:52.507812 2366 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.120:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jul 9 13:11:52.508945 kubelet[2366]: I0709 13:11:52.508917 2366 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 9 13:11:52.516468 kubelet[2366]: I0709 13:11:52.516407 2366 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 9 13:11:52.522185 kubelet[2366]: I0709 13:11:52.522155 2366 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 9 13:11:52.522470 kubelet[2366]: I0709 13:11:52.522431 2366 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 9 13:11:52.523171 kubelet[2366]: I0709 13:11:52.522457 2366 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 9 13:11:52.523171 kubelet[2366]: I0709 13:11:52.522660 2366 topology_manager.go:138] "Creating topology manager with none policy"
Jul 9 13:11:52.523171 kubelet[2366]: I0709 13:11:52.522671 2366 container_manager_linux.go:303] "Creating device plugin manager"
Jul 9 13:11:53.562782 kubelet[2366]: I0709 13:11:53.562729 2366 state_mem.go:36] "Initialized new in-memory state store"
Jul 9 13:11:53.566693 kubelet[2366]: I0709 13:11:53.566653 2366 kubelet.go:480] "Attempting to sync node with API server"
Jul 9 13:11:53.566693 kubelet[2366]: I0709 13:11:53.566680 2366 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 9 13:11:53.566858 kubelet[2366]: I0709 13:11:53.566728 2366 kubelet.go:386] "Adding apiserver pod source"
Jul 9 13:11:53.566858 kubelet[2366]: I0709 13:11:53.566770 2366 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 9 13:11:53.573151 kubelet[2366]: E0709 13:11:53.573064 2366 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 9 13:11:53.573503 kubelet[2366]: I0709 13:11:53.573488 2366 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Jul 9 13:11:53.574366 kubelet[2366]: I0709 13:11:53.574324 2366 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 9 13:11:53.576254 kubelet[2366]: W0709 13:11:53.576209 2366 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 9 13:11:53.576585 kubelet[2366]: E0709 13:11:53.576537 2366 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 9 13:11:53.579626 kubelet[2366]: I0709 13:11:53.579592 2366 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 9 13:11:53.579682 kubelet[2366]: I0709 13:11:53.579662 2366 server.go:1289] "Started kubelet"
Jul 9 13:11:53.579874 kubelet[2366]: I0709 13:11:53.579779 2366 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 9 13:11:53.580845 kubelet[2366]: I0709 13:11:53.580036 2366 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 9 13:11:53.580845 kubelet[2366]: I0709 13:11:53.580550 2366 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 9 13:11:53.581145 kubelet[2366]: I0709 13:11:53.580974 2366 server.go:317] "Adding debug handlers to kubelet server"
Jul 9 13:11:53.584552 kubelet[2366]: I0709 13:11:53.584260 2366 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 9 13:11:53.584822 kubelet[2366]: I0709 13:11:53.584798 2366 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 9 13:11:53.586485 kubelet[2366]: E0709 13:11:53.586432 2366 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 9 13:11:53.586527 kubelet[2366]: I0709 13:11:53.586503 2366 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 9 13:11:53.586750 kubelet[2366]: I0709 13:11:53.586721 2366 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 9 13:11:53.587948 kubelet[2366]: I0709 13:11:53.586796 2366 reconciler.go:26] "Reconciler: start to sync state"
Jul 9 13:11:53.587948 kubelet[2366]: E0709 13:11:53.587443 2366 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 9 13:11:53.587948 kubelet[2366]: E0709 13:11:53.585904 2366 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.120:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.120:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18509764a5b3077a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-09 13:11:53.579620218 +0000 UTC m=+1.659075427,LastTimestamp:2025-07-09 13:11:53.579620218 +0000 UTC m=+1.659075427,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 9 13:11:53.587948 kubelet[2366]: I0709 13:11:53.587662 2366 factory.go:223] Registration of the systemd container factory successfully
Jul 9 13:11:53.587948 kubelet[2366]: E0709 13:11:53.587720 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.120:6443: connect: connection refused" interval="200ms"
Jul 9 13:11:53.587948 kubelet[2366]: I0709 13:11:53.587738 2366 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 9 13:11:53.588952 kubelet[2366]: I0709 13:11:53.588936 2366 factory.go:223] Registration of the containerd container factory successfully
Jul 9 13:11:53.604108 kubelet[2366]: I0709 13:11:53.604061 2366 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 9 13:11:53.604108 kubelet[2366]: I0709 13:11:53.604079 2366 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 9 13:11:53.604108 kubelet[2366]: I0709 13:11:53.604108 2366 state_mem.go:36] "Initialized new in-memory state store"
Jul 9 13:11:53.606083 kubelet[2366]: I0709 13:11:53.606015 2366 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 9 13:11:53.607349 kubelet[2366]: I0709 13:11:53.607314 2366 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 9 13:11:53.607405 kubelet[2366]: I0709 13:11:53.607365 2366 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 9 13:11:53.607405 kubelet[2366]: I0709 13:11:53.607403 2366 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 9 13:11:53.607462 kubelet[2366]: I0709 13:11:53.607418 2366 kubelet.go:2436] "Starting kubelet main sync loop"
Jul 9 13:11:53.607484 kubelet[2366]: E0709 13:11:53.607467 2366 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 9 13:11:53.607633 kubelet[2366]: I0709 13:11:53.607606 2366 policy_none.go:49] "None policy: Start"
Jul 9 13:11:53.607665 kubelet[2366]: I0709 13:11:53.607637 2366 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 9 13:11:53.607665 kubelet[2366]: I0709 13:11:53.607655 2366 state_mem.go:35] "Initializing new in-memory state store"
Jul 9 13:11:53.609801 kubelet[2366]: E0709 13:11:53.609631 2366 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jul 9 13:11:53.614013 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 9 13:11:53.627287 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 9 13:11:53.633260 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 9 13:11:53.644335 kubelet[2366]: E0709 13:11:53.644289 2366 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jul 9 13:11:53.644593 kubelet[2366]: I0709 13:11:53.644566 2366 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 9 13:11:53.644659 kubelet[2366]: I0709 13:11:53.644593 2366 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 9 13:11:53.645139 kubelet[2366]: I0709 13:11:53.644965 2366 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 9 13:11:53.645781 kubelet[2366]: E0709 13:11:53.645757 2366 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 9 13:11:53.645838 kubelet[2366]: E0709 13:11:53.645802 2366 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 9 13:11:53.719230 systemd[1]: Created slice kubepods-burstable-podf5364074eda10c42ad5c93a27b5559fb.slice - libcontainer container kubepods-burstable-podf5364074eda10c42ad5c93a27b5559fb.slice.
Jul 9 13:11:53.742138 kubelet[2366]: E0709 13:11:53.742070 2366 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 9 13:11:53.745534 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice.
Jul 9 13:11:53.746220 kubelet[2366]: I0709 13:11:53.746166 2366 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 9 13:11:53.746788 kubelet[2366]: E0709 13:11:53.746752 2366 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.120:6443/api/v1/nodes\": dial tcp 10.0.0.120:6443: connect: connection refused" node="localhost"
Jul 9 13:11:53.758502 kubelet[2366]: E0709 13:11:53.758481 2366 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 9 13:11:53.760690 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice.
Jul 9 13:11:53.762627 kubelet[2366]: E0709 13:11:53.762603 2366 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 9 13:11:53.788352 kubelet[2366]: E0709 13:11:53.788297 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.120:6443: connect: connection refused" interval="400ms"
Jul 9 13:11:53.888778 kubelet[2366]: I0709 13:11:53.888631 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5364074eda10c42ad5c93a27b5559fb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f5364074eda10c42ad5c93a27b5559fb\") " pod="kube-system/kube-apiserver-localhost"
Jul 9 13:11:53.888778 kubelet[2366]: I0709 13:11:53.888675 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 9 13:11:53.888778 kubelet[2366]: I0709 13:11:53.888694 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 9 13:11:53.888778 kubelet[2366]: I0709 13:11:53.888708 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost"
Jul 9 13:11:53.888778 kubelet[2366]: I0709 13:11:53.888727 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5364074eda10c42ad5c93a27b5559fb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f5364074eda10c42ad5c93a27b5559fb\") " pod="kube-system/kube-apiserver-localhost"
Jul 9 13:11:53.889124 kubelet[2366]: I0709 13:11:53.888794 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5364074eda10c42ad5c93a27b5559fb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f5364074eda10c42ad5c93a27b5559fb\") " pod="kube-system/kube-apiserver-localhost"
Jul 9 13:11:53.889124 kubelet[2366]: I0709 13:11:53.888830 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 9 13:11:53.889124 kubelet[2366]: I0709 13:11:53.888843 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 9 13:11:53.889124 kubelet[2366]: I0709 13:11:53.888859 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 9 13:11:53.948685 kubelet[2366]: I0709 13:11:53.948643 2366 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 9 13:11:53.949004 kubelet[2366]: E0709 13:11:53.948971 2366 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.120:6443/api/v1/nodes\": dial tcp 10.0.0.120:6443: connect: connection refused" node="localhost"
Jul 9 13:11:54.042938 kubelet[2366]: E0709 13:11:54.042894 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 13:11:54.043800 containerd[1582]: time="2025-07-09T13:11:54.043717841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f5364074eda10c42ad5c93a27b5559fb,Namespace:kube-system,Attempt:0,}"
Jul 9 13:11:54.058945 kubelet[2366]: E0709 13:11:54.058904 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 13:11:54.059550 containerd[1582]: time="2025-07-09T13:11:54.059511915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}"
Jul 9 13:11:54.063988 kubelet[2366]: E0709 13:11:54.063950 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 13:11:54.064429 containerd[1582]: time="2025-07-09T13:11:54.064394981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}"
Jul 9 13:11:54.066444 containerd[1582]: time="2025-07-09T13:11:54.066383750Z" level=info msg="connecting to shim 088a3224ab3e5723032780f015fda21ad2437f9aa1b4f231af6820b11f9df06e" address="unix:///run/containerd/s/0cd8b99483a4af60821c78a97914c1035d34d125c65ab32e2b3c750c0841eedd" namespace=k8s.io protocol=ttrpc version=3
Jul 9 13:11:54.092906 containerd[1582]: time="2025-07-09T13:11:54.092232266Z" level=info msg="connecting to shim cd1f6280115b4ad6023f00b9bc910db463110e6e121bf3a39b063d1ea9b5e78b" address="unix:///run/containerd/s/0d820d2085bfa89afa32bf09a917023624f5b05d6b4f082161ef8f53bdd756a9" namespace=k8s.io protocol=ttrpc version=3
Jul 9 13:11:54.095441 systemd[1]: Started cri-containerd-088a3224ab3e5723032780f015fda21ad2437f9aa1b4f231af6820b11f9df06e.scope - libcontainer container 088a3224ab3e5723032780f015fda21ad2437f9aa1b4f231af6820b11f9df06e.
Jul 9 13:11:54.100343 containerd[1582]: time="2025-07-09T13:11:54.100281648Z" level=info msg="connecting to shim b9bfe0c19030dbdac96e766fa67c79a153d4ed77faf842c8790c623ee5deca81" address="unix:///run/containerd/s/d6b57fcbfd0d0b498560a58252ac1d43379026fe5249d6c2be7781bfbb8814e4" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:11:54.120481 systemd[1]: Started cri-containerd-cd1f6280115b4ad6023f00b9bc910db463110e6e121bf3a39b063d1ea9b5e78b.scope - libcontainer container cd1f6280115b4ad6023f00b9bc910db463110e6e121bf3a39b063d1ea9b5e78b. Jul 9 13:11:54.125048 systemd[1]: Started cri-containerd-b9bfe0c19030dbdac96e766fa67c79a153d4ed77faf842c8790c623ee5deca81.scope - libcontainer container b9bfe0c19030dbdac96e766fa67c79a153d4ed77faf842c8790c623ee5deca81. Jul 9 13:11:54.145982 containerd[1582]: time="2025-07-09T13:11:54.145869365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f5364074eda10c42ad5c93a27b5559fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"088a3224ab3e5723032780f015fda21ad2437f9aa1b4f231af6820b11f9df06e\"" Jul 9 13:11:54.146873 kubelet[2366]: E0709 13:11:54.146839 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:11:54.151671 containerd[1582]: time="2025-07-09T13:11:54.151640637Z" level=info msg="CreateContainer within sandbox \"088a3224ab3e5723032780f015fda21ad2437f9aa1b4f231af6820b11f9df06e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 9 13:11:54.161307 containerd[1582]: time="2025-07-09T13:11:54.160659216Z" level=info msg="Container b9c881224fe63ff09cc5adcc188f0d29b27d375cac8f445e20c2c777c4f08b18: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:11:54.168110 containerd[1582]: time="2025-07-09T13:11:54.168064622Z" level=info msg="CreateContainer within sandbox \"088a3224ab3e5723032780f015fda21ad2437f9aa1b4f231af6820b11f9df06e\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b9c881224fe63ff09cc5adcc188f0d29b27d375cac8f445e20c2c777c4f08b18\"" Jul 9 13:11:54.169309 containerd[1582]: time="2025-07-09T13:11:54.169282034Z" level=info msg="StartContainer for \"b9c881224fe63ff09cc5adcc188f0d29b27d375cac8f445e20c2c777c4f08b18\"" Jul 9 13:11:54.170276 containerd[1582]: time="2025-07-09T13:11:54.170252864Z" level=info msg="connecting to shim b9c881224fe63ff09cc5adcc188f0d29b27d375cac8f445e20c2c777c4f08b18" address="unix:///run/containerd/s/0cd8b99483a4af60821c78a97914c1035d34d125c65ab32e2b3c750c0841eedd" protocol=ttrpc version=3 Jul 9 13:11:54.172779 containerd[1582]: time="2025-07-09T13:11:54.172713899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9bfe0c19030dbdac96e766fa67c79a153d4ed77faf842c8790c623ee5deca81\"" Jul 9 13:11:54.173546 kubelet[2366]: E0709 13:11:54.173511 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:11:54.174809 containerd[1582]: time="2025-07-09T13:11:54.174734678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd1f6280115b4ad6023f00b9bc910db463110e6e121bf3a39b063d1ea9b5e78b\"" Jul 9 13:11:54.175550 kubelet[2366]: E0709 13:11:54.175528 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:11:54.178650 containerd[1582]: time="2025-07-09T13:11:54.178588384Z" level=info msg="CreateContainer within sandbox \"b9bfe0c19030dbdac96e766fa67c79a153d4ed77faf842c8790c623ee5deca81\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 9 13:11:54.180851 containerd[1582]: time="2025-07-09T13:11:54.180828474Z" level=info msg="CreateContainer within sandbox \"cd1f6280115b4ad6023f00b9bc910db463110e6e121bf3a39b063d1ea9b5e78b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 9 13:11:54.188703 kubelet[2366]: E0709 13:11:54.188656 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.120:6443: connect: connection refused" interval="800ms" Jul 9 13:11:54.191803 containerd[1582]: time="2025-07-09T13:11:54.191758558Z" level=info msg="Container 24a5f15919d98662a05f1467d1e7c723b01634faa35e9efc4a5cbd136c1c58d9: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:11:54.192506 systemd[1]: Started cri-containerd-b9c881224fe63ff09cc5adcc188f0d29b27d375cac8f445e20c2c777c4f08b18.scope - libcontainer container b9c881224fe63ff09cc5adcc188f0d29b27d375cac8f445e20c2c777c4f08b18. 
Jul 9 13:11:54.195350 containerd[1582]: time="2025-07-09T13:11:54.195310478Z" level=info msg="Container da64f8d0d910585273310463b01685b4a6b16ba14b4e5a9958584686a3c47274: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:11:54.200014 containerd[1582]: time="2025-07-09T13:11:54.199979031Z" level=info msg="CreateContainer within sandbox \"b9bfe0c19030dbdac96e766fa67c79a153d4ed77faf842c8790c623ee5deca81\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"24a5f15919d98662a05f1467d1e7c723b01634faa35e9efc4a5cbd136c1c58d9\"" Jul 9 13:11:54.200592 containerd[1582]: time="2025-07-09T13:11:54.200545453Z" level=info msg="StartContainer for \"24a5f15919d98662a05f1467d1e7c723b01634faa35e9efc4a5cbd136c1c58d9\"" Jul 9 13:11:54.201500 containerd[1582]: time="2025-07-09T13:11:54.201476258Z" level=info msg="connecting to shim 24a5f15919d98662a05f1467d1e7c723b01634faa35e9efc4a5cbd136c1c58d9" address="unix:///run/containerd/s/d6b57fcbfd0d0b498560a58252ac1d43379026fe5249d6c2be7781bfbb8814e4" protocol=ttrpc version=3 Jul 9 13:11:54.203790 containerd[1582]: time="2025-07-09T13:11:54.203753959Z" level=info msg="CreateContainer within sandbox \"cd1f6280115b4ad6023f00b9bc910db463110e6e121bf3a39b063d1ea9b5e78b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"da64f8d0d910585273310463b01685b4a6b16ba14b4e5a9958584686a3c47274\"" Jul 9 13:11:54.204273 containerd[1582]: time="2025-07-09T13:11:54.204211056Z" level=info msg="StartContainer for \"da64f8d0d910585273310463b01685b4a6b16ba14b4e5a9958584686a3c47274\"" Jul 9 13:11:54.205619 containerd[1582]: time="2025-07-09T13:11:54.205182407Z" level=info msg="connecting to shim da64f8d0d910585273310463b01685b4a6b16ba14b4e5a9958584686a3c47274" address="unix:///run/containerd/s/0d820d2085bfa89afa32bf09a917023624f5b05d6b4f082161ef8f53bdd756a9" protocol=ttrpc version=3 Jul 9 13:11:54.224401 systemd[1]: Started 
cri-containerd-24a5f15919d98662a05f1467d1e7c723b01634faa35e9efc4a5cbd136c1c58d9.scope - libcontainer container 24a5f15919d98662a05f1467d1e7c723b01634faa35e9efc4a5cbd136c1c58d9. Jul 9 13:11:54.228455 systemd[1]: Started cri-containerd-da64f8d0d910585273310463b01685b4a6b16ba14b4e5a9958584686a3c47274.scope - libcontainer container da64f8d0d910585273310463b01685b4a6b16ba14b4e5a9958584686a3c47274. Jul 9 13:11:54.246159 containerd[1582]: time="2025-07-09T13:11:54.246114235Z" level=info msg="StartContainer for \"b9c881224fe63ff09cc5adcc188f0d29b27d375cac8f445e20c2c777c4f08b18\" returns successfully" Jul 9 13:11:54.282940 containerd[1582]: time="2025-07-09T13:11:54.282833102Z" level=info msg="StartContainer for \"24a5f15919d98662a05f1467d1e7c723b01634faa35e9efc4a5cbd136c1c58d9\" returns successfully" Jul 9 13:11:54.289969 containerd[1582]: time="2025-07-09T13:11:54.289914330Z" level=info msg="StartContainer for \"da64f8d0d910585273310463b01685b4a6b16ba14b4e5a9958584686a3c47274\" returns successfully" Jul 9 13:11:54.352687 kubelet[2366]: I0709 13:11:54.352638 2366 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 13:11:54.616551 kubelet[2366]: E0709 13:11:54.616515 2366 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 13:11:54.618055 kubelet[2366]: E0709 13:11:54.618036 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:11:54.622161 kubelet[2366]: E0709 13:11:54.622123 2366 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 13:11:54.623119 kubelet[2366]: E0709 13:11:54.623083 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:11:54.624963 kubelet[2366]: E0709 13:11:54.624933 2366 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 13:11:54.625151 kubelet[2366]: E0709 13:11:54.625139 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:11:55.627847 kubelet[2366]: E0709 13:11:55.627647 2366 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 13:11:55.627847 kubelet[2366]: E0709 13:11:55.627783 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:11:55.628690 kubelet[2366]: E0709 13:11:55.628570 2366 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 13:11:55.628690 kubelet[2366]: E0709 13:11:55.628646 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:11:56.470218 kubelet[2366]: E0709 13:11:56.470144 2366 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 9 13:11:56.568946 kubelet[2366]: I0709 13:11:56.568890 2366 apiserver.go:52] "Watching apiserver" Jul 9 13:11:56.587572 kubelet[2366]: I0709 13:11:56.587537 2366 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 9 13:11:56.628583 kubelet[2366]: E0709 13:11:56.628547 2366 kubelet.go:3305] "No need to create a mirror pod, since failed to get 
node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 13:11:56.629036 kubelet[2366]: E0709 13:11:56.628679 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:11:56.658222 kubelet[2366]: I0709 13:11:56.658021 2366 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 9 13:11:56.658222 kubelet[2366]: E0709 13:11:56.658068 2366 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 9 13:11:56.688746 kubelet[2366]: I0709 13:11:56.688125 2366 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 9 13:11:56.693485 kubelet[2366]: E0709 13:11:56.693443 2366 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 9 13:11:56.693485 kubelet[2366]: I0709 13:11:56.693478 2366 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 9 13:11:56.695174 kubelet[2366]: E0709 13:11:56.695128 2366 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 9 13:11:56.695174 kubelet[2366]: I0709 13:11:56.695170 2366 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 9 13:11:56.696865 kubelet[2366]: E0709 13:11:56.696800 2366 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 9 13:11:57.821123 
kubelet[2366]: I0709 13:11:57.821083 2366 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 9 13:11:57.825828 kubelet[2366]: E0709 13:11:57.825802 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:11:58.352579 systemd[1]: Reload requested from client PID 2652 ('systemctl') (unit session-7.scope)... Jul 9 13:11:58.352596 systemd[1]: Reloading... Jul 9 13:11:58.433276 zram_generator::config[2695]: No configuration found. Jul 9 13:11:58.567914 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 13:11:58.630525 kubelet[2366]: E0709 13:11:58.630423 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:11:58.703474 systemd[1]: Reloading finished in 350 ms. Jul 9 13:11:58.733945 kubelet[2366]: I0709 13:11:58.733849 2366 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 13:11:58.734047 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 13:11:58.751804 systemd[1]: kubelet.service: Deactivated successfully. Jul 9 13:11:58.752145 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 13:11:58.752204 systemd[1]: kubelet.service: Consumed 1.112s CPU time, 132.6M memory peak. Jul 9 13:11:58.754188 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 13:11:58.949992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 9 13:11:58.954967 (kubelet)[2740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 13:11:58.999148 kubelet[2740]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 13:11:58.999148 kubelet[2740]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 9 13:11:58.999148 kubelet[2740]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 13:11:58.999599 kubelet[2740]: I0709 13:11:58.999186 2740 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 13:11:59.005561 kubelet[2740]: I0709 13:11:59.005512 2740 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 9 13:11:59.005561 kubelet[2740]: I0709 13:11:59.005540 2740 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 13:11:59.005781 kubelet[2740]: I0709 13:11:59.005750 2740 server.go:956] "Client rotation is on, will bootstrap in background" Jul 9 13:11:59.006920 kubelet[2740]: I0709 13:11:59.006891 2740 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 9 13:11:59.009613 kubelet[2740]: I0709 13:11:59.009568 2740 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 13:11:59.014743 kubelet[2740]: I0709 13:11:59.014713 2740 server.go:1446] "Using cgroup driver setting received from the CRI runtime" 
cgroupDriver="systemd" Jul 9 13:11:59.019576 kubelet[2740]: I0709 13:11:59.019544 2740 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 9 13:11:59.019802 kubelet[2740]: I0709 13:11:59.019759 2740 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 13:11:59.019964 kubelet[2740]: I0709 13:11:59.019789 2740 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion
":2} Jul 9 13:11:59.020063 kubelet[2740]: I0709 13:11:59.019965 2740 topology_manager.go:138] "Creating topology manager with none policy" Jul 9 13:11:59.020063 kubelet[2740]: I0709 13:11:59.019975 2740 container_manager_linux.go:303] "Creating device plugin manager" Jul 9 13:11:59.020063 kubelet[2740]: I0709 13:11:59.020034 2740 state_mem.go:36] "Initialized new in-memory state store" Jul 9 13:11:59.020214 kubelet[2740]: I0709 13:11:59.020189 2740 kubelet.go:480] "Attempting to sync node with API server" Jul 9 13:11:59.020214 kubelet[2740]: I0709 13:11:59.020204 2740 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 13:11:59.020279 kubelet[2740]: I0709 13:11:59.020250 2740 kubelet.go:386] "Adding apiserver pod source" Jul 9 13:11:59.020279 kubelet[2740]: I0709 13:11:59.020268 2740 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 13:11:59.021526 kubelet[2740]: I0709 13:11:59.021474 2740 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 9 13:11:59.021932 kubelet[2740]: I0709 13:11:59.021913 2740 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 9 13:11:59.026084 kubelet[2740]: I0709 13:11:59.026059 2740 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 9 13:11:59.026159 kubelet[2740]: I0709 13:11:59.026118 2740 server.go:1289] "Started kubelet" Jul 9 13:11:59.026366 kubelet[2740]: I0709 13:11:59.026326 2740 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 9 13:11:59.026631 kubelet[2740]: I0709 13:11:59.026537 2740 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 13:11:59.027034 kubelet[2740]: I0709 13:11:59.027004 2740 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 
13:11:59.028338 kubelet[2740]: I0709 13:11:59.028296 2740 server.go:317] "Adding debug handlers to kubelet server" Jul 9 13:11:59.035553 kubelet[2740]: I0709 13:11:59.035387 2740 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 9 13:11:59.035661 kubelet[2740]: I0709 13:11:59.035637 2740 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 9 13:11:59.036840 kubelet[2740]: E0709 13:11:59.036818 2740 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 9 13:11:59.038468 kubelet[2740]: I0709 13:11:59.038441 2740 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 9 13:11:59.038709 kubelet[2740]: I0709 13:11:59.038690 2740 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 9 13:11:59.039111 kubelet[2740]: I0709 13:11:59.039093 2740 reconciler.go:26] "Reconciler: start to sync state" Jul 9 13:11:59.040091 kubelet[2740]: I0709 13:11:59.040061 2740 factory.go:223] Registration of the systemd container factory successfully Jul 9 13:11:59.040856 kubelet[2740]: I0709 13:11:59.040809 2740 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 13:11:59.043494 kubelet[2740]: I0709 13:11:59.043438 2740 factory.go:223] Registration of the containerd container factory successfully Jul 9 13:11:59.054687 kubelet[2740]: I0709 13:11:59.054619 2740 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 9 13:11:59.056630 kubelet[2740]: I0709 13:11:59.056604 2740 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jul 9 13:11:59.056630 kubelet[2740]: I0709 13:11:59.056623 2740 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 9 13:11:59.056708 kubelet[2740]: I0709 13:11:59.056650 2740 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 9 13:11:59.056708 kubelet[2740]: I0709 13:11:59.056660 2740 kubelet.go:2436] "Starting kubelet main sync loop" Jul 9 13:11:59.056753 kubelet[2740]: E0709 13:11:59.056710 2740 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 13:11:59.078982 kubelet[2740]: I0709 13:11:59.078954 2740 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 9 13:11:59.079664 kubelet[2740]: I0709 13:11:59.079167 2740 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 9 13:11:59.079664 kubelet[2740]: I0709 13:11:59.079193 2740 state_mem.go:36] "Initialized new in-memory state store" Jul 9 13:11:59.079664 kubelet[2740]: I0709 13:11:59.079371 2740 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 9 13:11:59.079664 kubelet[2740]: I0709 13:11:59.079383 2740 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 9 13:11:59.079664 kubelet[2740]: I0709 13:11:59.079401 2740 policy_none.go:49] "None policy: Start" Jul 9 13:11:59.079664 kubelet[2740]: I0709 13:11:59.079410 2740 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 9 13:11:59.079664 kubelet[2740]: I0709 13:11:59.079421 2740 state_mem.go:35] "Initializing new in-memory state store" Jul 9 13:11:59.079664 kubelet[2740]: I0709 13:11:59.079530 2740 state_mem.go:75] "Updated machine memory state" Jul 9 13:11:59.084028 kubelet[2740]: E0709 13:11:59.083983 2740 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 9 13:11:59.084233 kubelet[2740]: I0709 13:11:59.084213 
2740 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 13:11:59.084285 kubelet[2740]: I0709 13:11:59.084228 2740 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 13:11:59.084472 kubelet[2740]: I0709 13:11:59.084451 2740 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 13:11:59.086725 kubelet[2740]: E0709 13:11:59.086693 2740 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 9 13:11:59.157966 kubelet[2740]: I0709 13:11:59.157936 2740 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 9 13:11:59.158146 kubelet[2740]: I0709 13:11:59.158080 2740 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 9 13:11:59.158207 kubelet[2740]: I0709 13:11:59.158171 2740 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 9 13:11:59.165623 kubelet[2740]: E0709 13:11:59.165574 2740 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 9 13:11:59.190983 kubelet[2740]: I0709 13:11:59.190938 2740 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 13:11:59.197227 kubelet[2740]: I0709 13:11:59.197195 2740 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 9 13:11:59.197336 kubelet[2740]: I0709 13:11:59.197316 2740 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 9 13:11:59.240106 kubelet[2740]: I0709 13:11:59.239981 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5364074eda10c42ad5c93a27b5559fb-k8s-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"f5364074eda10c42ad5c93a27b5559fb\") " pod="kube-system/kube-apiserver-localhost" Jul 9 13:11:59.240106 kubelet[2740]: I0709 13:11:59.240023 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 13:11:59.240106 kubelet[2740]: I0709 13:11:59.240043 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 13:11:59.240106 kubelet[2740]: I0709 13:11:59.240102 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 9 13:11:59.240317 kubelet[2740]: I0709 13:11:59.240139 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5364074eda10c42ad5c93a27b5559fb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f5364074eda10c42ad5c93a27b5559fb\") " pod="kube-system/kube-apiserver-localhost" Jul 9 13:11:59.240317 kubelet[2740]: I0709 13:11:59.240163 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5364074eda10c42ad5c93a27b5559fb-usr-share-ca-certificates\") pod 
\"kube-apiserver-localhost\" (UID: \"f5364074eda10c42ad5c93a27b5559fb\") " pod="kube-system/kube-apiserver-localhost" Jul 9 13:11:59.240317 kubelet[2740]: I0709 13:11:59.240189 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 13:11:59.240317 kubelet[2740]: I0709 13:11:59.240210 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 13:11:59.240317 kubelet[2740]: I0709 13:11:59.240225 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 13:11:59.463355 kubelet[2740]: E0709 13:11:59.463271 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:11:59.466431 kubelet[2740]: E0709 13:11:59.466392 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:11:59.466431 kubelet[2740]: E0709 13:11:59.466424 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:00.108057 kubelet[2740]: I0709 13:12:00.107965 2740 apiserver.go:52] "Watching apiserver" Jul 9 13:12:00.111092 kubelet[2740]: I0709 13:12:00.111026 2740 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 9 13:12:00.112433 kubelet[2740]: E0709 13:12:00.112347 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:00.112433 kubelet[2740]: E0709 13:12:00.112360 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:00.127275 kubelet[2740]: E0709 13:12:00.125134 2740 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 9 13:12:00.127275 kubelet[2740]: E0709 13:12:00.125372 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:00.139122 kubelet[2740]: I0709 13:12:00.139068 2740 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 9 13:12:00.168514 kubelet[2740]: I0709 13:12:00.168435 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.168414758 podStartE2EDuration="3.168414758s" podCreationTimestamp="2025-07-09 13:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 13:12:00.165876106 +0000 UTC m=+1.206923449" watchObservedRunningTime="2025-07-09 13:12:00.168414758 +0000 UTC m=+1.209462101" Jul 9 13:12:00.185424 kubelet[2740]: I0709 
13:12:00.184313 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.184289454 podStartE2EDuration="1.184289454s" podCreationTimestamp="2025-07-09 13:11:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 13:12:00.177725637 +0000 UTC m=+1.218772980" watchObservedRunningTime="2025-07-09 13:12:00.184289454 +0000 UTC m=+1.225336807" Jul 9 13:12:01.112153 kubelet[2740]: E0709 13:12:01.112118 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:01.112669 kubelet[2740]: E0709 13:12:01.112192 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:02.113606 kubelet[2740]: E0709 13:12:02.113572 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:02.493795 kubelet[2740]: E0709 13:12:02.493769 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:03.457111 kubelet[2740]: E0709 13:12:03.457042 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:04.942105 kubelet[2740]: E0709 13:12:04.942040 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:04.955915 kubelet[2740]: I0709 13:12:04.955834 2740 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.95579619 podStartE2EDuration="5.95579619s" podCreationTimestamp="2025-07-09 13:11:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 13:12:00.185587965 +0000 UTC m=+1.226635308" watchObservedRunningTime="2025-07-09 13:12:04.95579619 +0000 UTC m=+5.996843533" Jul 9 13:12:05.118650 kubelet[2740]: E0709 13:12:05.118611 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:05.131052 kubelet[2740]: I0709 13:12:05.131015 2740 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 9 13:12:05.131385 containerd[1582]: time="2025-07-09T13:12:05.131342862Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 9 13:12:05.131732 kubelet[2740]: I0709 13:12:05.131574 2740 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 9 13:12:05.970793 systemd[1]: Created slice kubepods-besteffort-poda5a82184_e03c_4c33_a5b5_e46e29ef07f5.slice - libcontainer container kubepods-besteffort-poda5a82184_e03c_4c33_a5b5_e46e29ef07f5.slice. 
Jul 9 13:12:06.045435 kubelet[2740]: I0709 13:12:06.045322 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a5a82184-e03c-4c33-a5b5-e46e29ef07f5-xtables-lock\") pod \"kube-proxy-4kclj\" (UID: \"a5a82184-e03c-4c33-a5b5-e46e29ef07f5\") " pod="kube-system/kube-proxy-4kclj" Jul 9 13:12:06.045435 kubelet[2740]: I0709 13:12:06.045370 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5a82184-e03c-4c33-a5b5-e46e29ef07f5-lib-modules\") pod \"kube-proxy-4kclj\" (UID: \"a5a82184-e03c-4c33-a5b5-e46e29ef07f5\") " pod="kube-system/kube-proxy-4kclj" Jul 9 13:12:06.045435 kubelet[2740]: I0709 13:12:06.045386 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9l92\" (UniqueName: \"kubernetes.io/projected/a5a82184-e03c-4c33-a5b5-e46e29ef07f5-kube-api-access-d9l92\") pod \"kube-proxy-4kclj\" (UID: \"a5a82184-e03c-4c33-a5b5-e46e29ef07f5\") " pod="kube-system/kube-proxy-4kclj" Jul 9 13:12:06.045435 kubelet[2740]: I0709 13:12:06.045402 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a5a82184-e03c-4c33-a5b5-e46e29ef07f5-kube-proxy\") pod \"kube-proxy-4kclj\" (UID: \"a5a82184-e03c-4c33-a5b5-e46e29ef07f5\") " pod="kube-system/kube-proxy-4kclj" Jul 9 13:12:06.122219 kubelet[2740]: E0709 13:12:06.121278 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:06.133360 systemd[1]: Created slice kubepods-besteffort-pode32074a1_88c3_4684_936b_f0cbf3bb55d8.slice - libcontainer container kubepods-besteffort-pode32074a1_88c3_4684_936b_f0cbf3bb55d8.slice. 
Jul 9 13:12:06.247251 kubelet[2740]: I0709 13:12:06.247101 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e32074a1-88c3-4684-936b-f0cbf3bb55d8-var-lib-calico\") pod \"tigera-operator-747864d56d-tcnsq\" (UID: \"e32074a1-88c3-4684-936b-f0cbf3bb55d8\") " pod="tigera-operator/tigera-operator-747864d56d-tcnsq" Jul 9 13:12:06.247251 kubelet[2740]: I0709 13:12:06.247158 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whb74\" (UniqueName: \"kubernetes.io/projected/e32074a1-88c3-4684-936b-f0cbf3bb55d8-kube-api-access-whb74\") pod \"tigera-operator-747864d56d-tcnsq\" (UID: \"e32074a1-88c3-4684-936b-f0cbf3bb55d8\") " pod="tigera-operator/tigera-operator-747864d56d-tcnsq" Jul 9 13:12:06.295020 kubelet[2740]: E0709 13:12:06.294985 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:06.295625 containerd[1582]: time="2025-07-09T13:12:06.295582729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4kclj,Uid:a5a82184-e03c-4c33-a5b5-e46e29ef07f5,Namespace:kube-system,Attempt:0,}" Jul 9 13:12:06.315867 containerd[1582]: time="2025-07-09T13:12:06.315821402Z" level=info msg="connecting to shim 10259e8a8951af8ce47383e47810350ac9ba52948f210f5c36e3341cf6fc01ac" address="unix:///run/containerd/s/aecc5436682c2c04b2df32f4190b491b3615f2a7cf2160df1557b25d37cfbe81" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:12:06.343382 systemd[1]: Started cri-containerd-10259e8a8951af8ce47383e47810350ac9ba52948f210f5c36e3341cf6fc01ac.scope - libcontainer container 10259e8a8951af8ce47383e47810350ac9ba52948f210f5c36e3341cf6fc01ac. 
Jul 9 13:12:06.371283 containerd[1582]: time="2025-07-09T13:12:06.371223270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4kclj,Uid:a5a82184-e03c-4c33-a5b5-e46e29ef07f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"10259e8a8951af8ce47383e47810350ac9ba52948f210f5c36e3341cf6fc01ac\"" Jul 9 13:12:06.372041 kubelet[2740]: E0709 13:12:06.371967 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:06.377362 containerd[1582]: time="2025-07-09T13:12:06.377327518Z" level=info msg="CreateContainer within sandbox \"10259e8a8951af8ce47383e47810350ac9ba52948f210f5c36e3341cf6fc01ac\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 9 13:12:06.388038 containerd[1582]: time="2025-07-09T13:12:06.387984259Z" level=info msg="Container bd67b56ae34204be7a6509c4c34e59709aa62472ed0be497eb6ad3e0d687b28f: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:12:06.397052 containerd[1582]: time="2025-07-09T13:12:06.397015353Z" level=info msg="CreateContainer within sandbox \"10259e8a8951af8ce47383e47810350ac9ba52948f210f5c36e3341cf6fc01ac\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bd67b56ae34204be7a6509c4c34e59709aa62472ed0be497eb6ad3e0d687b28f\"" Jul 9 13:12:06.397549 containerd[1582]: time="2025-07-09T13:12:06.397505295Z" level=info msg="StartContainer for \"bd67b56ae34204be7a6509c4c34e59709aa62472ed0be497eb6ad3e0d687b28f\"" Jul 9 13:12:06.398932 containerd[1582]: time="2025-07-09T13:12:06.398908508Z" level=info msg="connecting to shim bd67b56ae34204be7a6509c4c34e59709aa62472ed0be497eb6ad3e0d687b28f" address="unix:///run/containerd/s/aecc5436682c2c04b2df32f4190b491b3615f2a7cf2160df1557b25d37cfbe81" protocol=ttrpc version=3 Jul 9 13:12:06.420390 systemd[1]: Started cri-containerd-bd67b56ae34204be7a6509c4c34e59709aa62472ed0be497eb6ad3e0d687b28f.scope - libcontainer container 
bd67b56ae34204be7a6509c4c34e59709aa62472ed0be497eb6ad3e0d687b28f. Jul 9 13:12:06.437708 containerd[1582]: time="2025-07-09T13:12:06.437632004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-tcnsq,Uid:e32074a1-88c3-4684-936b-f0cbf3bb55d8,Namespace:tigera-operator,Attempt:0,}" Jul 9 13:12:06.460563 containerd[1582]: time="2025-07-09T13:12:06.460511148Z" level=info msg="connecting to shim 19e004f08bc604763616944344c0062d4d11fb22389200639dedc174435273cb" address="unix:///run/containerd/s/6dd9b62123610b63a358efbb68b862f5633be5b669b91e75e1419fe1c507c6a6" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:12:06.471267 containerd[1582]: time="2025-07-09T13:12:06.471203526Z" level=info msg="StartContainer for \"bd67b56ae34204be7a6509c4c34e59709aa62472ed0be497eb6ad3e0d687b28f\" returns successfully" Jul 9 13:12:06.489534 systemd[1]: Started cri-containerd-19e004f08bc604763616944344c0062d4d11fb22389200639dedc174435273cb.scope - libcontainer container 19e004f08bc604763616944344c0062d4d11fb22389200639dedc174435273cb. 
Jul 9 13:12:06.537023 containerd[1582]: time="2025-07-09T13:12:06.536890405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-tcnsq,Uid:e32074a1-88c3-4684-936b-f0cbf3bb55d8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"19e004f08bc604763616944344c0062d4d11fb22389200639dedc174435273cb\"" Jul 9 13:12:06.538962 containerd[1582]: time="2025-07-09T13:12:06.538889814Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 9 13:12:07.126285 kubelet[2740]: E0709 13:12:07.126222 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:07.133728 kubelet[2740]: I0709 13:12:07.133600 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4kclj" podStartSLOduration=2.133581046 podStartE2EDuration="2.133581046s" podCreationTimestamp="2025-07-09 13:12:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 13:12:07.133308667 +0000 UTC m=+8.174356010" watchObservedRunningTime="2025-07-09 13:12:07.133581046 +0000 UTC m=+8.174628389" Jul 9 13:12:07.158565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3594740837.mount: Deactivated successfully. Jul 9 13:12:08.213439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2477090847.mount: Deactivated successfully. 
Jul 9 13:12:09.214655 containerd[1582]: time="2025-07-09T13:12:09.214595542Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:12:09.215362 containerd[1582]: time="2025-07-09T13:12:09.215325999Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 9 13:12:09.216484 containerd[1582]: time="2025-07-09T13:12:09.216443652Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:12:09.218464 containerd[1582]: time="2025-07-09T13:12:09.218432710Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:12:09.219064 containerd[1582]: time="2025-07-09T13:12:09.219019415Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.680058607s" Jul 9 13:12:09.219064 containerd[1582]: time="2025-07-09T13:12:09.219061245Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 9 13:12:09.224051 containerd[1582]: time="2025-07-09T13:12:09.224005417Z" level=info msg="CreateContainer within sandbox \"19e004f08bc604763616944344c0062d4d11fb22389200639dedc174435273cb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 9 13:12:09.234012 containerd[1582]: time="2025-07-09T13:12:09.233964586Z" level=info msg="Container 
7805a19f733c8d3736e3b2db002688b7a61c8577984f425730ccdd5b58d09bfa: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:12:09.241686 containerd[1582]: time="2025-07-09T13:12:09.241639045Z" level=info msg="CreateContainer within sandbox \"19e004f08bc604763616944344c0062d4d11fb22389200639dedc174435273cb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7805a19f733c8d3736e3b2db002688b7a61c8577984f425730ccdd5b58d09bfa\"" Jul 9 13:12:09.242175 containerd[1582]: time="2025-07-09T13:12:09.242143052Z" level=info msg="StartContainer for \"7805a19f733c8d3736e3b2db002688b7a61c8577984f425730ccdd5b58d09bfa\"" Jul 9 13:12:09.242972 containerd[1582]: time="2025-07-09T13:12:09.242949023Z" level=info msg="connecting to shim 7805a19f733c8d3736e3b2db002688b7a61c8577984f425730ccdd5b58d09bfa" address="unix:///run/containerd/s/6dd9b62123610b63a358efbb68b862f5633be5b669b91e75e1419fe1c507c6a6" protocol=ttrpc version=3 Jul 9 13:12:09.304384 systemd[1]: Started cri-containerd-7805a19f733c8d3736e3b2db002688b7a61c8577984f425730ccdd5b58d09bfa.scope - libcontainer container 7805a19f733c8d3736e3b2db002688b7a61c8577984f425730ccdd5b58d09bfa. 
Jul 9 13:12:09.444082 containerd[1582]: time="2025-07-09T13:12:09.444016314Z" level=info msg="StartContainer for \"7805a19f733c8d3736e3b2db002688b7a61c8577984f425730ccdd5b58d09bfa\" returns successfully" Jul 9 13:12:10.149948 kubelet[2740]: I0709 13:12:10.149814 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-tcnsq" podStartSLOduration=1.468213497 podStartE2EDuration="4.149798166s" podCreationTimestamp="2025-07-09 13:12:06 +0000 UTC" firstStartedPulling="2025-07-09 13:12:06.538223283 +0000 UTC m=+7.579270627" lastFinishedPulling="2025-07-09 13:12:09.219807953 +0000 UTC m=+10.260855296" observedRunningTime="2025-07-09 13:12:10.149485663 +0000 UTC m=+11.190532997" watchObservedRunningTime="2025-07-09 13:12:10.149798166 +0000 UTC m=+11.190845509" Jul 9 13:12:10.541276 update_engine[1563]: I20250709 13:12:10.540292 1563 update_attempter.cc:509] Updating boot flags... Jul 9 13:12:12.498590 kubelet[2740]: E0709 13:12:12.498543 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:13.141975 kubelet[2740]: E0709 13:12:13.141922 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:13.467796 kubelet[2740]: E0709 13:12:13.467757 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:14.569602 sudo[1803]: pam_unix(sudo:session): session closed for user root Jul 9 13:12:14.572730 sshd[1802]: Connection closed by 10.0.0.1 port 35442 Jul 9 13:12:14.574657 sshd-session[1799]: pam_unix(sshd:session): session closed for user core Jul 9 13:12:14.583930 systemd[1]: sshd@6-10.0.0.120:22-10.0.0.1:35442.service: 
Deactivated successfully. Jul 9 13:12:14.587961 systemd[1]: session-7.scope: Deactivated successfully. Jul 9 13:12:14.588256 systemd[1]: session-7.scope: Consumed 6.779s CPU time, 225.6M memory peak. Jul 9 13:12:14.590533 systemd-logind[1562]: Session 7 logged out. Waiting for processes to exit. Jul 9 13:12:14.592400 systemd-logind[1562]: Removed session 7. Jul 9 13:12:24.317635 systemd[1]: Created slice kubepods-besteffort-podeae2963e_259e_4cc6_9761_7164c9898219.slice - libcontainer container kubepods-besteffort-podeae2963e_259e_4cc6_9761_7164c9898219.slice. Jul 9 13:12:24.457493 kubelet[2740]: I0709 13:12:24.457340 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/eae2963e-259e-4cc6-9761-7164c9898219-typha-certs\") pod \"calico-typha-6679f68445-26lb8\" (UID: \"eae2963e-259e-4cc6-9761-7164c9898219\") " pod="calico-system/calico-typha-6679f68445-26lb8" Jul 9 13:12:24.457493 kubelet[2740]: I0709 13:12:24.457414 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eae2963e-259e-4cc6-9761-7164c9898219-tigera-ca-bundle\") pod \"calico-typha-6679f68445-26lb8\" (UID: \"eae2963e-259e-4cc6-9761-7164c9898219\") " pod="calico-system/calico-typha-6679f68445-26lb8" Jul 9 13:12:24.457493 kubelet[2740]: I0709 13:12:24.457438 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k59kt\" (UniqueName: \"kubernetes.io/projected/eae2963e-259e-4cc6-9761-7164c9898219-kube-api-access-k59kt\") pod \"calico-typha-6679f68445-26lb8\" (UID: \"eae2963e-259e-4cc6-9761-7164c9898219\") " pod="calico-system/calico-typha-6679f68445-26lb8" Jul 9 13:12:24.592085 systemd[1]: Created slice kubepods-besteffort-podd5a1228e_ecd2_448d_8c48_10703d323c06.slice - libcontainer container 
kubepods-besteffort-podd5a1228e_ecd2_448d_8c48_10703d323c06.slice. Jul 9 13:12:24.623024 kubelet[2740]: E0709 13:12:24.622957 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:24.623854 containerd[1582]: time="2025-07-09T13:12:24.623762113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6679f68445-26lb8,Uid:eae2963e-259e-4cc6-9761-7164c9898219,Namespace:calico-system,Attempt:0,}" Jul 9 13:12:24.659009 kubelet[2740]: I0709 13:12:24.658947 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5a1228e-ecd2-448d-8c48-10703d323c06-lib-modules\") pod \"calico-node-66wmv\" (UID: \"d5a1228e-ecd2-448d-8c48-10703d323c06\") " pod="calico-system/calico-node-66wmv" Jul 9 13:12:24.659009 kubelet[2740]: I0709 13:12:24.658990 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d5a1228e-ecd2-448d-8c48-10703d323c06-node-certs\") pod \"calico-node-66wmv\" (UID: \"d5a1228e-ecd2-448d-8c48-10703d323c06\") " pod="calico-system/calico-node-66wmv" Jul 9 13:12:24.659009 kubelet[2740]: I0709 13:12:24.659007 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d5a1228e-ecd2-448d-8c48-10703d323c06-cni-bin-dir\") pod \"calico-node-66wmv\" (UID: \"d5a1228e-ecd2-448d-8c48-10703d323c06\") " pod="calico-system/calico-node-66wmv" Jul 9 13:12:24.659009 kubelet[2740]: I0709 13:12:24.659021 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5a1228e-ecd2-448d-8c48-10703d323c06-xtables-lock\") pod \"calico-node-66wmv\" (UID: 
\"d5a1228e-ecd2-448d-8c48-10703d323c06\") " pod="calico-system/calico-node-66wmv" Jul 9 13:12:24.659405 kubelet[2740]: I0709 13:12:24.659036 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d5a1228e-ecd2-448d-8c48-10703d323c06-cni-log-dir\") pod \"calico-node-66wmv\" (UID: \"d5a1228e-ecd2-448d-8c48-10703d323c06\") " pod="calico-system/calico-node-66wmv" Jul 9 13:12:24.659405 kubelet[2740]: I0709 13:12:24.659051 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d5a1228e-ecd2-448d-8c48-10703d323c06-var-lib-calico\") pod \"calico-node-66wmv\" (UID: \"d5a1228e-ecd2-448d-8c48-10703d323c06\") " pod="calico-system/calico-node-66wmv" Jul 9 13:12:24.659405 kubelet[2740]: I0709 13:12:24.659076 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d5a1228e-ecd2-448d-8c48-10703d323c06-flexvol-driver-host\") pod \"calico-node-66wmv\" (UID: \"d5a1228e-ecd2-448d-8c48-10703d323c06\") " pod="calico-system/calico-node-66wmv" Jul 9 13:12:24.659405 kubelet[2740]: I0709 13:12:24.659094 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5a1228e-ecd2-448d-8c48-10703d323c06-tigera-ca-bundle\") pod \"calico-node-66wmv\" (UID: \"d5a1228e-ecd2-448d-8c48-10703d323c06\") " pod="calico-system/calico-node-66wmv" Jul 9 13:12:24.659405 kubelet[2740]: I0709 13:12:24.659110 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx89k\" (UniqueName: \"kubernetes.io/projected/d5a1228e-ecd2-448d-8c48-10703d323c06-kube-api-access-nx89k\") pod \"calico-node-66wmv\" (UID: \"d5a1228e-ecd2-448d-8c48-10703d323c06\") " 
pod="calico-system/calico-node-66wmv" Jul 9 13:12:24.659534 kubelet[2740]: I0709 13:12:24.659126 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d5a1228e-ecd2-448d-8c48-10703d323c06-var-run-calico\") pod \"calico-node-66wmv\" (UID: \"d5a1228e-ecd2-448d-8c48-10703d323c06\") " pod="calico-system/calico-node-66wmv" Jul 9 13:12:24.659534 kubelet[2740]: I0709 13:12:24.659139 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d5a1228e-ecd2-448d-8c48-10703d323c06-cni-net-dir\") pod \"calico-node-66wmv\" (UID: \"d5a1228e-ecd2-448d-8c48-10703d323c06\") " pod="calico-system/calico-node-66wmv" Jul 9 13:12:24.659534 kubelet[2740]: I0709 13:12:24.659164 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d5a1228e-ecd2-448d-8c48-10703d323c06-policysync\") pod \"calico-node-66wmv\" (UID: \"d5a1228e-ecd2-448d-8c48-10703d323c06\") " pod="calico-system/calico-node-66wmv" Jul 9 13:12:24.665365 containerd[1582]: time="2025-07-09T13:12:24.665314383Z" level=info msg="connecting to shim 41bbee30d6d277faeddb250cab76689ae5e1c5737763793caeeb61c2c94d8233" address="unix:///run/containerd/s/83174dd6f5318324692ae41bb2716309cbdecf867b7970b74db3a4fd8231be52" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:12:24.705876 systemd[1]: Started cri-containerd-41bbee30d6d277faeddb250cab76689ae5e1c5737763793caeeb61c2c94d8233.scope - libcontainer container 41bbee30d6d277faeddb250cab76689ae5e1c5737763793caeeb61c2c94d8233. 
Jul 9 13:12:24.761405 containerd[1582]: time="2025-07-09T13:12:24.761272343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6679f68445-26lb8,Uid:eae2963e-259e-4cc6-9761-7164c9898219,Namespace:calico-system,Attempt:0,} returns sandbox id \"41bbee30d6d277faeddb250cab76689ae5e1c5737763793caeeb61c2c94d8233\"" Jul 9 13:12:24.762854 kubelet[2740]: E0709 13:12:24.762812 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:24.767108 kubelet[2740]: E0709 13:12:24.767070 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.767180 kubelet[2740]: W0709 13:12:24.767112 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.768539 kubelet[2740]: E0709 13:12:24.768480 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:24.768807 containerd[1582]: time="2025-07-09T13:12:24.768762918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 9 13:12:24.769497 kubelet[2740]: E0709 13:12:24.769471 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.769497 kubelet[2740]: W0709 13:12:24.769491 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.769587 kubelet[2740]: E0709 13:12:24.769537 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:24.770340 kubelet[2740]: E0709 13:12:24.770320 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.770340 kubelet[2740]: W0709 13:12:24.770336 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.770418 kubelet[2740]: E0709 13:12:24.770350 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:24.770804 kubelet[2740]: E0709 13:12:24.770785 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.770804 kubelet[2740]: W0709 13:12:24.770800 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.770863 kubelet[2740]: E0709 13:12:24.770813 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:24.772809 kubelet[2740]: E0709 13:12:24.772778 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.772864 kubelet[2740]: W0709 13:12:24.772820 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.772864 kubelet[2740]: E0709 13:12:24.772835 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:24.773230 kubelet[2740]: E0709 13:12:24.773196 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.773230 kubelet[2740]: W0709 13:12:24.773216 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.773318 kubelet[2740]: E0709 13:12:24.773230 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:24.776461 kubelet[2740]: E0709 13:12:24.776389 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.776461 kubelet[2740]: W0709 13:12:24.776416 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.776461 kubelet[2740]: E0709 13:12:24.776449 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:24.880041 kubelet[2740]: E0709 13:12:24.879804 2740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hqgmx" podUID="2d4b2d60-521c-4619-9873-7765068c2eae" Jul 9 13:12:24.894532 kubelet[2740]: E0709 13:12:24.894492 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.894532 kubelet[2740]: W0709 13:12:24.894520 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.894532 kubelet[2740]: E0709 13:12:24.894544 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:24.894769 kubelet[2740]: E0709 13:12:24.894752 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.894769 kubelet[2740]: W0709 13:12:24.894765 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.894814 kubelet[2740]: E0709 13:12:24.894776 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:24.894996 kubelet[2740]: E0709 13:12:24.894979 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.894996 kubelet[2740]: W0709 13:12:24.894990 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.895064 kubelet[2740]: E0709 13:12:24.894998 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:24.895315 kubelet[2740]: E0709 13:12:24.895296 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.895315 kubelet[2740]: W0709 13:12:24.895313 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.895373 kubelet[2740]: E0709 13:12:24.895325 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:24.895583 kubelet[2740]: E0709 13:12:24.895549 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.895583 kubelet[2740]: W0709 13:12:24.895565 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.895583 kubelet[2740]: E0709 13:12:24.895576 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:24.895848 kubelet[2740]: E0709 13:12:24.895789 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.895848 kubelet[2740]: W0709 13:12:24.895820 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.895848 kubelet[2740]: E0709 13:12:24.895832 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:24.896163 kubelet[2740]: E0709 13:12:24.896065 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.896163 kubelet[2740]: W0709 13:12:24.896081 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.896163 kubelet[2740]: E0709 13:12:24.896091 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:24.896308 kubelet[2740]: E0709 13:12:24.896296 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.896308 kubelet[2740]: W0709 13:12:24.896304 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.896357 kubelet[2740]: E0709 13:12:24.896312 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:24.896530 kubelet[2740]: E0709 13:12:24.896513 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.896530 kubelet[2740]: W0709 13:12:24.896526 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.896587 kubelet[2740]: E0709 13:12:24.896536 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:24.896742 kubelet[2740]: E0709 13:12:24.896726 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.896742 kubelet[2740]: W0709 13:12:24.896738 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.896790 kubelet[2740]: E0709 13:12:24.896748 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:24.896955 kubelet[2740]: E0709 13:12:24.896937 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.896955 kubelet[2740]: W0709 13:12:24.896950 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.897029 kubelet[2740]: E0709 13:12:24.896960 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:24.897189 kubelet[2740]: E0709 13:12:24.897171 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.897189 kubelet[2740]: W0709 13:12:24.897185 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.897256 kubelet[2740]: E0709 13:12:24.897196 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:24.897559 kubelet[2740]: E0709 13:12:24.897531 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.897602 kubelet[2740]: W0709 13:12:24.897559 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.897602 kubelet[2740]: E0709 13:12:24.897587 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:24.897771 kubelet[2740]: E0709 13:12:24.897757 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.897771 kubelet[2740]: W0709 13:12:24.897769 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.897825 kubelet[2740]: E0709 13:12:24.897779 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:24.897994 kubelet[2740]: E0709 13:12:24.897979 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.897994 kubelet[2740]: W0709 13:12:24.897991 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.898060 kubelet[2740]: E0709 13:12:24.898000 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:24.898100 containerd[1582]: time="2025-07-09T13:12:24.897994318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-66wmv,Uid:d5a1228e-ecd2-448d-8c48-10703d323c06,Namespace:calico-system,Attempt:0,}" Jul 9 13:12:24.898401 kubelet[2740]: E0709 13:12:24.898341 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.898401 kubelet[2740]: W0709 13:12:24.898353 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.898401 kubelet[2740]: E0709 13:12:24.898362 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:24.898628 kubelet[2740]: E0709 13:12:24.898608 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.898628 kubelet[2740]: W0709 13:12:24.898619 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.898628 kubelet[2740]: E0709 13:12:24.898628 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:24.898903 kubelet[2740]: E0709 13:12:24.898869 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.898903 kubelet[2740]: W0709 13:12:24.898877 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.898903 kubelet[2740]: E0709 13:12:24.898896 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:24.899088 kubelet[2740]: E0709 13:12:24.899069 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.899088 kubelet[2740]: W0709 13:12:24.899080 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.899168 kubelet[2740]: E0709 13:12:24.899101 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:24.899324 kubelet[2740]: E0709 13:12:24.899305 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.899324 kubelet[2740]: W0709 13:12:24.899317 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.899324 kubelet[2740]: E0709 13:12:24.899326 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:24.924349 containerd[1582]: time="2025-07-09T13:12:24.924269900Z" level=info msg="connecting to shim 55e2bd2284c8a1ea7ecf1f3e7554c4098dab6a70ebe28522fa0eeb13096c9050" address="unix:///run/containerd/s/e3b995f3a2ff50f53882b8ec7f88983c84a2ca9a0d4dfc405a8296aaac3364c8" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:12:24.954452 systemd[1]: Started cri-containerd-55e2bd2284c8a1ea7ecf1f3e7554c4098dab6a70ebe28522fa0eeb13096c9050.scope - libcontainer container 55e2bd2284c8a1ea7ecf1f3e7554c4098dab6a70ebe28522fa0eeb13096c9050. 
Jul 9 13:12:24.963003 kubelet[2740]: E0709 13:12:24.962846 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.963003 kubelet[2740]: W0709 13:12:24.962870 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.963003 kubelet[2740]: E0709 13:12:24.962935 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:24.963003 kubelet[2740]: I0709 13:12:24.962997 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qxcm\" (UniqueName: \"kubernetes.io/projected/2d4b2d60-521c-4619-9873-7765068c2eae-kube-api-access-4qxcm\") pod \"csi-node-driver-hqgmx\" (UID: \"2d4b2d60-521c-4619-9873-7765068c2eae\") " pod="calico-system/csi-node-driver-hqgmx" Jul 9 13:12:24.963779 kubelet[2740]: E0709 13:12:24.963255 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.963779 kubelet[2740]: W0709 13:12:24.963273 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.963779 kubelet[2740]: E0709 13:12:24.963282 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:24.963779 kubelet[2740]: I0709 13:12:24.963330 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2d4b2d60-521c-4619-9873-7765068c2eae-socket-dir\") pod \"csi-node-driver-hqgmx\" (UID: \"2d4b2d60-521c-4619-9873-7765068c2eae\") " pod="calico-system/csi-node-driver-hqgmx" Jul 9 13:12:24.963779 kubelet[2740]: E0709 13:12:24.963737 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.963779 kubelet[2740]: W0709 13:12:24.963748 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.963779 kubelet[2740]: E0709 13:12:24.963758 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:24.963979 kubelet[2740]: I0709 13:12:24.963784 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2d4b2d60-521c-4619-9873-7765068c2eae-registration-dir\") pod \"csi-node-driver-hqgmx\" (UID: \"2d4b2d60-521c-4619-9873-7765068c2eae\") " pod="calico-system/csi-node-driver-hqgmx" Jul 9 13:12:24.964504 kubelet[2740]: E0709 13:12:24.964193 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.964504 kubelet[2740]: W0709 13:12:24.964207 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.964504 kubelet[2740]: E0709 13:12:24.964259 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:24.964599 kubelet[2740]: E0709 13:12:24.964554 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.964633 kubelet[2740]: W0709 13:12:24.964611 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.964633 kubelet[2740]: E0709 13:12:24.964622 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:24.965263 kubelet[2740]: E0709 13:12:24.965060 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.965263 kubelet[2740]: W0709 13:12:24.965075 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.965263 kubelet[2740]: E0709 13:12:24.965084 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:24.965569 kubelet[2740]: E0709 13:12:24.965428 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.965569 kubelet[2740]: W0709 13:12:24.965468 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.965569 kubelet[2740]: E0709 13:12:24.965478 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:24.966421 kubelet[2740]: E0709 13:12:24.966399 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.966421 kubelet[2740]: W0709 13:12:24.966416 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.966493 kubelet[2740]: E0709 13:12:24.966427 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:24.966628 kubelet[2740]: I0709 13:12:24.966510 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2d4b2d60-521c-4619-9873-7765068c2eae-varrun\") pod \"csi-node-driver-hqgmx\" (UID: \"2d4b2d60-521c-4619-9873-7765068c2eae\") " pod="calico-system/csi-node-driver-hqgmx" Jul 9 13:12:24.966890 kubelet[2740]: E0709 13:12:24.966842 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.966890 kubelet[2740]: W0709 13:12:24.966856 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.966890 kubelet[2740]: E0709 13:12:24.966865 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:24.967714 kubelet[2740]: E0709 13:12:24.967692 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.967714 kubelet[2740]: W0709 13:12:24.967707 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.967714 kubelet[2740]: E0709 13:12:24.967717 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:24.968036 kubelet[2740]: E0709 13:12:24.968006 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.968036 kubelet[2740]: W0709 13:12:24.968022 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.968036 kubelet[2740]: E0709 13:12:24.968031 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:24.968120 kubelet[2740]: I0709 13:12:24.968068 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d4b2d60-521c-4619-9873-7765068c2eae-kubelet-dir\") pod \"csi-node-driver-hqgmx\" (UID: \"2d4b2d60-521c-4619-9873-7765068c2eae\") " pod="calico-system/csi-node-driver-hqgmx" Jul 9 13:12:24.968462 kubelet[2740]: E0709 13:12:24.968440 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.968462 kubelet[2740]: W0709 13:12:24.968459 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.968462 kubelet[2740]: E0709 13:12:24.968472 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:24.968859 kubelet[2740]: E0709 13:12:24.968820 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.968859 kubelet[2740]: W0709 13:12:24.968856 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.968947 kubelet[2740]: E0709 13:12:24.968869 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:24.969454 kubelet[2740]: E0709 13:12:24.969379 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.969454 kubelet[2740]: W0709 13:12:24.969397 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.969454 kubelet[2740]: E0709 13:12:24.969411 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:24.970259 kubelet[2740]: E0709 13:12:24.970171 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:24.970259 kubelet[2740]: W0709 13:12:24.970192 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:24.970259 kubelet[2740]: E0709 13:12:24.970205 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:25.001683 containerd[1582]: time="2025-07-09T13:12:25.001631277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-66wmv,Uid:d5a1228e-ecd2-448d-8c48-10703d323c06,Namespace:calico-system,Attempt:0,} returns sandbox id \"55e2bd2284c8a1ea7ecf1f3e7554c4098dab6a70ebe28522fa0eeb13096c9050\"" Jul 9 13:12:25.069478 kubelet[2740]: E0709 13:12:25.069445 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:25.069478 kubelet[2740]: W0709 13:12:25.069465 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:25.069478 kubelet[2740]: E0709 13:12:25.069485 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:25.069700 kubelet[2740]: E0709 13:12:25.069684 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:25.069700 kubelet[2740]: W0709 13:12:25.069695 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:25.069765 kubelet[2740]: E0709 13:12:25.069703 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:25.069921 kubelet[2740]: E0709 13:12:25.069906 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:25.069921 kubelet[2740]: W0709 13:12:25.069916 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:25.069972 kubelet[2740]: E0709 13:12:25.069925 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:25.070288 kubelet[2740]: E0709 13:12:25.070269 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:25.070329 kubelet[2740]: W0709 13:12:25.070286 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:25.070329 kubelet[2740]: E0709 13:12:25.070304 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:25.070508 kubelet[2740]: E0709 13:12:25.070495 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:25.070508 kubelet[2740]: W0709 13:12:25.070504 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:25.070567 kubelet[2740]: E0709 13:12:25.070515 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:25.070726 kubelet[2740]: E0709 13:12:25.070713 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:25.070726 kubelet[2740]: W0709 13:12:25.070721 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:25.070785 kubelet[2740]: E0709 13:12:25.070729 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:25.071001 kubelet[2740]: E0709 13:12:25.070962 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:25.071001 kubelet[2740]: W0709 13:12:25.070972 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:25.071001 kubelet[2740]: E0709 13:12:25.070981 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:25.071327 kubelet[2740]: E0709 13:12:25.071135 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:25.071327 kubelet[2740]: W0709 13:12:25.071142 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:25.071327 kubelet[2740]: E0709 13:12:25.071150 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:25.071433 kubelet[2740]: E0709 13:12:25.071339 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:25.071433 kubelet[2740]: W0709 13:12:25.071350 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:25.071433 kubelet[2740]: E0709 13:12:25.071361 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:25.071600 kubelet[2740]: E0709 13:12:25.071579 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:25.071600 kubelet[2740]: W0709 13:12:25.071590 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:25.071600 kubelet[2740]: E0709 13:12:25.071600 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:25.071784 kubelet[2740]: E0709 13:12:25.071766 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:25.071784 kubelet[2740]: W0709 13:12:25.071776 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:25.071784 kubelet[2740]: E0709 13:12:25.071783 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:25.071961 kubelet[2740]: E0709 13:12:25.071933 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:25.071961 kubelet[2740]: W0709 13:12:25.071942 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:25.071961 kubelet[2740]: E0709 13:12:25.071950 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:25.072187 kubelet[2740]: E0709 13:12:25.072169 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:25.072187 kubelet[2740]: W0709 13:12:25.072182 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:25.072297 kubelet[2740]: E0709 13:12:25.072197 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:25.072418 kubelet[2740]: E0709 13:12:25.072394 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:25.072418 kubelet[2740]: W0709 13:12:25.072405 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:25.072418 kubelet[2740]: E0709 13:12:25.072413 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:25.072583 kubelet[2740]: E0709 13:12:25.072564 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:25.072583 kubelet[2740]: W0709 13:12:25.072573 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:25.072583 kubelet[2740]: E0709 13:12:25.072580 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:25.072772 kubelet[2740]: E0709 13:12:25.072733 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:25.072772 kubelet[2740]: W0709 13:12:25.072750 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:25.072772 kubelet[2740]: E0709 13:12:25.072758 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:25.072998 kubelet[2740]: E0709 13:12:25.072976 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:25.072998 kubelet[2740]: W0709 13:12:25.072989 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:25.072998 kubelet[2740]: E0709 13:12:25.072999 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:25.073213 kubelet[2740]: E0709 13:12:25.073188 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:25.073213 kubelet[2740]: W0709 13:12:25.073200 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:25.073213 kubelet[2740]: E0709 13:12:25.073211 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:25.073455 kubelet[2740]: E0709 13:12:25.073432 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:25.073455 kubelet[2740]: W0709 13:12:25.073443 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:25.073455 kubelet[2740]: E0709 13:12:25.073451 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:25.073893 kubelet[2740]: E0709 13:12:25.073851 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:25.073967 kubelet[2740]: W0709 13:12:25.073889 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:25.073967 kubelet[2740]: E0709 13:12:25.073922 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:25.074294 kubelet[2740]: E0709 13:12:25.074275 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:25.074294 kubelet[2740]: W0709 13:12:25.074287 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:25.074348 kubelet[2740]: E0709 13:12:25.074297 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:25.074480 kubelet[2740]: E0709 13:12:25.074465 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:25.074480 kubelet[2740]: W0709 13:12:25.074475 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:25.074552 kubelet[2740]: E0709 13:12:25.074483 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:25.074717 kubelet[2740]: E0709 13:12:25.074684 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:25.074717 kubelet[2740]: W0709 13:12:25.074695 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:25.074717 kubelet[2740]: E0709 13:12:25.074703 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:25.074921 kubelet[2740]: E0709 13:12:25.074890 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:25.074921 kubelet[2740]: W0709 13:12:25.074903 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:25.074921 kubelet[2740]: E0709 13:12:25.074911 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:25.075154 kubelet[2740]: E0709 13:12:25.075118 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:25.075154 kubelet[2740]: W0709 13:12:25.075125 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:25.075154 kubelet[2740]: E0709 13:12:25.075133 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:25.081745 kubelet[2740]: E0709 13:12:25.081709 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:25.081745 kubelet[2740]: W0709 13:12:25.081723 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:25.081745 kubelet[2740]: E0709 13:12:25.081734 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:26.527622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2636522301.mount: Deactivated successfully. 
Jul 9 13:12:27.059685 kubelet[2740]: E0709 13:12:27.059635 2740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hqgmx" podUID="2d4b2d60-521c-4619-9873-7765068c2eae" Jul 9 13:12:27.797614 containerd[1582]: time="2025-07-09T13:12:27.797562707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:12:27.798281 containerd[1582]: time="2025-07-09T13:12:27.798217861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 9 13:12:27.799329 containerd[1582]: time="2025-07-09T13:12:27.799293285Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:12:27.802101 containerd[1582]: time="2025-07-09T13:12:27.802066436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:12:27.802513 containerd[1582]: time="2025-07-09T13:12:27.802481618Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 3.033676911s" Jul 9 13:12:27.802513 containerd[1582]: time="2025-07-09T13:12:27.802509200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference 
\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 9 13:12:27.803475 containerd[1582]: time="2025-07-09T13:12:27.803450502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 9 13:12:27.818581 containerd[1582]: time="2025-07-09T13:12:27.818536815Z" level=info msg="CreateContainer within sandbox \"41bbee30d6d277faeddb250cab76689ae5e1c5737763793caeeb61c2c94d8233\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 9 13:12:27.826168 containerd[1582]: time="2025-07-09T13:12:27.826116375Z" level=info msg="Container a44aa387e268b57e7080c468240993531a5bc7a4550ddbcb5fa856dc09d7cae6: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:12:27.834900 containerd[1582]: time="2025-07-09T13:12:27.834860498Z" level=info msg="CreateContainer within sandbox \"41bbee30d6d277faeddb250cab76689ae5e1c5737763793caeeb61c2c94d8233\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a44aa387e268b57e7080c468240993531a5bc7a4550ddbcb5fa856dc09d7cae6\"" Jul 9 13:12:27.835328 containerd[1582]: time="2025-07-09T13:12:27.835301137Z" level=info msg="StartContainer for \"a44aa387e268b57e7080c468240993531a5bc7a4550ddbcb5fa856dc09d7cae6\"" Jul 9 13:12:27.836444 containerd[1582]: time="2025-07-09T13:12:27.836415556Z" level=info msg="connecting to shim a44aa387e268b57e7080c468240993531a5bc7a4550ddbcb5fa856dc09d7cae6" address="unix:///run/containerd/s/83174dd6f5318324692ae41bb2716309cbdecf867b7970b74db3a4fd8231be52" protocol=ttrpc version=3 Jul 9 13:12:27.858387 systemd[1]: Started cri-containerd-a44aa387e268b57e7080c468240993531a5bc7a4550ddbcb5fa856dc09d7cae6.scope - libcontainer container a44aa387e268b57e7080c468240993531a5bc7a4550ddbcb5fa856dc09d7cae6. 
Jul 9 13:12:27.909286 containerd[1582]: time="2025-07-09T13:12:27.909216139Z" level=info msg="StartContainer for \"a44aa387e268b57e7080c468240993531a5bc7a4550ddbcb5fa856dc09d7cae6\" returns successfully" Jul 9 13:12:28.173100 kubelet[2740]: E0709 13:12:28.172909 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:28.185280 kubelet[2740]: I0709 13:12:28.185111 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6679f68445-26lb8" podStartSLOduration=1.149152343 podStartE2EDuration="4.185080865s" podCreationTimestamp="2025-07-09 13:12:24 +0000 UTC" firstStartedPulling="2025-07-09 13:12:24.767391664 +0000 UTC m=+25.808439007" lastFinishedPulling="2025-07-09 13:12:27.803320156 +0000 UTC m=+28.844367529" observedRunningTime="2025-07-09 13:12:28.184586515 +0000 UTC m=+29.225633858" watchObservedRunningTime="2025-07-09 13:12:28.185080865 +0000 UTC m=+29.226128208" Jul 9 13:12:28.219902 kubelet[2740]: E0709 13:12:28.219859 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:28.219902 kubelet[2740]: W0709 13:12:28.219890 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:28.220072 kubelet[2740]: E0709 13:12:28.219918 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:28.220227 kubelet[2740]: E0709 13:12:28.220204 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:28.220227 kubelet[2740]: W0709 13:12:28.220215 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:28.220227 kubelet[2740]: E0709 13:12:28.220224 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:28.220462 kubelet[2740]: E0709 13:12:28.220435 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:28.220462 kubelet[2740]: W0709 13:12:28.220457 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:28.220524 kubelet[2740]: E0709 13:12:28.220480 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:28.220736 kubelet[2740]: E0709 13:12:28.220720 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:28.220736 kubelet[2740]: W0709 13:12:28.220731 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:28.220800 kubelet[2740]: E0709 13:12:28.220740 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:28.220981 kubelet[2740]: E0709 13:12:28.220965 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:28.220981 kubelet[2740]: W0709 13:12:28.220976 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:28.221068 kubelet[2740]: E0709 13:12:28.220985 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:28.221157 kubelet[2740]: E0709 13:12:28.221141 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:28.221157 kubelet[2740]: W0709 13:12:28.221150 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:28.221157 kubelet[2740]: E0709 13:12:28.221158 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:28.221324 kubelet[2740]: E0709 13:12:28.221309 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:28.221324 kubelet[2740]: W0709 13:12:28.221318 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:28.221378 kubelet[2740]: E0709 13:12:28.221326 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:28.221472 kubelet[2740]: E0709 13:12:28.221459 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:28.221472 kubelet[2740]: W0709 13:12:28.221468 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:28.221517 kubelet[2740]: E0709 13:12:28.221475 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:28.221667 kubelet[2740]: E0709 13:12:28.221651 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:28.221667 kubelet[2740]: W0709 13:12:28.221660 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:28.221734 kubelet[2740]: E0709 13:12:28.221669 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:28.221841 kubelet[2740]: E0709 13:12:28.221826 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:28.221841 kubelet[2740]: W0709 13:12:28.221836 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:28.221890 kubelet[2740]: E0709 13:12:28.221845 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:28.221996 kubelet[2740]: E0709 13:12:28.221981 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:28.221996 kubelet[2740]: W0709 13:12:28.221991 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:28.222054 kubelet[2740]: E0709 13:12:28.221999 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:28.222151 kubelet[2740]: E0709 13:12:28.222137 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:28.222151 kubelet[2740]: W0709 13:12:28.222146 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:28.222206 kubelet[2740]: E0709 13:12:28.222154 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:28.222334 kubelet[2740]: E0709 13:12:28.222317 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:28.222334 kubelet[2740]: W0709 13:12:28.222328 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:28.222394 kubelet[2740]: E0709 13:12:28.222336 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:28.222520 kubelet[2740]: E0709 13:12:28.222505 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:28.222520 kubelet[2740]: W0709 13:12:28.222515 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:28.222570 kubelet[2740]: E0709 13:12:28.222522 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:28.222732 kubelet[2740]: E0709 13:12:28.222697 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:28.222732 kubelet[2740]: W0709 13:12:28.222709 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:28.222732 kubelet[2740]: E0709 13:12:28.222717 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 13:12:28.293794 kubelet[2740]: E0709 13:12:28.293767 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:28.293794 kubelet[2740]: W0709 13:12:28.293785 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:28.293794 kubelet[2740]: E0709 13:12:28.293801 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 13:12:28.294075 kubelet[2740]: E0709 13:12:28.294054 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 13:12:28.294075 kubelet[2740]: W0709 13:12:28.294065 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 13:12:28.294075 kubelet[2740]: E0709 13:12:28.294074 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 9 13:12:28.294322 kubelet[2740]: E0709 13:12:28.294299 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:28.294322 kubelet[2740]: W0709 13:12:28.294310 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:28.294322 kubelet[2740]: E0709 13:12:28.294321 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:28.294647 kubelet[2740]: E0709 13:12:28.294609 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:28.294647 kubelet[2740]: W0709 13:12:28.294633 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:28.294792 kubelet[2740]: E0709 13:12:28.294659 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:28.294908 kubelet[2740]: E0709 13:12:28.294892 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:28.294908 kubelet[2740]: W0709 13:12:28.294902 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:28.294955 kubelet[2740]: E0709 13:12:28.294910 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:28.295085 kubelet[2740]: E0709 13:12:28.295071 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:28.295085 kubelet[2740]: W0709 13:12:28.295081 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:28.295138 kubelet[2740]: E0709 13:12:28.295089 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:28.295334 kubelet[2740]: E0709 13:12:28.295319 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:28.295334 kubelet[2740]: W0709 13:12:28.295331 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:28.295404 kubelet[2740]: E0709 13:12:28.295339 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:28.295542 kubelet[2740]: E0709 13:12:28.295527 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:28.295542 kubelet[2740]: W0709 13:12:28.295537 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:28.295613 kubelet[2740]: E0709 13:12:28.295545 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:28.295754 kubelet[2740]: E0709 13:12:28.295740 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:28.295754 kubelet[2740]: W0709 13:12:28.295750 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:28.295833 kubelet[2740]: E0709 13:12:28.295758 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:28.295999 kubelet[2740]: E0709 13:12:28.295976 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:28.295999 kubelet[2740]: W0709 13:12:28.295996 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:28.296079 kubelet[2740]: E0709 13:12:28.296016 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:28.296220 kubelet[2740]: E0709 13:12:28.296201 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:28.296220 kubelet[2740]: W0709 13:12:28.296212 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:28.296220 kubelet[2740]: E0709 13:12:28.296220 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:28.296492 kubelet[2740]: E0709 13:12:28.296477 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:28.296492 kubelet[2740]: W0709 13:12:28.296488 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:28.296550 kubelet[2740]: E0709 13:12:28.296496 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:28.296876 kubelet[2740]: E0709 13:12:28.296855 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:28.296914 kubelet[2740]: W0709 13:12:28.296871 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:28.296914 kubelet[2740]: E0709 13:12:28.296889 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:28.297091 kubelet[2740]: E0709 13:12:28.297075 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:28.297091 kubelet[2740]: W0709 13:12:28.297086 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:28.297149 kubelet[2740]: E0709 13:12:28.297095 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:28.297286 kubelet[2740]: E0709 13:12:28.297271 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:28.297286 kubelet[2740]: W0709 13:12:28.297282 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:28.297340 kubelet[2740]: E0709 13:12:28.297290 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:28.297472 kubelet[2740]: E0709 13:12:28.297457 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:28.297472 kubelet[2740]: W0709 13:12:28.297468 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:28.297528 kubelet[2740]: E0709 13:12:28.297476 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:28.297741 kubelet[2740]: E0709 13:12:28.297724 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:28.297741 kubelet[2740]: W0709 13:12:28.297736 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:28.297794 kubelet[2740]: E0709 13:12:28.297746 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:28.297959 kubelet[2740]: E0709 13:12:28.297944 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:28.297959 kubelet[2740]: W0709 13:12:28.297954 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:28.298007 kubelet[2740]: E0709 13:12:28.297963 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.058053 kubelet[2740]: E0709 13:12:29.057991 2740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hqgmx" podUID="2d4b2d60-521c-4619-9873-7765068c2eae"
Jul 9 13:12:29.175743 kubelet[2740]: E0709 13:12:29.175708 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 13:12:29.208368 containerd[1582]: time="2025-07-09T13:12:29.208311854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:12:29.209047 containerd[1582]: time="2025-07-09T13:12:29.209014296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956"
Jul 9 13:12:29.210057 containerd[1582]: time="2025-07-09T13:12:29.210023435Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:12:29.212328 containerd[1582]: time="2025-07-09T13:12:29.212287125Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:12:29.212775 containerd[1582]: time="2025-07-09T13:12:29.212738674Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.409264389s"
Jul 9 13:12:29.212775 containerd[1582]: time="2025-07-09T13:12:29.212769823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\""
Jul 9 13:12:29.217154 containerd[1582]: time="2025-07-09T13:12:29.217102116Z" level=info msg="CreateContainer within sandbox \"55e2bd2284c8a1ea7ecf1f3e7554c4098dab6a70ebe28522fa0eeb13096c9050\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jul 9 13:12:29.226611 kubelet[2740]: E0709 13:12:29.226571 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.226611 kubelet[2740]: W0709 13:12:29.226595 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.226786 kubelet[2740]: E0709 13:12:29.226630 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.227006 kubelet[2740]: E0709 13:12:29.226967 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.227006 kubelet[2740]: W0709 13:12:29.226994 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.227203 kubelet[2740]: E0709 13:12:29.227023 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.227419 kubelet[2740]: E0709 13:12:29.227392 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.227419 kubelet[2740]: W0709 13:12:29.227411 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.227542 kubelet[2740]: E0709 13:12:29.227432 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.228664 kubelet[2740]: E0709 13:12:29.227627 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.228664 kubelet[2740]: W0709 13:12:29.227641 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.228664 kubelet[2740]: E0709 13:12:29.227649 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.228664 kubelet[2740]: E0709 13:12:29.227805 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.228664 kubelet[2740]: W0709 13:12:29.227812 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.228664 kubelet[2740]: E0709 13:12:29.227820 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.228664 kubelet[2740]: E0709 13:12:29.227948 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.228664 kubelet[2740]: W0709 13:12:29.227955 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.228664 kubelet[2740]: E0709 13:12:29.227961 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.228664 kubelet[2740]: E0709 13:12:29.228085 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.228922 containerd[1582]: time="2025-07-09T13:12:29.227645496Z" level=info msg="Container 0c68fb3ae269f12ab5ccae90a5fa3e70b04f14c61374b011e710176a219807e9: CDI devices from CRI Config.CDIDevices: []"
Jul 9 13:12:29.228949 kubelet[2740]: W0709 13:12:29.228090 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.228949 kubelet[2740]: E0709 13:12:29.228098 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.228949 kubelet[2740]: E0709 13:12:29.228219 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.228949 kubelet[2740]: W0709 13:12:29.228225 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.228949 kubelet[2740]: E0709 13:12:29.228232 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.228949 kubelet[2740]: E0709 13:12:29.228394 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.228949 kubelet[2740]: W0709 13:12:29.228400 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.228949 kubelet[2740]: E0709 13:12:29.228407 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.228949 kubelet[2740]: E0709 13:12:29.228529 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.228949 kubelet[2740]: W0709 13:12:29.228534 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.229183 kubelet[2740]: E0709 13:12:29.228541 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.229183 kubelet[2740]: E0709 13:12:29.228672 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.229183 kubelet[2740]: W0709 13:12:29.228678 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.229183 kubelet[2740]: E0709 13:12:29.228686 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.229183 kubelet[2740]: E0709 13:12:29.228992 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.229183 kubelet[2740]: W0709 13:12:29.229001 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.229183 kubelet[2740]: E0709 13:12:29.229010 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.229347 kubelet[2740]: E0709 13:12:29.229248 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.229347 kubelet[2740]: W0709 13:12:29.229256 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.229347 kubelet[2740]: E0709 13:12:29.229266 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.229439 kubelet[2740]: E0709 13:12:29.229423 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.229439 kubelet[2740]: W0709 13:12:29.229436 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.229497 kubelet[2740]: E0709 13:12:29.229444 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.229618 kubelet[2740]: E0709 13:12:29.229597 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.229618 kubelet[2740]: W0709 13:12:29.229612 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.229618 kubelet[2740]: E0709 13:12:29.229620 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.238085 containerd[1582]: time="2025-07-09T13:12:29.238041449Z" level=info msg="CreateContainer within sandbox \"55e2bd2284c8a1ea7ecf1f3e7554c4098dab6a70ebe28522fa0eeb13096c9050\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0c68fb3ae269f12ab5ccae90a5fa3e70b04f14c61374b011e710176a219807e9\""
Jul 9 13:12:29.238642 containerd[1582]: time="2025-07-09T13:12:29.238614097Z" level=info msg="StartContainer for \"0c68fb3ae269f12ab5ccae90a5fa3e70b04f14c61374b011e710176a219807e9\""
Jul 9 13:12:29.240090 containerd[1582]: time="2025-07-09T13:12:29.240036083Z" level=info msg="connecting to shim 0c68fb3ae269f12ab5ccae90a5fa3e70b04f14c61374b011e710176a219807e9" address="unix:///run/containerd/s/e3b995f3a2ff50f53882b8ec7f88983c84a2ca9a0d4dfc405a8296aaac3364c8" protocol=ttrpc version=3
Jul 9 13:12:29.277380 systemd[1]: Started cri-containerd-0c68fb3ae269f12ab5ccae90a5fa3e70b04f14c61374b011e710176a219807e9.scope - libcontainer container 0c68fb3ae269f12ab5ccae90a5fa3e70b04f14c61374b011e710176a219807e9.
Jul 9 13:12:29.300941 kubelet[2740]: E0709 13:12:29.300907 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.301020 kubelet[2740]: W0709 13:12:29.300939 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.301020 kubelet[2740]: E0709 13:12:29.300969 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.302571 kubelet[2740]: E0709 13:12:29.302296 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.302571 kubelet[2740]: W0709 13:12:29.302317 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.302571 kubelet[2740]: E0709 13:12:29.302331 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.306561 kubelet[2740]: E0709 13:12:29.306535 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.306561 kubelet[2740]: W0709 13:12:29.306553 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.306633 kubelet[2740]: E0709 13:12:29.306566 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.306923 kubelet[2740]: E0709 13:12:29.306881 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.306923 kubelet[2740]: W0709 13:12:29.306904 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.306923 kubelet[2740]: E0709 13:12:29.306916 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.307191 kubelet[2740]: E0709 13:12:29.307158 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.307191 kubelet[2740]: W0709 13:12:29.307178 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.307272 kubelet[2740]: E0709 13:12:29.307192 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.307471 kubelet[2740]: E0709 13:12:29.307453 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.307471 kubelet[2740]: W0709 13:12:29.307467 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.307555 kubelet[2740]: E0709 13:12:29.307479 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.307771 kubelet[2740]: E0709 13:12:29.307745 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.307771 kubelet[2740]: W0709 13:12:29.307767 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.307836 kubelet[2740]: E0709 13:12:29.307792 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.308162 kubelet[2740]: E0709 13:12:29.308086 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.308429 kubelet[2740]: W0709 13:12:29.308214 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.308429 kubelet[2740]: E0709 13:12:29.308250 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.310133 kubelet[2740]: E0709 13:12:29.310114 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.310133 kubelet[2740]: W0709 13:12:29.310128 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.310209 kubelet[2740]: E0709 13:12:29.310139 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.310776 kubelet[2740]: E0709 13:12:29.310749 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.311116 kubelet[2740]: W0709 13:12:29.311089 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.311116 kubelet[2740]: E0709 13:12:29.311110 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.311450 kubelet[2740]: E0709 13:12:29.311403 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.311450 kubelet[2740]: W0709 13:12:29.311419 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.311450 kubelet[2740]: E0709 13:12:29.311429 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.311901 kubelet[2740]: E0709 13:12:29.311673 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.311901 kubelet[2740]: W0709 13:12:29.311687 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.311901 kubelet[2740]: E0709 13:12:29.311719 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.312095 kubelet[2740]: E0709 13:12:29.312043 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.312095 kubelet[2740]: W0709 13:12:29.312056 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.312095 kubelet[2740]: E0709 13:12:29.312066 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.313391 kubelet[2740]: E0709 13:12:29.313317 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.313391 kubelet[2740]: W0709 13:12:29.313331 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.313391 kubelet[2740]: E0709 13:12:29.313341 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.313851 kubelet[2740]: E0709 13:12:29.313824 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.313851 kubelet[2740]: W0709 13:12:29.313841 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.313851 kubelet[2740]: E0709 13:12:29.313851 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.314127 kubelet[2740]: E0709 13:12:29.314108 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.314127 kubelet[2740]: W0709 13:12:29.314119 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.314127 kubelet[2740]: E0709 13:12:29.314129 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.314659 kubelet[2740]: E0709 13:12:29.314481 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.314659 kubelet[2740]: W0709 13:12:29.314529 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.314659 kubelet[2740]: E0709 13:12:29.314539 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.315633 kubelet[2740]: E0709 13:12:29.315560 2740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 9 13:12:29.315633 kubelet[2740]: W0709 13:12:29.315580 2740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 9 13:12:29.315633 kubelet[2740]: E0709 13:12:29.315596 2740 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 9 13:12:29.335576 containerd[1582]: time="2025-07-09T13:12:29.335497821Z" level=info msg="StartContainer for \"0c68fb3ae269f12ab5ccae90a5fa3e70b04f14c61374b011e710176a219807e9\" returns successfully"
Jul 9 13:12:29.345762 systemd[1]: cri-containerd-0c68fb3ae269f12ab5ccae90a5fa3e70b04f14c61374b011e710176a219807e9.scope: Deactivated successfully.
Jul 9 13:12:29.349151 containerd[1582]: time="2025-07-09T13:12:29.349101991Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c68fb3ae269f12ab5ccae90a5fa3e70b04f14c61374b011e710176a219807e9\" id:\"0c68fb3ae269f12ab5ccae90a5fa3e70b04f14c61374b011e710176a219807e9\" pid:3468 exited_at:{seconds:1752066749 nanos:348393988}"
Jul 9 13:12:29.349657 containerd[1582]: time="2025-07-09T13:12:29.349195157Z" level=info msg="received exit event container_id:\"0c68fb3ae269f12ab5ccae90a5fa3e70b04f14c61374b011e710176a219807e9\" id:\"0c68fb3ae269f12ab5ccae90a5fa3e70b04f14c61374b011e710176a219807e9\" pid:3468 exited_at:{seconds:1752066749 nanos:348393988}"
Jul 9 13:12:29.374256 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c68fb3ae269f12ab5ccae90a5fa3e70b04f14c61374b011e710176a219807e9-rootfs.mount: Deactivated successfully.
Jul 9 13:12:30.179254 kubelet[2740]: E0709 13:12:30.179201 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 13:12:30.180228 containerd[1582]: time="2025-07-09T13:12:30.180179594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Jul 9 13:12:31.057439 kubelet[2740]: E0709 13:12:31.057386 2740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hqgmx" podUID="2d4b2d60-521c-4619-9873-7765068c2eae"
Jul 9 13:12:33.057595 kubelet[2740]: E0709 13:12:33.057542 2740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hqgmx" podUID="2d4b2d60-521c-4619-9873-7765068c2eae"
Jul 9 13:12:35.059901 kubelet[2740]: E0709 13:12:35.059843 2740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hqgmx" podUID="2d4b2d60-521c-4619-9873-7765068c2eae"
Jul 9 13:12:35.462365 containerd[1582]: time="2025-07-09T13:12:35.462297015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:12:35.463093 containerd[1582]: time="2025-07-09T13:12:35.463031014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221"
Jul 9 13:12:35.464366 containerd[1582]: time="2025-07-09T13:12:35.464330196Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:12:35.466516 containerd[1582]: time="2025-07-09T13:12:35.466462424Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 13:12:35.467059 containerd[1582]: time="2025-07-09T13:12:35.467024440Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 5.286792328s"
Jul 9 13:12:35.467059 containerd[1582]: time="2025-07-09T13:12:35.467055398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\""
Jul 9 13:12:35.472804 containerd[1582]: time="2025-07-09T13:12:35.472757546Z" level=info msg="CreateContainer within sandbox \"55e2bd2284c8a1ea7ecf1f3e7554c4098dab6a70ebe28522fa0eeb13096c9050\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jul 9 13:12:35.482682 containerd[1582]: time="2025-07-09T13:12:35.482625784Z" level=info msg="Container 1f7c8d451f579e92583092d6e3241ddcff5b827f23f31bcd2db6c9ceb3cb9c10: CDI devices from CRI Config.CDIDevices: []"
Jul 9 13:12:35.495412 containerd[1582]: time="2025-07-09T13:12:35.495353898Z" level=info msg="CreateContainer within sandbox \"55e2bd2284c8a1ea7ecf1f3e7554c4098dab6a70ebe28522fa0eeb13096c9050\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1f7c8d451f579e92583092d6e3241ddcff5b827f23f31bcd2db6c9ceb3cb9c10\""
Jul 9 13:12:35.495857 containerd[1582]: time="2025-07-09T13:12:35.495825555Z" level=info msg="StartContainer for \"1f7c8d451f579e92583092d6e3241ddcff5b827f23f31bcd2db6c9ceb3cb9c10\""
Jul 9 13:12:35.498546 containerd[1582]: time="2025-07-09T13:12:35.498497357Z" level=info msg="connecting to shim 1f7c8d451f579e92583092d6e3241ddcff5b827f23f31bcd2db6c9ceb3cb9c10" address="unix:///run/containerd/s/e3b995f3a2ff50f53882b8ec7f88983c84a2ca9a0d4dfc405a8296aaac3364c8" protocol=ttrpc version=3
Jul 9 13:12:35.536445 systemd[1]: Started cri-containerd-1f7c8d451f579e92583092d6e3241ddcff5b827f23f31bcd2db6c9ceb3cb9c10.scope - libcontainer container 1f7c8d451f579e92583092d6e3241ddcff5b827f23f31bcd2db6c9ceb3cb9c10.
Jul 9 13:12:35.583353 containerd[1582]: time="2025-07-09T13:12:35.583305132Z" level=info msg="StartContainer for \"1f7c8d451f579e92583092d6e3241ddcff5b827f23f31bcd2db6c9ceb3cb9c10\" returns successfully"
Jul 9 13:12:37.004144 systemd[1]: cri-containerd-1f7c8d451f579e92583092d6e3241ddcff5b827f23f31bcd2db6c9ceb3cb9c10.scope: Deactivated successfully.
Jul 9 13:12:37.004508 systemd[1]: cri-containerd-1f7c8d451f579e92583092d6e3241ddcff5b827f23f31bcd2db6c9ceb3cb9c10.scope: Consumed 632ms CPU time, 175.3M memory peak, 3.1M read from disk, 171.2M written to disk.
Jul 9 13:12:37.006265 containerd[1582]: time="2025-07-09T13:12:37.005944352Z" level=info msg="received exit event container_id:\"1f7c8d451f579e92583092d6e3241ddcff5b827f23f31bcd2db6c9ceb3cb9c10\" id:\"1f7c8d451f579e92583092d6e3241ddcff5b827f23f31bcd2db6c9ceb3cb9c10\" pid:3542 exited_at:{seconds:1752066757 nanos:5725932}"
Jul 9 13:12:37.006265 containerd[1582]: time="2025-07-09T13:12:37.006004064Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f7c8d451f579e92583092d6e3241ddcff5b827f23f31bcd2db6c9ceb3cb9c10\" id:\"1f7c8d451f579e92583092d6e3241ddcff5b827f23f31bcd2db6c9ceb3cb9c10\" pid:3542 exited_at:{seconds:1752066757 nanos:5725932}"
Jul 9 13:12:37.028750 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f7c8d451f579e92583092d6e3241ddcff5b827f23f31bcd2db6c9ceb3cb9c10-rootfs.mount: Deactivated successfully.
Jul 9 13:12:37.050211 kubelet[2740]: I0709 13:12:37.050164 2740 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 9 13:12:37.066567 systemd[1]: Created slice kubepods-besteffort-pod2d4b2d60_521c_4619_9873_7765068c2eae.slice - libcontainer container kubepods-besteffort-pod2d4b2d60_521c_4619_9873_7765068c2eae.slice.
Jul 9 13:12:37.069089 containerd[1582]: time="2025-07-09T13:12:37.069044361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hqgmx,Uid:2d4b2d60-521c-4619-9873-7765068c2eae,Namespace:calico-system,Attempt:0,}"
Jul 9 13:12:37.183607 systemd[1]: Created slice kubepods-besteffort-pod49893d27_f7b5_4c17_84a6_fe21e163be5c.slice - libcontainer container kubepods-besteffort-pod49893d27_f7b5_4c17_84a6_fe21e163be5c.slice.
Jul 9 13:12:37.202220 systemd[1]: Created slice kubepods-besteffort-pod7421a15b_bca1_4ab8_80b3_1f8653a0c6e0.slice - libcontainer container kubepods-besteffort-pod7421a15b_bca1_4ab8_80b3_1f8653a0c6e0.slice.
Jul 9 13:12:37.209752 systemd[1]: Created slice kubepods-besteffort-pode1daf40f_15e0_47a5_80e3_a298a5e667e5.slice - libcontainer container kubepods-besteffort-pode1daf40f_15e0_47a5_80e3_a298a5e667e5.slice.
Jul 9 13:12:37.215316 containerd[1582]: time="2025-07-09T13:12:37.215281667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Jul 9 13:12:37.220016 systemd[1]: Created slice kubepods-besteffort-pod83c1d61b_e6f8_4e2c_9e06_eda1b3ca4eb0.slice - libcontainer container kubepods-besteffort-pod83c1d61b_e6f8_4e2c_9e06_eda1b3ca4eb0.slice.
Jul 9 13:12:37.232067 systemd[1]: Created slice kubepods-burstable-pod9d5b0f59_c5bf_4645_84e1_dc5cb96628a5.slice - libcontainer container kubepods-burstable-pod9d5b0f59_c5bf_4645_84e1_dc5cb96628a5.slice.
Jul 9 13:12:37.239149 systemd[1]: Created slice kubepods-besteffort-podbb191e2f_c363_4484_90a4_9625a2d502f6.slice - libcontainer container kubepods-besteffort-podbb191e2f_c363_4484_90a4_9625a2d502f6.slice.
Jul 9 13:12:37.247916 systemd[1]: Created slice kubepods-burstable-podd93773c1_6988_4e7a_96c1_518e39a227cb.slice - libcontainer container kubepods-burstable-podd93773c1_6988_4e7a_96c1_518e39a227cb.slice.
Jul 9 13:12:37.267698 kubelet[2740]: I0709 13:12:37.267520 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d93773c1-6988-4e7a-96c1-518e39a227cb-config-volume\") pod \"coredns-674b8bbfcf-9pcn2\" (UID: \"d93773c1-6988-4e7a-96c1-518e39a227cb\") " pod="kube-system/coredns-674b8bbfcf-9pcn2"
Jul 9 13:12:37.267698 kubelet[2740]: I0709 13:12:37.267578 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7mk6\" (UniqueName: \"kubernetes.io/projected/d93773c1-6988-4e7a-96c1-518e39a227cb-kube-api-access-j7mk6\") pod \"coredns-674b8bbfcf-9pcn2\" (UID: \"d93773c1-6988-4e7a-96c1-518e39a227cb\") " pod="kube-system/coredns-674b8bbfcf-9pcn2"
Jul 9 13:12:37.267698 kubelet[2740]: I0709 13:12:37.267603 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx4fx\" (UniqueName: \"kubernetes.io/projected/83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0-kube-api-access-jx4fx\") pod \"whisker-5c89f47c4-gmj6b\" (UID: \"83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0\") " pod="calico-system/whisker-5c89f47c4-gmj6b"
Jul 9 13:12:37.267698 kubelet[2740]: I0709 13:12:37.267638 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49893d27-f7b5-4c17-84a6-fe21e163be5c-tigera-ca-bundle\") pod \"calico-kube-controllers-76484857f5-49hts\" (UID: \"49893d27-f7b5-4c17-84a6-fe21e163be5c\") " pod="calico-system/calico-kube-controllers-76484857f5-49hts"
Jul 9 13:12:37.267698 kubelet[2740]: I0709 13:12:37.267657 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1daf40f-15e0-47a5-80e3-a298a5e667e5-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-6bz26\" (UID: \"e1daf40f-15e0-47a5-80e3-a298a5e667e5\") " pod="calico-system/goldmane-768f4c5c69-6bz26"
Jul 9 13:12:37.268117 kubelet[2740]: I0709 13:12:37.267680 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkjg7\" (UniqueName: \"kubernetes.io/projected/e1daf40f-15e0-47a5-80e3-a298a5e667e5-kube-api-access-dkjg7\") pod \"goldmane-768f4c5c69-6bz26\" (UID: \"e1daf40f-15e0-47a5-80e3-a298a5e667e5\") " pod="calico-system/goldmane-768f4c5c69-6bz26"
Jul 9 13:12:37.268117 kubelet[2740]: I0709 13:12:37.267700 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0-whisker-ca-bundle\") pod \"whisker-5c89f47c4-gmj6b\" (UID: \"83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0\") " pod="calico-system/whisker-5c89f47c4-gmj6b"
Jul 9 13:12:37.268117 kubelet[2740]: I0709 13:12:37.267726 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7421a15b-bca1-4ab8-80b3-1f8653a0c6e0-calico-apiserver-certs\") pod \"calico-apiserver-54d8dc767f-ngpsf\" (UID: \"7421a15b-bca1-4ab8-80b3-1f8653a0c6e0\") " pod="calico-apiserver/calico-apiserver-54d8dc767f-ngpsf"
Jul 9 13:12:37.268117 kubelet[2740]: I0709 13:12:37.267746 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5nkv\" (UniqueName: \"kubernetes.io/projected/49893d27-f7b5-4c17-84a6-fe21e163be5c-kube-api-access-p5nkv\") pod \"calico-kube-controllers-76484857f5-49hts\" (UID: \"49893d27-f7b5-4c17-84a6-fe21e163be5c\") " pod="calico-system/calico-kube-controllers-76484857f5-49hts"
Jul 9 13:12:37.268117 kubelet[2740]: I0709 13:12:37.267763 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e1daf40f-15e0-47a5-80e3-a298a5e667e5-goldmane-key-pair\") pod \"goldmane-768f4c5c69-6bz26\" (UID: \"e1daf40f-15e0-47a5-80e3-a298a5e667e5\") " pod="calico-system/goldmane-768f4c5c69-6bz26"
Jul 9 13:12:37.268376 kubelet[2740]: I0709 13:12:37.267787 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1daf40f-15e0-47a5-80e3-a298a5e667e5-config\") pod \"goldmane-768f4c5c69-6bz26\" (UID: \"e1daf40f-15e0-47a5-80e3-a298a5e667e5\") " pod="calico-system/goldmane-768f4c5c69-6bz26"
Jul 9 13:12:37.268376 kubelet[2740]: I0709 13:12:37.267807 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0-whisker-backend-key-pair\") pod \"whisker-5c89f47c4-gmj6b\" (UID: \"83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0\") " pod="calico-system/whisker-5c89f47c4-gmj6b"
Jul 9 13:12:37.268376 kubelet[2740]: I0709 13:12:37.267832 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbm45\" (UniqueName: \"kubernetes.io/projected/7421a15b-bca1-4ab8-80b3-1f8653a0c6e0-kube-api-access-cbm45\") pod \"calico-apiserver-54d8dc767f-ngpsf\" (UID: \"7421a15b-bca1-4ab8-80b3-1f8653a0c6e0\") " pod="calico-apiserver/calico-apiserver-54d8dc767f-ngpsf"
Jul 9 13:12:37.294285 containerd[1582]: time="2025-07-09T13:12:37.294167227Z" level=error msg="Failed to destroy network for sandbox \"0189d9fe2216950bf72943523129c990546575258a20c6b7546777ed6a16e95b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 9 13:12:37.296934 containerd[1582]: time="2025-07-09T13:12:37.296874624Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hqgmx,Uid:2d4b2d60-521c-4619-9873-7765068c2eae,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0189d9fe2216950bf72943523129c990546575258a20c6b7546777ed6a16e95b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 9 13:12:37.298300 systemd[1]: run-netns-cni\x2d9c39aad3\x2d037e\x2d0c27\x2d57cd\x2d3832d78723c6.mount: Deactivated successfully.
Jul 9 13:12:37.303873 kubelet[2740]: E0709 13:12:37.303812 2740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0189d9fe2216950bf72943523129c990546575258a20c6b7546777ed6a16e95b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 9 13:12:37.303950 kubelet[2740]: E0709 13:12:37.303902 2740 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0189d9fe2216950bf72943523129c990546575258a20c6b7546777ed6a16e95b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hqgmx"
Jul 9 13:12:37.304565 kubelet[2740]: E0709 13:12:37.304534 2740 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0189d9fe2216950bf72943523129c990546575258a20c6b7546777ed6a16e95b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hqgmx"
Jul 9 13:12:37.304656 kubelet[2740]: E0709 13:12:37.304624 2740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hqgmx_calico-system(2d4b2d60-521c-4619-9873-7765068c2eae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hqgmx_calico-system(2d4b2d60-521c-4619-9873-7765068c2eae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0189d9fe2216950bf72943523129c990546575258a20c6b7546777ed6a16e95b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hqgmx" podUID="2d4b2d60-521c-4619-9873-7765068c2eae"
Jul 9 13:12:37.369384 kubelet[2740]: I0709 13:12:37.369304 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d5b0f59-c5bf-4645-84e1-dc5cb96628a5-config-volume\") pod \"coredns-674b8bbfcf-kc5tm\" (UID: \"9d5b0f59-c5bf-4645-84e1-dc5cb96628a5\") " pod="kube-system/coredns-674b8bbfcf-kc5tm"
Jul 9 13:12:37.369569 kubelet[2740]: I0709 13:12:37.369410 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bb191e2f-c363-4484-90a4-9625a2d502f6-calico-apiserver-certs\") pod \"calico-apiserver-54d8dc767f-nlltf\" (UID: \"bb191e2f-c363-4484-90a4-9625a2d502f6\") " pod="calico-apiserver/calico-apiserver-54d8dc767f-nlltf"
Jul 9 13:12:37.369569 kubelet[2740]: I0709 13:12:37.369476 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lmz2\" (UniqueName: \"kubernetes.io/projected/9d5b0f59-c5bf-4645-84e1-dc5cb96628a5-kube-api-access-2lmz2\") pod \"coredns-674b8bbfcf-kc5tm\" (UID: \"9d5b0f59-c5bf-4645-84e1-dc5cb96628a5\") " pod="kube-system/coredns-674b8bbfcf-kc5tm"
Jul 9 13:12:37.369569 kubelet[2740]: I0709 13:12:37.369554 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58jwf\" (UniqueName: \"kubernetes.io/projected/bb191e2f-c363-4484-90a4-9625a2d502f6-kube-api-access-58jwf\") pod \"calico-apiserver-54d8dc767f-nlltf\" (UID: \"bb191e2f-c363-4484-90a4-9625a2d502f6\") " pod="calico-apiserver/calico-apiserver-54d8dc767f-nlltf"
Jul 9 13:12:37.496123 containerd[1582]: time="2025-07-09T13:12:37.496061837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76484857f5-49hts,Uid:49893d27-f7b5-4c17-84a6-fe21e163be5c,Namespace:calico-system,Attempt:0,}"
Jul 9 13:12:37.510509 containerd[1582]: time="2025-07-09T13:12:37.510463879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54d8dc767f-ngpsf,Uid:7421a15b-bca1-4ab8-80b3-1f8653a0c6e0,Namespace:calico-apiserver,Attempt:0,}"
Jul 9 13:12:37.515329 containerd[1582]: time="2025-07-09T13:12:37.515276222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-6bz26,Uid:e1daf40f-15e0-47a5-80e3-a298a5e667e5,Namespace:calico-system,Attempt:0,}"
Jul 9 13:12:37.527072 containerd[1582]: time="2025-07-09T13:12:37.526984341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c89f47c4-gmj6b,Uid:83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0,Namespace:calico-system,Attempt:0,}"
Jul 9 13:12:37.537462 kubelet[2740]: E0709 13:12:37.537423 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 13:12:37.538007 containerd[1582]: time="2025-07-09T13:12:37.537940970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kc5tm,Uid:9d5b0f59-c5bf-4645-84e1-dc5cb96628a5,Namespace:kube-system,Attempt:0,}"
Jul 9 13:12:37.546683 containerd[1582]: time="2025-07-09T13:12:37.546650245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54d8dc767f-nlltf,Uid:bb191e2f-c363-4484-90a4-9625a2d502f6,Namespace:calico-apiserver,Attempt:0,}"
Jul 9 13:12:37.555102 kubelet[2740]: E0709 13:12:37.555075 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 13:12:37.555583 containerd[1582]: time="2025-07-09T13:12:37.555526504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9pcn2,Uid:d93773c1-6988-4e7a-96c1-518e39a227cb,Namespace:kube-system,Attempt:0,}"
Jul 9 13:12:37.882003 containerd[1582]: time="2025-07-09T13:12:37.881294687Z" level=error msg="Failed to destroy network for sandbox \"fa63f5e71c63448e8ad97d440e59bef2f93f0757ab80fa9661e98a09659cb184\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 9 13:12:37.884012 containerd[1582]: time="2025-07-09T13:12:37.883952802Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76484857f5-49hts,Uid:49893d27-f7b5-4c17-84a6-fe21e163be5c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa63f5e71c63448e8ad97d440e59bef2f93f0757ab80fa9661e98a09659cb184\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 9 13:12:37.886974 kubelet[2740]: E0709 13:12:37.885407 2740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa63f5e71c63448e8ad97d440e59bef2f93f0757ab80fa9661e98a09659cb184\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 9 13:12:37.886974 kubelet[2740]: E0709 13:12:37.885477 2740 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa63f5e71c63448e8ad97d440e59bef2f93f0757ab80fa9661e98a09659cb184\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76484857f5-49hts"
Jul 9 13:12:37.886974 kubelet[2740]: E0709 13:12:37.885497 2740 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa63f5e71c63448e8ad97d440e59bef2f93f0757ab80fa9661e98a09659cb184\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76484857f5-49hts"
Jul 9 13:12:37.887336 kubelet[2740]: E0709 13:12:37.885555 2740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76484857f5-49hts_calico-system(49893d27-f7b5-4c17-84a6-fe21e163be5c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76484857f5-49hts_calico-system(49893d27-f7b5-4c17-84a6-fe21e163be5c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa63f5e71c63448e8ad97d440e59bef2f93f0757ab80fa9661e98a09659cb184\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76484857f5-49hts" podUID="49893d27-f7b5-4c17-84a6-fe21e163be5c"
Jul 9 13:12:37.888530 containerd[1582]: time="2025-07-09T13:12:37.888477955Z" level=error msg="Failed to destroy network for sandbox \"54346889a80fdbba5f08ba68e10e405cfbad183896c81a6b184584a167776daa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 9 13:12:37.896175 containerd[1582]: time="2025-07-09T13:12:37.896115156Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-6bz26,Uid:e1daf40f-15e0-47a5-80e3-a298a5e667e5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"54346889a80fdbba5f08ba68e10e405cfbad183896c81a6b184584a167776daa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 9 13:12:37.897078 kubelet[2740]: E0709 13:12:37.896383 2740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54346889a80fdbba5f08ba68e10e405cfbad183896c81a6b184584a167776daa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 9 13:12:37.897078 kubelet[2740]: E0709 13:12:37.896446 2740 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54346889a80fdbba5f08ba68e10e405cfbad183896c81a6b184584a167776daa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-6bz26"
Jul 9 13:12:37.897078 kubelet[2740]: E0709 13:12:37.896470 2740 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54346889a80fdbba5f08ba68e10e405cfbad183896c81a6b184584a167776daa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-6bz26"
Jul 9 13:12:37.897359 kubelet[2740]: E0709 13:12:37.896530 2740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-6bz26_calico-system(e1daf40f-15e0-47a5-80e3-a298a5e667e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-6bz26_calico-system(e1daf40f-15e0-47a5-80e3-a298a5e667e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54346889a80fdbba5f08ba68e10e405cfbad183896c81a6b184584a167776daa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-6bz26" podUID="e1daf40f-15e0-47a5-80e3-a298a5e667e5"
Jul 9 13:12:37.902506 containerd[1582]: time="2025-07-09T13:12:37.902440321Z" level=error msg="Failed to destroy network for sandbox \"2b8f2df438793ace2c23d7435956a4f246e5d1e943b5724b8fa8546eed4d9eac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 9 13:12:37.905506 containerd[1582]: time="2025-07-09T13:12:37.905221096Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54d8dc767f-nlltf,Uid:bb191e2f-c363-4484-90a4-9625a2d502f6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b8f2df438793ace2c23d7435956a4f246e5d1e943b5724b8fa8546eed4d9eac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 9 13:12:37.905691 kubelet[2740]: E0709 13:12:37.905644 2740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b8f2df438793ace2c23d7435956a4f246e5d1e943b5724b8fa8546eed4d9eac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 9 13:12:37.905753 kubelet[2740]: E0709 13:12:37.905718 2740 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b8f2df438793ace2c23d7435956a4f246e5d1e943b5724b8fa8546eed4d9eac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54d8dc767f-nlltf"
Jul 9 13:12:37.905789 kubelet[2740]: E0709 13:12:37.905748 2740 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b8f2df438793ace2c23d7435956a4f246e5d1e943b5724b8fa8546eed4d9eac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54d8dc767f-nlltf"
Jul 9 13:12:37.905834 kubelet[2740]: E0709 13:12:37.905807 2740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54d8dc767f-nlltf_calico-apiserver(bb191e2f-c363-4484-90a4-9625a2d502f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54d8dc767f-nlltf_calico-apiserver(bb191e2f-c363-4484-90a4-9625a2d502f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b8f2df438793ace2c23d7435956a4f246e5d1e943b5724b8fa8546eed4d9eac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54d8dc767f-nlltf" podUID="bb191e2f-c363-4484-90a4-9625a2d502f6"
Jul 9 13:12:37.907144 containerd[1582]: time="2025-07-09T13:12:37.907101951Z" level=error msg="Failed to destroy network for sandbox \"9c69a41ed0d28d28189a51bde5a9022d9d1b03dc247cad1dddbdd3e263cab4aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 9 13:12:37.908634 containerd[1582]: time="2025-07-09T13:12:37.908541586Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54d8dc767f-ngpsf,Uid:7421a15b-bca1-4ab8-80b3-1f8653a0c6e0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c69a41ed0d28d28189a51bde5a9022d9d1b03dc247cad1dddbdd3e263cab4aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 9 13:12:37.908980 kubelet[2740]: E0709 13:12:37.908940 2740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c69a41ed0d28d28189a51bde5a9022d9d1b03dc247cad1dddbdd3e263cab4aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 9 13:12:37.909114 kubelet[2740]: E0709 13:12:37.908997 2740 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c69a41ed0d28d28189a51bde5a9022d9d1b03dc247cad1dddbdd3e263cab4aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54d8dc767f-ngpsf"
Jul 9 13:12:37.909114 kubelet[2740]: E0709 13:12:37.909016 2740 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c69a41ed0d28d28189a51bde5a9022d9d1b03dc247cad1dddbdd3e263cab4aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54d8dc767f-ngpsf"
Jul 9 13:12:37.909114 kubelet[2740]: E0709 13:12:37.909067 2740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54d8dc767f-ngpsf_calico-apiserver(7421a15b-bca1-4ab8-80b3-1f8653a0c6e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54d8dc767f-ngpsf_calico-apiserver(7421a15b-bca1-4ab8-80b3-1f8653a0c6e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c69a41ed0d28d28189a51bde5a9022d9d1b03dc247cad1dddbdd3e263cab4aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54d8dc767f-ngpsf" podUID="7421a15b-bca1-4ab8-80b3-1f8653a0c6e0"
Jul 9 13:12:37.913094 containerd[1582]: time="2025-07-09T13:12:37.913030060Z" level=error msg="Failed to destroy network for sandbox
\"ab44429eeb3e9b9308eb3baaab4aac0477de67b364e65f115c8499e7b5a383ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:12:37.913632 containerd[1582]: time="2025-07-09T13:12:37.913588169Z" level=error msg="Failed to destroy network for sandbox \"e9f02903bf602d5343483319ea6be18903b95c0d2773ddde9cc6858ec18f7fad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:12:37.924257 containerd[1582]: time="2025-07-09T13:12:37.924202243Z" level=error msg="Failed to destroy network for sandbox \"e8df038265c6d1005d47472d991f60a671dc5d8062c102eaa1d1021cf6ed10ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:12:37.953920 containerd[1582]: time="2025-07-09T13:12:37.953859271Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9pcn2,Uid:d93773c1-6988-4e7a-96c1-518e39a227cb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab44429eeb3e9b9308eb3baaab4aac0477de67b364e65f115c8499e7b5a383ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:12:37.954221 kubelet[2740]: E0709 13:12:37.954179 2740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab44429eeb3e9b9308eb3baaab4aac0477de67b364e65f115c8499e7b5a383ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 9 13:12:37.954359 kubelet[2740]: E0709 13:12:37.954263 2740 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab44429eeb3e9b9308eb3baaab4aac0477de67b364e65f115c8499e7b5a383ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-9pcn2" Jul 9 13:12:37.954359 kubelet[2740]: E0709 13:12:37.954288 2740 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab44429eeb3e9b9308eb3baaab4aac0477de67b364e65f115c8499e7b5a383ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-9pcn2" Jul 9 13:12:37.954564 kubelet[2740]: E0709 13:12:37.954362 2740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-9pcn2_kube-system(d93773c1-6988-4e7a-96c1-518e39a227cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-9pcn2_kube-system(d93773c1-6988-4e7a-96c1-518e39a227cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab44429eeb3e9b9308eb3baaab4aac0477de67b364e65f115c8499e7b5a383ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-9pcn2" podUID="d93773c1-6988-4e7a-96c1-518e39a227cb" Jul 9 13:12:37.956200 containerd[1582]: time="2025-07-09T13:12:37.955854399Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-kc5tm,Uid:9d5b0f59-c5bf-4645-84e1-dc5cb96628a5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9f02903bf602d5343483319ea6be18903b95c0d2773ddde9cc6858ec18f7fad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:12:37.956356 kubelet[2740]: E0709 13:12:37.956120 2740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9f02903bf602d5343483319ea6be18903b95c0d2773ddde9cc6858ec18f7fad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:12:37.956356 kubelet[2740]: E0709 13:12:37.956157 2740 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9f02903bf602d5343483319ea6be18903b95c0d2773ddde9cc6858ec18f7fad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-kc5tm" Jul 9 13:12:37.956356 kubelet[2740]: E0709 13:12:37.956184 2740 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9f02903bf602d5343483319ea6be18903b95c0d2773ddde9cc6858ec18f7fad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-kc5tm" Jul 9 13:12:37.956481 kubelet[2740]: E0709 13:12:37.956279 2740 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-kc5tm_kube-system(9d5b0f59-c5bf-4645-84e1-dc5cb96628a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-kc5tm_kube-system(9d5b0f59-c5bf-4645-84e1-dc5cb96628a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9f02903bf602d5343483319ea6be18903b95c0d2773ddde9cc6858ec18f7fad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-kc5tm" podUID="9d5b0f59-c5bf-4645-84e1-dc5cb96628a5" Jul 9 13:12:37.957501 containerd[1582]: time="2025-07-09T13:12:37.957400775Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c89f47c4-gmj6b,Uid:83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8df038265c6d1005d47472d991f60a671dc5d8062c102eaa1d1021cf6ed10ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:12:37.957676 kubelet[2740]: E0709 13:12:37.957647 2740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8df038265c6d1005d47472d991f60a671dc5d8062c102eaa1d1021cf6ed10ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:12:37.957676 kubelet[2740]: E0709 13:12:37.957679 2740 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8df038265c6d1005d47472d991f60a671dc5d8062c102eaa1d1021cf6ed10ea\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c89f47c4-gmj6b" Jul 9 13:12:37.957812 kubelet[2740]: E0709 13:12:37.957696 2740 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8df038265c6d1005d47472d991f60a671dc5d8062c102eaa1d1021cf6ed10ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c89f47c4-gmj6b" Jul 9 13:12:37.957812 kubelet[2740]: E0709 13:12:37.957782 2740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5c89f47c4-gmj6b_calico-system(83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5c89f47c4-gmj6b_calico-system(83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8df038265c6d1005d47472d991f60a671dc5d8062c102eaa1d1021cf6ed10ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c89f47c4-gmj6b" podUID="83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0" Jul 9 13:12:39.979617 systemd[1]: Started sshd@7-10.0.0.120:22-10.0.0.1:54242.service - OpenSSH per-connection server daemon (10.0.0.1:54242). Jul 9 13:12:40.026962 sshd[3854]: Accepted publickey for core from 10.0.0.1 port 54242 ssh2: RSA SHA256:HAtV86zLPP9RIVggm/FTobjBgXFwFP5d+SRakJDDvWM Jul 9 13:12:40.028351 sshd-session[3854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:12:40.033442 systemd-logind[1562]: New session 8 of user core. 
Jul 9 13:12:40.039414 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 9 13:12:40.167992 sshd[3858]: Connection closed by 10.0.0.1 port 54242 Jul 9 13:12:40.168432 sshd-session[3854]: pam_unix(sshd:session): session closed for user core Jul 9 13:12:40.173337 systemd[1]: sshd@7-10.0.0.120:22-10.0.0.1:54242.service: Deactivated successfully. Jul 9 13:12:40.175785 systemd[1]: session-8.scope: Deactivated successfully. Jul 9 13:12:40.176877 systemd-logind[1562]: Session 8 logged out. Waiting for processes to exit. Jul 9 13:12:40.178517 systemd-logind[1562]: Removed session 8. Jul 9 13:12:45.184727 systemd[1]: Started sshd@8-10.0.0.120:22-10.0.0.1:54258.service - OpenSSH per-connection server daemon (10.0.0.1:54258). Jul 9 13:12:45.299230 sshd[3872]: Accepted publickey for core from 10.0.0.1 port 54258 ssh2: RSA SHA256:HAtV86zLPP9RIVggm/FTobjBgXFwFP5d+SRakJDDvWM Jul 9 13:12:45.300902 sshd-session[3872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:12:45.305722 systemd-logind[1562]: New session 9 of user core. Jul 9 13:12:45.310378 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 9 13:12:45.431075 sshd[3879]: Connection closed by 10.0.0.1 port 54258 Jul 9 13:12:45.433157 sshd-session[3872]: pam_unix(sshd:session): session closed for user core Jul 9 13:12:45.438520 systemd[1]: sshd@8-10.0.0.120:22-10.0.0.1:54258.service: Deactivated successfully. Jul 9 13:12:45.440865 systemd[1]: session-9.scope: Deactivated successfully. Jul 9 13:12:45.442684 systemd-logind[1562]: Session 9 logged out. Waiting for processes to exit. Jul 9 13:12:45.444439 systemd-logind[1562]: Removed session 9. Jul 9 13:12:47.236685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3640521024.mount: Deactivated successfully. 
Jul 9 13:12:49.033902 containerd[1582]: time="2025-07-09T13:12:49.033824721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:12:49.035704 containerd[1582]: time="2025-07-09T13:12:49.035642172Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 9 13:12:49.037587 containerd[1582]: time="2025-07-09T13:12:49.037546717Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:12:49.039622 containerd[1582]: time="2025-07-09T13:12:49.039572860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:12:49.040039 containerd[1582]: time="2025-07-09T13:12:49.039999720Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 11.82453491s" Jul 9 13:12:49.040039 containerd[1582]: time="2025-07-09T13:12:49.040029767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 9 13:12:49.059993 containerd[1582]: time="2025-07-09T13:12:49.059937442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-6bz26,Uid:e1daf40f-15e0-47a5-80e3-a298a5e667e5,Namespace:calico-system,Attempt:0,}" Jul 9 13:12:49.060832 containerd[1582]: time="2025-07-09T13:12:49.060756398Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-5c89f47c4-gmj6b,Uid:83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0,Namespace:calico-system,Attempt:0,}" Jul 9 13:12:49.064163 containerd[1582]: time="2025-07-09T13:12:49.064095104Z" level=info msg="CreateContainer within sandbox \"55e2bd2284c8a1ea7ecf1f3e7554c4098dab6a70ebe28522fa0eeb13096c9050\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 9 13:12:49.125912 containerd[1582]: time="2025-07-09T13:12:49.123744549Z" level=info msg="Container c60fbab2e6ac953b25430243167fdacf82e3508d839c83d833531eb43285c90e: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:12:49.128359 containerd[1582]: time="2025-07-09T13:12:49.128301601Z" level=error msg="Failed to destroy network for sandbox \"83becea70847d5e3f0f454d58474157a7b01aadefe4c08d1fa77d531450eb7fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:12:49.133810 containerd[1582]: time="2025-07-09T13:12:49.128344843Z" level=error msg="Failed to destroy network for sandbox \"0ff7ab4bccee2e7fd7abdc210399abe3f37b89f753a75b6956f77b22272cb088\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:12:49.165207 containerd[1582]: time="2025-07-09T13:12:49.165122179Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-6bz26,Uid:e1daf40f-15e0-47a5-80e3-a298a5e667e5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"83becea70847d5e3f0f454d58474157a7b01aadefe4c08d1fa77d531450eb7fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:12:49.166514 kubelet[2740]: E0709 
13:12:49.165524 2740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83becea70847d5e3f0f454d58474157a7b01aadefe4c08d1fa77d531450eb7fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:12:49.166514 kubelet[2740]: E0709 13:12:49.165584 2740 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83becea70847d5e3f0f454d58474157a7b01aadefe4c08d1fa77d531450eb7fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-6bz26" Jul 9 13:12:49.166514 kubelet[2740]: E0709 13:12:49.165603 2740 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83becea70847d5e3f0f454d58474157a7b01aadefe4c08d1fa77d531450eb7fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-6bz26" Jul 9 13:12:49.185163 kubelet[2740]: E0709 13:12:49.165657 2740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-6bz26_calico-system(e1daf40f-15e0-47a5-80e3-a298a5e667e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-6bz26_calico-system(e1daf40f-15e0-47a5-80e3-a298a5e667e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83becea70847d5e3f0f454d58474157a7b01aadefe4c08d1fa77d531450eb7fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-6bz26" podUID="e1daf40f-15e0-47a5-80e3-a298a5e667e5" Jul 9 13:12:49.188167 containerd[1582]: time="2025-07-09T13:12:49.188125142Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c89f47c4-gmj6b,Uid:83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ff7ab4bccee2e7fd7abdc210399abe3f37b89f753a75b6956f77b22272cb088\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:12:49.188873 kubelet[2740]: E0709 13:12:49.188358 2740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ff7ab4bccee2e7fd7abdc210399abe3f37b89f753a75b6956f77b22272cb088\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 13:12:49.188873 kubelet[2740]: E0709 13:12:49.188445 2740 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ff7ab4bccee2e7fd7abdc210399abe3f37b89f753a75b6956f77b22272cb088\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c89f47c4-gmj6b" Jul 9 13:12:49.188873 kubelet[2740]: E0709 13:12:49.188474 2740 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ff7ab4bccee2e7fd7abdc210399abe3f37b89f753a75b6956f77b22272cb088\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c89f47c4-gmj6b" Jul 9 13:12:49.188961 kubelet[2740]: E0709 13:12:49.188546 2740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5c89f47c4-gmj6b_calico-system(83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5c89f47c4-gmj6b_calico-system(83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ff7ab4bccee2e7fd7abdc210399abe3f37b89f753a75b6956f77b22272cb088\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c89f47c4-gmj6b" podUID="83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0" Jul 9 13:12:49.678604 containerd[1582]: time="2025-07-09T13:12:49.678537191Z" level=info msg="CreateContainer within sandbox \"55e2bd2284c8a1ea7ecf1f3e7554c4098dab6a70ebe28522fa0eeb13096c9050\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c60fbab2e6ac953b25430243167fdacf82e3508d839c83d833531eb43285c90e\"" Jul 9 13:12:49.679123 containerd[1582]: time="2025-07-09T13:12:49.679065042Z" level=info msg="StartContainer for \"c60fbab2e6ac953b25430243167fdacf82e3508d839c83d833531eb43285c90e\"" Jul 9 13:12:49.681020 containerd[1582]: time="2025-07-09T13:12:49.680984965Z" level=info msg="connecting to shim c60fbab2e6ac953b25430243167fdacf82e3508d839c83d833531eb43285c90e" address="unix:///run/containerd/s/e3b995f3a2ff50f53882b8ec7f88983c84a2ca9a0d4dfc405a8296aaac3364c8" protocol=ttrpc version=3 Jul 9 13:12:49.710488 systemd[1]: Started cri-containerd-c60fbab2e6ac953b25430243167fdacf82e3508d839c83d833531eb43285c90e.scope - libcontainer container 
c60fbab2e6ac953b25430243167fdacf82e3508d839c83d833531eb43285c90e. Jul 9 13:12:49.786738 containerd[1582]: time="2025-07-09T13:12:49.786684819Z" level=info msg="StartContainer for \"c60fbab2e6ac953b25430243167fdacf82e3508d839c83d833531eb43285c90e\" returns successfully" Jul 9 13:12:49.834523 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 9 13:12:49.835340 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 9 13:12:50.046807 systemd[1]: run-netns-cni\x2d47bdc122\x2d89a1\x2d1158\x2d6106\x2dea9d22dd21fa.mount: Deactivated successfully. Jul 9 13:12:50.046908 systemd[1]: run-netns-cni\x2d44b2332a\x2d2103\x2dcd77\x2d9d79\x2d95977200582f.mount: Deactivated successfully. Jul 9 13:12:50.052108 kubelet[2740]: I0709 13:12:50.052061 2740 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0-whisker-backend-key-pair\") pod \"83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0\" (UID: \"83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0\") " Jul 9 13:12:50.052108 kubelet[2740]: I0709 13:12:50.052098 2740 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jx4fx\" (UniqueName: \"kubernetes.io/projected/83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0-kube-api-access-jx4fx\") pod \"83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0\" (UID: \"83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0\") " Jul 9 13:12:50.052324 kubelet[2740]: I0709 13:12:50.052118 2740 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0-whisker-ca-bundle\") pod \"83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0\" (UID: \"83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0\") " Jul 9 13:12:50.053651 kubelet[2740]: I0709 13:12:50.053612 2740 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0" (UID: "83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 9 13:12:50.057918 kubelet[2740]: I0709 13:12:50.057581 2740 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0" (UID: "83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 9 13:12:50.057918 kubelet[2740]: E0709 13:12:50.057642 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:50.058287 kubelet[2740]: I0709 13:12:50.058222 2740 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0-kube-api-access-jx4fx" (OuterVolumeSpecName: "kube-api-access-jx4fx") pod "83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0" (UID: "83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0"). InnerVolumeSpecName "kube-api-access-jx4fx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 9 13:12:50.058836 systemd[1]: var-lib-kubelet-pods-83c1d61b\x2de6f8\x2d4e2c\x2d9e06\x2deda1b3ca4eb0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djx4fx.mount: Deactivated successfully. Jul 9 13:12:50.059153 systemd[1]: var-lib-kubelet-pods-83c1d61b\x2de6f8\x2d4e2c\x2d9e06\x2deda1b3ca4eb0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 9 13:12:50.061540 containerd[1582]: time="2025-07-09T13:12:50.061494062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kc5tm,Uid:9d5b0f59-c5bf-4645-84e1-dc5cb96628a5,Namespace:kube-system,Attempt:0,}" Jul 9 13:12:50.153024 kubelet[2740]: I0709 13:12:50.152976 2740 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 9 13:12:50.153024 kubelet[2740]: I0709 13:12:50.153022 2740 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jx4fx\" (UniqueName: \"kubernetes.io/projected/83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0-kube-api-access-jx4fx\") on node \"localhost\" DevicePath \"\"" Jul 9 13:12:50.153024 kubelet[2740]: I0709 13:12:50.153034 2740 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 9 13:12:50.210571 systemd-networkd[1499]: cali9a6618a4775: Link UP Jul 9 13:12:50.210834 systemd-networkd[1499]: cali9a6618a4775: Gained carrier Jul 9 13:12:50.227993 containerd[1582]: 2025-07-09 13:12:50.084 [INFO][4023] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 9 13:12:50.227993 containerd[1582]: 2025-07-09 13:12:50.105 [INFO][4023] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--kc5tm-eth0 coredns-674b8bbfcf- kube-system 9d5b0f59-c5bf-4645-84e1-dc5cb96628a5 855 0 2025-07-09 13:12:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-kc5tm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] 
cali9a6618a4775 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="620c45e418bf91aa4df259e21f3cc3bdb1ff38654115473049bdf0617b30a0b4" Namespace="kube-system" Pod="coredns-674b8bbfcf-kc5tm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kc5tm-" Jul 9 13:12:50.227993 containerd[1582]: 2025-07-09 13:12:50.105 [INFO][4023] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="620c45e418bf91aa4df259e21f3cc3bdb1ff38654115473049bdf0617b30a0b4" Namespace="kube-system" Pod="coredns-674b8bbfcf-kc5tm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kc5tm-eth0" Jul 9 13:12:50.227993 containerd[1582]: 2025-07-09 13:12:50.168 [INFO][4037] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="620c45e418bf91aa4df259e21f3cc3bdb1ff38654115473049bdf0617b30a0b4" HandleID="k8s-pod-network.620c45e418bf91aa4df259e21f3cc3bdb1ff38654115473049bdf0617b30a0b4" Workload="localhost-k8s-coredns--674b8bbfcf--kc5tm-eth0" Jul 9 13:12:50.228302 containerd[1582]: 2025-07-09 13:12:50.169 [INFO][4037] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="620c45e418bf91aa4df259e21f3cc3bdb1ff38654115473049bdf0617b30a0b4" HandleID="k8s-pod-network.620c45e418bf91aa4df259e21f3cc3bdb1ff38654115473049bdf0617b30a0b4" Workload="localhost-k8s-coredns--674b8bbfcf--kc5tm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f3f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-kc5tm", "timestamp":"2025-07-09 13:12:50.168420356 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 13:12:50.228302 containerd[1582]: 2025-07-09 13:12:50.169 [INFO][4037] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 9 13:12:50.228302 containerd[1582]: 2025-07-09 13:12:50.169 [INFO][4037] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 13:12:50.228302 containerd[1582]: 2025-07-09 13:12:50.169 [INFO][4037] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 13:12:50.228302 containerd[1582]: 2025-07-09 13:12:50.176 [INFO][4037] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.620c45e418bf91aa4df259e21f3cc3bdb1ff38654115473049bdf0617b30a0b4" host="localhost" Jul 9 13:12:50.228302 containerd[1582]: 2025-07-09 13:12:50.181 [INFO][4037] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 13:12:50.228302 containerd[1582]: 2025-07-09 13:12:50.185 [INFO][4037] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 13:12:50.228302 containerd[1582]: 2025-07-09 13:12:50.186 [INFO][4037] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 13:12:50.228302 containerd[1582]: 2025-07-09 13:12:50.189 [INFO][4037] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 9 13:12:50.228302 containerd[1582]: 2025-07-09 13:12:50.189 [INFO][4037] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.620c45e418bf91aa4df259e21f3cc3bdb1ff38654115473049bdf0617b30a0b4" host="localhost" Jul 9 13:12:50.228617 containerd[1582]: 2025-07-09 13:12:50.190 [INFO][4037] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.620c45e418bf91aa4df259e21f3cc3bdb1ff38654115473049bdf0617b30a0b4 Jul 9 13:12:50.228617 containerd[1582]: 2025-07-09 13:12:50.194 [INFO][4037] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.620c45e418bf91aa4df259e21f3cc3bdb1ff38654115473049bdf0617b30a0b4" host="localhost" Jul 9 13:12:50.228617 containerd[1582]: 2025-07-09 13:12:50.199 [INFO][4037] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.620c45e418bf91aa4df259e21f3cc3bdb1ff38654115473049bdf0617b30a0b4" host="localhost" Jul 9 13:12:50.228617 containerd[1582]: 2025-07-09 13:12:50.199 [INFO][4037] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.620c45e418bf91aa4df259e21f3cc3bdb1ff38654115473049bdf0617b30a0b4" host="localhost" Jul 9 13:12:50.228617 containerd[1582]: 2025-07-09 13:12:50.199 [INFO][4037] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 13:12:50.228617 containerd[1582]: 2025-07-09 13:12:50.199 [INFO][4037] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="620c45e418bf91aa4df259e21f3cc3bdb1ff38654115473049bdf0617b30a0b4" HandleID="k8s-pod-network.620c45e418bf91aa4df259e21f3cc3bdb1ff38654115473049bdf0617b30a0b4" Workload="localhost-k8s-coredns--674b8bbfcf--kc5tm-eth0" Jul 9 13:12:50.228784 containerd[1582]: 2025-07-09 13:12:50.202 [INFO][4023] cni-plugin/k8s.go 418: Populated endpoint ContainerID="620c45e418bf91aa4df259e21f3cc3bdb1ff38654115473049bdf0617b30a0b4" Namespace="kube-system" Pod="coredns-674b8bbfcf-kc5tm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kc5tm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--kc5tm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9d5b0f59-c5bf-4645-84e1-dc5cb96628a5", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 13, 12, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-kc5tm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9a6618a4775", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 13:12:50.228885 containerd[1582]: 2025-07-09 13:12:50.202 [INFO][4023] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="620c45e418bf91aa4df259e21f3cc3bdb1ff38654115473049bdf0617b30a0b4" Namespace="kube-system" Pod="coredns-674b8bbfcf-kc5tm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kc5tm-eth0" Jul 9 13:12:50.228885 containerd[1582]: 2025-07-09 13:12:50.202 [INFO][4023] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9a6618a4775 ContainerID="620c45e418bf91aa4df259e21f3cc3bdb1ff38654115473049bdf0617b30a0b4" Namespace="kube-system" Pod="coredns-674b8bbfcf-kc5tm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kc5tm-eth0" Jul 9 13:12:50.228885 containerd[1582]: 2025-07-09 13:12:50.210 [INFO][4023] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="620c45e418bf91aa4df259e21f3cc3bdb1ff38654115473049bdf0617b30a0b4" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-kc5tm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kc5tm-eth0" Jul 9 13:12:50.228980 containerd[1582]: 2025-07-09 13:12:50.210 [INFO][4023] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="620c45e418bf91aa4df259e21f3cc3bdb1ff38654115473049bdf0617b30a0b4" Namespace="kube-system" Pod="coredns-674b8bbfcf-kc5tm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kc5tm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--kc5tm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9d5b0f59-c5bf-4645-84e1-dc5cb96628a5", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 13, 12, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"620c45e418bf91aa4df259e21f3cc3bdb1ff38654115473049bdf0617b30a0b4", Pod:"coredns-674b8bbfcf-kc5tm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9a6618a4775", MAC:"86:cb:35:78:e0:b9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 13:12:50.228980 containerd[1582]: 2025-07-09 13:12:50.220 [INFO][4023] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="620c45e418bf91aa4df259e21f3cc3bdb1ff38654115473049bdf0617b30a0b4" Namespace="kube-system" Pod="coredns-674b8bbfcf-kc5tm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kc5tm-eth0" Jul 9 13:12:50.277381 systemd[1]: Removed slice kubepods-besteffort-pod83c1d61b_e6f8_4e2c_9e06_eda1b3ca4eb0.slice - libcontainer container kubepods-besteffort-pod83c1d61b_e6f8_4e2c_9e06_eda1b3ca4eb0.slice. Jul 9 13:12:50.292454 containerd[1582]: time="2025-07-09T13:12:50.292389414Z" level=info msg="connecting to shim 620c45e418bf91aa4df259e21f3cc3bdb1ff38654115473049bdf0617b30a0b4" address="unix:///run/containerd/s/641f02a6018b1e7a9def954d37501df8c2bc4d03df09a3444fde2d2b620ab095" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:12:50.326639 systemd[1]: Started cri-containerd-620c45e418bf91aa4df259e21f3cc3bdb1ff38654115473049bdf0617b30a0b4.scope - libcontainer container 620c45e418bf91aa4df259e21f3cc3bdb1ff38654115473049bdf0617b30a0b4. 
Jul 9 13:12:50.348112 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 13:12:50.392196 containerd[1582]: time="2025-07-09T13:12:50.392146784Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c60fbab2e6ac953b25430243167fdacf82e3508d839c83d833531eb43285c90e\" id:\"7bd90ac08ca4af308e799f21280ab6de5194dee92d215a5121694426c3bc718b\" pid:4089 exit_status:1 exited_at:{seconds:1752066770 nanos:391788522}" Jul 9 13:12:50.448366 systemd[1]: Started sshd@9-10.0.0.120:22-10.0.0.1:48682.service - OpenSSH per-connection server daemon (10.0.0.1:48682). Jul 9 13:12:50.610750 containerd[1582]: time="2025-07-09T13:12:50.610694804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kc5tm,Uid:9d5b0f59-c5bf-4645-84e1-dc5cb96628a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"620c45e418bf91aa4df259e21f3cc3bdb1ff38654115473049bdf0617b30a0b4\"" Jul 9 13:12:50.627373 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 48682 ssh2: RSA SHA256:HAtV86zLPP9RIVggm/FTobjBgXFwFP5d+SRakJDDvWM Jul 9 13:12:50.628889 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:12:50.632928 systemd-logind[1562]: New session 10 of user core. Jul 9 13:12:50.639666 kubelet[2740]: E0709 13:12:50.639621 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:50.640810 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 9 13:12:50.725173 containerd[1582]: time="2025-07-09T13:12:50.725117166Z" level=info msg="CreateContainer within sandbox \"620c45e418bf91aa4df259e21f3cc3bdb1ff38654115473049bdf0617b30a0b4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 9 13:12:50.741536 kubelet[2740]: I0709 13:12:50.741367 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-66wmv" podStartSLOduration=2.7038708380000003 podStartE2EDuration="26.741347887s" podCreationTimestamp="2025-07-09 13:12:24 +0000 UTC" firstStartedPulling="2025-07-09 13:12:25.003309469 +0000 UTC m=+26.044356812" lastFinishedPulling="2025-07-09 13:12:49.040786518 +0000 UTC m=+50.081833861" observedRunningTime="2025-07-09 13:12:50.741227411 +0000 UTC m=+51.782274754" watchObservedRunningTime="2025-07-09 13:12:50.741347887 +0000 UTC m=+51.782395230" Jul 9 13:12:50.789774 sshd[4122]: Connection closed by 10.0.0.1 port 48682 Jul 9 13:12:50.792365 sshd-session[4119]: pam_unix(sshd:session): session closed for user core Jul 9 13:12:50.797420 systemd[1]: sshd@9-10.0.0.120:22-10.0.0.1:48682.service: Deactivated successfully. Jul 9 13:12:50.801196 systemd[1]: session-10.scope: Deactivated successfully. Jul 9 13:12:50.802946 containerd[1582]: time="2025-07-09T13:12:50.802882652Z" level=info msg="Container d83cacb4aa019cc5e31b7abbc12b8367b81197227def30aebd5854ac31c57021: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:12:50.804581 systemd-logind[1562]: Session 10 logged out. Waiting for processes to exit. Jul 9 13:12:50.805730 systemd-logind[1562]: Removed session 10. 
Jul 9 13:12:50.810308 containerd[1582]: time="2025-07-09T13:12:50.810211577Z" level=info msg="CreateContainer within sandbox \"620c45e418bf91aa4df259e21f3cc3bdb1ff38654115473049bdf0617b30a0b4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d83cacb4aa019cc5e31b7abbc12b8367b81197227def30aebd5854ac31c57021\"" Jul 9 13:12:50.812270 containerd[1582]: time="2025-07-09T13:12:50.811282798Z" level=info msg="StartContainer for \"d83cacb4aa019cc5e31b7abbc12b8367b81197227def30aebd5854ac31c57021\"" Jul 9 13:12:50.812270 containerd[1582]: time="2025-07-09T13:12:50.812085303Z" level=info msg="connecting to shim d83cacb4aa019cc5e31b7abbc12b8367b81197227def30aebd5854ac31c57021" address="unix:///run/containerd/s/641f02a6018b1e7a9def954d37501df8c2bc4d03df09a3444fde2d2b620ab095" protocol=ttrpc version=3 Jul 9 13:12:50.851556 systemd[1]: Started cri-containerd-d83cacb4aa019cc5e31b7abbc12b8367b81197227def30aebd5854ac31c57021.scope - libcontainer container d83cacb4aa019cc5e31b7abbc12b8367b81197227def30aebd5854ac31c57021. Jul 9 13:12:50.852754 systemd[1]: Created slice kubepods-besteffort-pod91217b80_d474_4ca4_a6c5_72a163449645.slice - libcontainer container kubepods-besteffort-pod91217b80_d474_4ca4_a6c5_72a163449645.slice. 
Jul 9 13:12:50.886839 containerd[1582]: time="2025-07-09T13:12:50.886716267Z" level=info msg="StartContainer for \"d83cacb4aa019cc5e31b7abbc12b8367b81197227def30aebd5854ac31c57021\" returns successfully" Jul 9 13:12:50.962490 kubelet[2740]: I0709 13:12:50.962423 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdvzj\" (UniqueName: \"kubernetes.io/projected/91217b80-d474-4ca4-a6c5-72a163449645-kube-api-access-rdvzj\") pod \"whisker-5c56489cbb-shmtp\" (UID: \"91217b80-d474-4ca4-a6c5-72a163449645\") " pod="calico-system/whisker-5c56489cbb-shmtp" Jul 9 13:12:50.962490 kubelet[2740]: I0709 13:12:50.962476 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/91217b80-d474-4ca4-a6c5-72a163449645-whisker-backend-key-pair\") pod \"whisker-5c56489cbb-shmtp\" (UID: \"91217b80-d474-4ca4-a6c5-72a163449645\") " pod="calico-system/whisker-5c56489cbb-shmtp" Jul 9 13:12:50.962490 kubelet[2740]: I0709 13:12:50.962496 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91217b80-d474-4ca4-a6c5-72a163449645-whisker-ca-bundle\") pod \"whisker-5c56489cbb-shmtp\" (UID: \"91217b80-d474-4ca4-a6c5-72a163449645\") " pod="calico-system/whisker-5c56489cbb-shmtp" Jul 9 13:12:51.061679 kubelet[2740]: I0709 13:12:51.061571 2740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0" path="/var/lib/kubelet/pods/83c1d61b-e6f8-4e2c-9e06-eda1b3ca4eb0/volumes" Jul 9 13:12:51.063264 kubelet[2740]: E0709 13:12:51.062681 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:51.063751 containerd[1582]: 
time="2025-07-09T13:12:51.063540745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9pcn2,Uid:d93773c1-6988-4e7a-96c1-518e39a227cb,Namespace:kube-system,Attempt:0,}" Jul 9 13:12:51.070485 containerd[1582]: time="2025-07-09T13:12:51.068734242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hqgmx,Uid:2d4b2d60-521c-4619-9873-7765068c2eae,Namespace:calico-system,Attempt:0,}" Jul 9 13:12:51.157212 containerd[1582]: time="2025-07-09T13:12:51.157081077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c56489cbb-shmtp,Uid:91217b80-d474-4ca4-a6c5-72a163449645,Namespace:calico-system,Attempt:0,}" Jul 9 13:12:51.264183 kubelet[2740]: E0709 13:12:51.263862 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:51.315416 systemd-networkd[1499]: cali9a6618a4775: Gained IPv6LL Jul 9 13:12:51.382117 containerd[1582]: time="2025-07-09T13:12:51.382050805Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c60fbab2e6ac953b25430243167fdacf82e3508d839c83d833531eb43285c90e\" id:\"d79079fb1a4a6a3ffe2a12a88e1af15503b6d3bec457ae67037977a5335acc98\" pid:4313 exit_status:1 exited_at:{seconds:1752066771 nanos:381468662}" Jul 9 13:12:51.449434 kubelet[2740]: I0709 13:12:51.449217 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-kc5tm" podStartSLOduration=45.449197064 podStartE2EDuration="45.449197064s" podCreationTimestamp="2025-07-09 13:12:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 13:12:51.362447216 +0000 UTC m=+52.403494559" watchObservedRunningTime="2025-07-09 13:12:51.449197064 +0000 UTC m=+52.490244407" Jul 9 13:12:51.560465 systemd-networkd[1499]: cali589537e2cca: Link UP Jul 9 13:12:51.560697 
systemd-networkd[1499]: cali589537e2cca: Gained carrier Jul 9 13:12:51.581680 containerd[1582]: 2025-07-09 13:12:51.131 [INFO][4211] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 9 13:12:51.581680 containerd[1582]: 2025-07-09 13:12:51.352 [INFO][4211] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--9pcn2-eth0 coredns-674b8bbfcf- kube-system d93773c1-6988-4e7a-96c1-518e39a227cb 860 0 2025-07-09 13:12:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-9pcn2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali589537e2cca [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d07689bf575bc4c7044168e0173a7b3dd3fda30f6888d05c00b6d7c6dfd0ea04" Namespace="kube-system" Pod="coredns-674b8bbfcf-9pcn2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9pcn2-" Jul 9 13:12:51.581680 containerd[1582]: 2025-07-09 13:12:51.352 [INFO][4211] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d07689bf575bc4c7044168e0173a7b3dd3fda30f6888d05c00b6d7c6dfd0ea04" Namespace="kube-system" Pod="coredns-674b8bbfcf-9pcn2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9pcn2-eth0" Jul 9 13:12:51.581680 containerd[1582]: 2025-07-09 13:12:51.399 [INFO][4339] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d07689bf575bc4c7044168e0173a7b3dd3fda30f6888d05c00b6d7c6dfd0ea04" HandleID="k8s-pod-network.d07689bf575bc4c7044168e0173a7b3dd3fda30f6888d05c00b6d7c6dfd0ea04" Workload="localhost-k8s-coredns--674b8bbfcf--9pcn2-eth0" Jul 9 13:12:51.581680 containerd[1582]: 2025-07-09 13:12:51.399 [INFO][4339] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d07689bf575bc4c7044168e0173a7b3dd3fda30f6888d05c00b6d7c6dfd0ea04" 
HandleID="k8s-pod-network.d07689bf575bc4c7044168e0173a7b3dd3fda30f6888d05c00b6d7c6dfd0ea04" Workload="localhost-k8s-coredns--674b8bbfcf--9pcn2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00012aaa0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-9pcn2", "timestamp":"2025-07-09 13:12:51.399151787 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 13:12:51.581680 containerd[1582]: 2025-07-09 13:12:51.399 [INFO][4339] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 13:12:51.581680 containerd[1582]: 2025-07-09 13:12:51.399 [INFO][4339] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 13:12:51.581680 containerd[1582]: 2025-07-09 13:12:51.399 [INFO][4339] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 13:12:51.581680 containerd[1582]: 2025-07-09 13:12:51.436 [INFO][4339] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d07689bf575bc4c7044168e0173a7b3dd3fda30f6888d05c00b6d7c6dfd0ea04" host="localhost" Jul 9 13:12:51.581680 containerd[1582]: 2025-07-09 13:12:51.450 [INFO][4339] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 13:12:51.581680 containerd[1582]: 2025-07-09 13:12:51.525 [INFO][4339] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 13:12:51.581680 containerd[1582]: 2025-07-09 13:12:51.529 [INFO][4339] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 13:12:51.581680 containerd[1582]: 2025-07-09 13:12:51.533 [INFO][4339] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 9 13:12:51.581680 containerd[1582]: 2025-07-09 13:12:51.533 [INFO][4339] 
ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d07689bf575bc4c7044168e0173a7b3dd3fda30f6888d05c00b6d7c6dfd0ea04" host="localhost" Jul 9 13:12:51.581680 containerd[1582]: 2025-07-09 13:12:51.538 [INFO][4339] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d07689bf575bc4c7044168e0173a7b3dd3fda30f6888d05c00b6d7c6dfd0ea04 Jul 9 13:12:51.581680 containerd[1582]: 2025-07-09 13:12:51.547 [INFO][4339] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d07689bf575bc4c7044168e0173a7b3dd3fda30f6888d05c00b6d7c6dfd0ea04" host="localhost" Jul 9 13:12:51.581680 containerd[1582]: 2025-07-09 13:12:51.552 [INFO][4339] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.d07689bf575bc4c7044168e0173a7b3dd3fda30f6888d05c00b6d7c6dfd0ea04" host="localhost" Jul 9 13:12:51.581680 containerd[1582]: 2025-07-09 13:12:51.553 [INFO][4339] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d07689bf575bc4c7044168e0173a7b3dd3fda30f6888d05c00b6d7c6dfd0ea04" host="localhost" Jul 9 13:12:51.581680 containerd[1582]: 2025-07-09 13:12:51.553 [INFO][4339] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 9 13:12:51.581680 containerd[1582]: 2025-07-09 13:12:51.553 [INFO][4339] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d07689bf575bc4c7044168e0173a7b3dd3fda30f6888d05c00b6d7c6dfd0ea04" HandleID="k8s-pod-network.d07689bf575bc4c7044168e0173a7b3dd3fda30f6888d05c00b6d7c6dfd0ea04" Workload="localhost-k8s-coredns--674b8bbfcf--9pcn2-eth0" Jul 9 13:12:51.582513 containerd[1582]: 2025-07-09 13:12:51.556 [INFO][4211] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d07689bf575bc4c7044168e0173a7b3dd3fda30f6888d05c00b6d7c6dfd0ea04" Namespace="kube-system" Pod="coredns-674b8bbfcf-9pcn2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9pcn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--9pcn2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d93773c1-6988-4e7a-96c1-518e39a227cb", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 13, 12, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-9pcn2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali589537e2cca", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 13:12:51.582513 containerd[1582]: 2025-07-09 13:12:51.557 [INFO][4211] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d07689bf575bc4c7044168e0173a7b3dd3fda30f6888d05c00b6d7c6dfd0ea04" Namespace="kube-system" Pod="coredns-674b8bbfcf-9pcn2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9pcn2-eth0" Jul 9 13:12:51.582513 containerd[1582]: 2025-07-09 13:12:51.557 [INFO][4211] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali589537e2cca ContainerID="d07689bf575bc4c7044168e0173a7b3dd3fda30f6888d05c00b6d7c6dfd0ea04" Namespace="kube-system" Pod="coredns-674b8bbfcf-9pcn2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9pcn2-eth0" Jul 9 13:12:51.582513 containerd[1582]: 2025-07-09 13:12:51.560 [INFO][4211] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d07689bf575bc4c7044168e0173a7b3dd3fda30f6888d05c00b6d7c6dfd0ea04" Namespace="kube-system" Pod="coredns-674b8bbfcf-9pcn2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9pcn2-eth0" Jul 9 13:12:51.582513 containerd[1582]: 2025-07-09 13:12:51.561 [INFO][4211] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d07689bf575bc4c7044168e0173a7b3dd3fda30f6888d05c00b6d7c6dfd0ea04" Namespace="kube-system" Pod="coredns-674b8bbfcf-9pcn2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9pcn2-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--9pcn2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d93773c1-6988-4e7a-96c1-518e39a227cb", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 13, 12, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d07689bf575bc4c7044168e0173a7b3dd3fda30f6888d05c00b6d7c6dfd0ea04", Pod:"coredns-674b8bbfcf-9pcn2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali589537e2cca", MAC:"42:50:c6:c5:24:cf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 13:12:51.582513 containerd[1582]: 2025-07-09 13:12:51.578 [INFO][4211] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d07689bf575bc4c7044168e0173a7b3dd3fda30f6888d05c00b6d7c6dfd0ea04" Namespace="kube-system" Pod="coredns-674b8bbfcf-9pcn2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9pcn2-eth0" Jul 9 13:12:51.609711 containerd[1582]: time="2025-07-09T13:12:51.609657250Z" level=info msg="connecting to shim d07689bf575bc4c7044168e0173a7b3dd3fda30f6888d05c00b6d7c6dfd0ea04" address="unix:///run/containerd/s/7305133221c0c3cc124fcac343d1582eb00397a5b74c35b4ced2a0b26698854e" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:12:51.639377 systemd[1]: Started cri-containerd-d07689bf575bc4c7044168e0173a7b3dd3fda30f6888d05c00b6d7c6dfd0ea04.scope - libcontainer container d07689bf575bc4c7044168e0173a7b3dd3fda30f6888d05c00b6d7c6dfd0ea04. Jul 9 13:12:51.657388 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 13:12:51.686352 systemd-networkd[1499]: calicd89a46a224: Link UP Jul 9 13:12:51.689405 systemd-networkd[1499]: calicd89a46a224: Gained carrier Jul 9 13:12:51.700663 containerd[1582]: time="2025-07-09T13:12:51.700569820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9pcn2,Uid:d93773c1-6988-4e7a-96c1-518e39a227cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"d07689bf575bc4c7044168e0173a7b3dd3fda30f6888d05c00b6d7c6dfd0ea04\"" Jul 9 13:12:51.702431 kubelet[2740]: E0709 13:12:51.702392 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:51.705563 containerd[1582]: 2025-07-09 13:12:51.128 [INFO][4189] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 9 13:12:51.705563 containerd[1582]: 2025-07-09 13:12:51.148 [INFO][4189] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--hqgmx-eth0 csi-node-driver- calico-system 
2d4b2d60-521c-4619-9873-7765068c2eae 727 0 2025-07-09 13:12:24 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-hqgmx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicd89a46a224 [] [] }} ContainerID="56ff009f7ec6c05ed67f25b61dbe659c5f610c7def8a7734a67a1d8345484635" Namespace="calico-system" Pod="csi-node-driver-hqgmx" WorkloadEndpoint="localhost-k8s-csi--node--driver--hqgmx-" Jul 9 13:12:51.705563 containerd[1582]: 2025-07-09 13:12:51.149 [INFO][4189] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="56ff009f7ec6c05ed67f25b61dbe659c5f610c7def8a7734a67a1d8345484635" Namespace="calico-system" Pod="csi-node-driver-hqgmx" WorkloadEndpoint="localhost-k8s-csi--node--driver--hqgmx-eth0" Jul 9 13:12:51.705563 containerd[1582]: 2025-07-09 13:12:51.407 [INFO][4336] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="56ff009f7ec6c05ed67f25b61dbe659c5f610c7def8a7734a67a1d8345484635" HandleID="k8s-pod-network.56ff009f7ec6c05ed67f25b61dbe659c5f610c7def8a7734a67a1d8345484635" Workload="localhost-k8s-csi--node--driver--hqgmx-eth0" Jul 9 13:12:51.705563 containerd[1582]: 2025-07-09 13:12:51.407 [INFO][4336] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="56ff009f7ec6c05ed67f25b61dbe659c5f610c7def8a7734a67a1d8345484635" HandleID="k8s-pod-network.56ff009f7ec6c05ed67f25b61dbe659c5f610c7def8a7734a67a1d8345484635" Workload="localhost-k8s-csi--node--driver--hqgmx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b3da0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-hqgmx", "timestamp":"2025-07-09 13:12:51.407576547 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 13:12:51.705563 containerd[1582]: 2025-07-09 13:12:51.407 [INFO][4336] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 13:12:51.705563 containerd[1582]: 2025-07-09 13:12:51.554 [INFO][4336] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 13:12:51.705563 containerd[1582]: 2025-07-09 13:12:51.556 [INFO][4336] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 13:12:51.705563 containerd[1582]: 2025-07-09 13:12:51.585 [INFO][4336] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.56ff009f7ec6c05ed67f25b61dbe659c5f610c7def8a7734a67a1d8345484635" host="localhost" Jul 9 13:12:51.705563 containerd[1582]: 2025-07-09 13:12:51.646 [INFO][4336] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 13:12:51.705563 containerd[1582]: 2025-07-09 13:12:51.657 [INFO][4336] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 13:12:51.705563 containerd[1582]: 2025-07-09 13:12:51.660 [INFO][4336] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 13:12:51.705563 containerd[1582]: 2025-07-09 13:12:51.663 [INFO][4336] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 9 13:12:51.705563 containerd[1582]: 2025-07-09 13:12:51.664 [INFO][4336] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.56ff009f7ec6c05ed67f25b61dbe659c5f610c7def8a7734a67a1d8345484635" host="localhost" Jul 9 13:12:51.705563 containerd[1582]: 2025-07-09 13:12:51.665 [INFO][4336] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.56ff009f7ec6c05ed67f25b61dbe659c5f610c7def8a7734a67a1d8345484635 
Jul 9 13:12:51.705563 containerd[1582]: 2025-07-09 13:12:51.671 [INFO][4336] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.56ff009f7ec6c05ed67f25b61dbe659c5f610c7def8a7734a67a1d8345484635" host="localhost" Jul 9 13:12:51.705563 containerd[1582]: 2025-07-09 13:12:51.677 [INFO][4336] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.56ff009f7ec6c05ed67f25b61dbe659c5f610c7def8a7734a67a1d8345484635" host="localhost" Jul 9 13:12:51.705563 containerd[1582]: 2025-07-09 13:12:51.677 [INFO][4336] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.56ff009f7ec6c05ed67f25b61dbe659c5f610c7def8a7734a67a1d8345484635" host="localhost" Jul 9 13:12:51.705563 containerd[1582]: 2025-07-09 13:12:51.677 [INFO][4336] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 13:12:51.705563 containerd[1582]: 2025-07-09 13:12:51.677 [INFO][4336] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="56ff009f7ec6c05ed67f25b61dbe659c5f610c7def8a7734a67a1d8345484635" HandleID="k8s-pod-network.56ff009f7ec6c05ed67f25b61dbe659c5f610c7def8a7734a67a1d8345484635" Workload="localhost-k8s-csi--node--driver--hqgmx-eth0" Jul 9 13:12:51.707667 containerd[1582]: 2025-07-09 13:12:51.682 [INFO][4189] cni-plugin/k8s.go 418: Populated endpoint ContainerID="56ff009f7ec6c05ed67f25b61dbe659c5f610c7def8a7734a67a1d8345484635" Namespace="calico-system" Pod="csi-node-driver-hqgmx" WorkloadEndpoint="localhost-k8s-csi--node--driver--hqgmx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hqgmx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2d4b2d60-521c-4619-9873-7765068c2eae", ResourceVersion:"727", Generation:0, 
CreationTimestamp:time.Date(2025, time.July, 9, 13, 12, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-hqgmx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicd89a46a224", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 13:12:51.707667 containerd[1582]: 2025-07-09 13:12:51.682 [INFO][4189] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="56ff009f7ec6c05ed67f25b61dbe659c5f610c7def8a7734a67a1d8345484635" Namespace="calico-system" Pod="csi-node-driver-hqgmx" WorkloadEndpoint="localhost-k8s-csi--node--driver--hqgmx-eth0" Jul 9 13:12:51.707667 containerd[1582]: 2025-07-09 13:12:51.682 [INFO][4189] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicd89a46a224 ContainerID="56ff009f7ec6c05ed67f25b61dbe659c5f610c7def8a7734a67a1d8345484635" Namespace="calico-system" Pod="csi-node-driver-hqgmx" WorkloadEndpoint="localhost-k8s-csi--node--driver--hqgmx-eth0" Jul 9 13:12:51.707667 containerd[1582]: 2025-07-09 13:12:51.687 [INFO][4189] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="56ff009f7ec6c05ed67f25b61dbe659c5f610c7def8a7734a67a1d8345484635" Namespace="calico-system" Pod="csi-node-driver-hqgmx" WorkloadEndpoint="localhost-k8s-csi--node--driver--hqgmx-eth0" Jul 9 13:12:51.707667 containerd[1582]: 2025-07-09 13:12:51.688 [INFO][4189] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="56ff009f7ec6c05ed67f25b61dbe659c5f610c7def8a7734a67a1d8345484635" Namespace="calico-system" Pod="csi-node-driver-hqgmx" WorkloadEndpoint="localhost-k8s-csi--node--driver--hqgmx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hqgmx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2d4b2d60-521c-4619-9873-7765068c2eae", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 13, 12, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"56ff009f7ec6c05ed67f25b61dbe659c5f610c7def8a7734a67a1d8345484635", Pod:"csi-node-driver-hqgmx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicd89a46a224", MAC:"46:ce:4a:04:a1:64", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 13:12:51.707667 containerd[1582]: 2025-07-09 13:12:51.700 [INFO][4189] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="56ff009f7ec6c05ed67f25b61dbe659c5f610c7def8a7734a67a1d8345484635" Namespace="calico-system" Pod="csi-node-driver-hqgmx" WorkloadEndpoint="localhost-k8s-csi--node--driver--hqgmx-eth0" Jul 9 13:12:51.707862 containerd[1582]: time="2025-07-09T13:12:51.707821779Z" level=info msg="CreateContainer within sandbox \"d07689bf575bc4c7044168e0173a7b3dd3fda30f6888d05c00b6d7c6dfd0ea04\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 9 13:12:51.721704 containerd[1582]: time="2025-07-09T13:12:51.721652074Z" level=info msg="Container 8e83892f314fa19bcd2ccfaee93a9ae6a8c207cdd8718b81eb2ddd8f38c80ac7: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:12:51.729082 containerd[1582]: time="2025-07-09T13:12:51.729047913Z" level=info msg="CreateContainer within sandbox \"d07689bf575bc4c7044168e0173a7b3dd3fda30f6888d05c00b6d7c6dfd0ea04\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8e83892f314fa19bcd2ccfaee93a9ae6a8c207cdd8718b81eb2ddd8f38c80ac7\"" Jul 9 13:12:51.729582 containerd[1582]: time="2025-07-09T13:12:51.729552891Z" level=info msg="StartContainer for \"8e83892f314fa19bcd2ccfaee93a9ae6a8c207cdd8718b81eb2ddd8f38c80ac7\"" Jul 9 13:12:51.730632 containerd[1582]: time="2025-07-09T13:12:51.730599424Z" level=info msg="connecting to shim 8e83892f314fa19bcd2ccfaee93a9ae6a8c207cdd8718b81eb2ddd8f38c80ac7" address="unix:///run/containerd/s/7305133221c0c3cc124fcac343d1582eb00397a5b74c35b4ced2a0b26698854e" protocol=ttrpc version=3 Jul 9 13:12:51.755397 systemd[1]: Started cri-containerd-8e83892f314fa19bcd2ccfaee93a9ae6a8c207cdd8718b81eb2ddd8f38c80ac7.scope - libcontainer container 8e83892f314fa19bcd2ccfaee93a9ae6a8c207cdd8718b81eb2ddd8f38c80ac7. 
Jul 9 13:12:51.760710 containerd[1582]: time="2025-07-09T13:12:51.760667442Z" level=info msg="connecting to shim 56ff009f7ec6c05ed67f25b61dbe659c5f610c7def8a7734a67a1d8345484635" address="unix:///run/containerd/s/f4a05ff28a20d157d2d41b7d1875dc9562ec878da375731f897d035d3fdcc769" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:12:51.788813 systemd-networkd[1499]: cali28b63fef2bb: Link UP Jul 9 13:12:51.789388 systemd-networkd[1499]: cali28b63fef2bb: Gained carrier Jul 9 13:12:51.792385 systemd[1]: Started cri-containerd-56ff009f7ec6c05ed67f25b61dbe659c5f610c7def8a7734a67a1d8345484635.scope - libcontainer container 56ff009f7ec6c05ed67f25b61dbe659c5f610c7def8a7734a67a1d8345484635. Jul 9 13:12:51.806671 containerd[1582]: 2025-07-09 13:12:51.403 [INFO][4325] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 9 13:12:51.806671 containerd[1582]: 2025-07-09 13:12:51.451 [INFO][4325] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5c56489cbb--shmtp-eth0 whisker-5c56489cbb- calico-system 91217b80-d474-4ca4-a6c5-72a163449645 1011 0 2025-07-09 13:12:50 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5c56489cbb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5c56489cbb-shmtp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali28b63fef2bb [] [] }} ContainerID="d232387ae88b867c50d5567e2e2bb1f5c3450d81548adf083bc16707dc0db5aa" Namespace="calico-system" Pod="whisker-5c56489cbb-shmtp" WorkloadEndpoint="localhost-k8s-whisker--5c56489cbb--shmtp-" Jul 9 13:12:51.806671 containerd[1582]: 2025-07-09 13:12:51.451 [INFO][4325] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d232387ae88b867c50d5567e2e2bb1f5c3450d81548adf083bc16707dc0db5aa" Namespace="calico-system" Pod="whisker-5c56489cbb-shmtp" 
WorkloadEndpoint="localhost-k8s-whisker--5c56489cbb--shmtp-eth0" Jul 9 13:12:51.806671 containerd[1582]: 2025-07-09 13:12:51.545 [INFO][4380] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d232387ae88b867c50d5567e2e2bb1f5c3450d81548adf083bc16707dc0db5aa" HandleID="k8s-pod-network.d232387ae88b867c50d5567e2e2bb1f5c3450d81548adf083bc16707dc0db5aa" Workload="localhost-k8s-whisker--5c56489cbb--shmtp-eth0" Jul 9 13:12:51.806671 containerd[1582]: 2025-07-09 13:12:51.546 [INFO][4380] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d232387ae88b867c50d5567e2e2bb1f5c3450d81548adf083bc16707dc0db5aa" HandleID="k8s-pod-network.d232387ae88b867c50d5567e2e2bb1f5c3450d81548adf083bc16707dc0db5aa" Workload="localhost-k8s-whisker--5c56489cbb--shmtp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00021e910), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5c56489cbb-shmtp", "timestamp":"2025-07-09 13:12:51.545962878 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 13:12:51.806671 containerd[1582]: 2025-07-09 13:12:51.546 [INFO][4380] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 13:12:51.806671 containerd[1582]: 2025-07-09 13:12:51.677 [INFO][4380] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 13:12:51.806671 containerd[1582]: 2025-07-09 13:12:51.678 [INFO][4380] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 13:12:51.806671 containerd[1582]: 2025-07-09 13:12:51.686 [INFO][4380] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d232387ae88b867c50d5567e2e2bb1f5c3450d81548adf083bc16707dc0db5aa" host="localhost" Jul 9 13:12:51.806671 containerd[1582]: 2025-07-09 13:12:51.747 [INFO][4380] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 13:12:51.806671 containerd[1582]: 2025-07-09 13:12:51.753 [INFO][4380] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 13:12:51.806671 containerd[1582]: 2025-07-09 13:12:51.758 [INFO][4380] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 13:12:51.806671 containerd[1582]: 2025-07-09 13:12:51.763 [INFO][4380] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 9 13:12:51.806671 containerd[1582]: 2025-07-09 13:12:51.763 [INFO][4380] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d232387ae88b867c50d5567e2e2bb1f5c3450d81548adf083bc16707dc0db5aa" host="localhost" Jul 9 13:12:51.806671 containerd[1582]: 2025-07-09 13:12:51.766 [INFO][4380] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d232387ae88b867c50d5567e2e2bb1f5c3450d81548adf083bc16707dc0db5aa Jul 9 13:12:51.806671 containerd[1582]: 2025-07-09 13:12:51.773 [INFO][4380] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d232387ae88b867c50d5567e2e2bb1f5c3450d81548adf083bc16707dc0db5aa" host="localhost" Jul 9 13:12:51.806671 containerd[1582]: 2025-07-09 13:12:51.779 [INFO][4380] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.d232387ae88b867c50d5567e2e2bb1f5c3450d81548adf083bc16707dc0db5aa" host="localhost" Jul 9 13:12:51.806671 containerd[1582]: 2025-07-09 13:12:51.780 [INFO][4380] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d232387ae88b867c50d5567e2e2bb1f5c3450d81548adf083bc16707dc0db5aa" host="localhost" Jul 9 13:12:51.806671 containerd[1582]: 2025-07-09 13:12:51.780 [INFO][4380] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 13:12:51.806671 containerd[1582]: 2025-07-09 13:12:51.780 [INFO][4380] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d232387ae88b867c50d5567e2e2bb1f5c3450d81548adf083bc16707dc0db5aa" HandleID="k8s-pod-network.d232387ae88b867c50d5567e2e2bb1f5c3450d81548adf083bc16707dc0db5aa" Workload="localhost-k8s-whisker--5c56489cbb--shmtp-eth0" Jul 9 13:12:51.808287 containerd[1582]: 2025-07-09 13:12:51.786 [INFO][4325] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d232387ae88b867c50d5567e2e2bb1f5c3450d81548adf083bc16707dc0db5aa" Namespace="calico-system" Pod="whisker-5c56489cbb-shmtp" WorkloadEndpoint="localhost-k8s-whisker--5c56489cbb--shmtp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5c56489cbb--shmtp-eth0", GenerateName:"whisker-5c56489cbb-", Namespace:"calico-system", SelfLink:"", UID:"91217b80-d474-4ca4-a6c5-72a163449645", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 13, 12, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c56489cbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5c56489cbb-shmtp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali28b63fef2bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 13:12:51.808287 containerd[1582]: 2025-07-09 13:12:51.787 [INFO][4325] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d232387ae88b867c50d5567e2e2bb1f5c3450d81548adf083bc16707dc0db5aa" Namespace="calico-system" Pod="whisker-5c56489cbb-shmtp" WorkloadEndpoint="localhost-k8s-whisker--5c56489cbb--shmtp-eth0" Jul 9 13:12:51.808287 containerd[1582]: 2025-07-09 13:12:51.787 [INFO][4325] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28b63fef2bb ContainerID="d232387ae88b867c50d5567e2e2bb1f5c3450d81548adf083bc16707dc0db5aa" Namespace="calico-system" Pod="whisker-5c56489cbb-shmtp" WorkloadEndpoint="localhost-k8s-whisker--5c56489cbb--shmtp-eth0" Jul 9 13:12:51.808287 containerd[1582]: 2025-07-09 13:12:51.789 [INFO][4325] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d232387ae88b867c50d5567e2e2bb1f5c3450d81548adf083bc16707dc0db5aa" Namespace="calico-system" Pod="whisker-5c56489cbb-shmtp" WorkloadEndpoint="localhost-k8s-whisker--5c56489cbb--shmtp-eth0" Jul 9 13:12:51.808287 containerd[1582]: 2025-07-09 13:12:51.789 [INFO][4325] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d232387ae88b867c50d5567e2e2bb1f5c3450d81548adf083bc16707dc0db5aa" Namespace="calico-system" Pod="whisker-5c56489cbb-shmtp" 
WorkloadEndpoint="localhost-k8s-whisker--5c56489cbb--shmtp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5c56489cbb--shmtp-eth0", GenerateName:"whisker-5c56489cbb-", Namespace:"calico-system", SelfLink:"", UID:"91217b80-d474-4ca4-a6c5-72a163449645", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 13, 12, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c56489cbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d232387ae88b867c50d5567e2e2bb1f5c3450d81548adf083bc16707dc0db5aa", Pod:"whisker-5c56489cbb-shmtp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali28b63fef2bb", MAC:"a6:e0:4c:74:76:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 13:12:51.808287 containerd[1582]: 2025-07-09 13:12:51.802 [INFO][4325] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d232387ae88b867c50d5567e2e2bb1f5c3450d81548adf083bc16707dc0db5aa" Namespace="calico-system" Pod="whisker-5c56489cbb-shmtp" WorkloadEndpoint="localhost-k8s-whisker--5c56489cbb--shmtp-eth0" Jul 9 13:12:51.812512 containerd[1582]: time="2025-07-09T13:12:51.812466700Z" level=info msg="StartContainer for 
\"8e83892f314fa19bcd2ccfaee93a9ae6a8c207cdd8718b81eb2ddd8f38c80ac7\" returns successfully" Jul 9 13:12:51.821628 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 13:12:51.837527 containerd[1582]: time="2025-07-09T13:12:51.837452970Z" level=info msg="connecting to shim d232387ae88b867c50d5567e2e2bb1f5c3450d81548adf083bc16707dc0db5aa" address="unix:///run/containerd/s/5375a03ab473c0f76bd6f5518e55e44fe2736dfdaf79e4792f8445490d82dcbf" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:12:51.852282 containerd[1582]: time="2025-07-09T13:12:51.852197380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hqgmx,Uid:2d4b2d60-521c-4619-9873-7765068c2eae,Namespace:calico-system,Attempt:0,} returns sandbox id \"56ff009f7ec6c05ed67f25b61dbe659c5f610c7def8a7734a67a1d8345484635\"" Jul 9 13:12:51.858354 containerd[1582]: time="2025-07-09T13:12:51.857530038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 9 13:12:51.871523 systemd[1]: Started cri-containerd-d232387ae88b867c50d5567e2e2bb1f5c3450d81548adf083bc16707dc0db5aa.scope - libcontainer container d232387ae88b867c50d5567e2e2bb1f5c3450d81548adf083bc16707dc0db5aa. 
Jul 9 13:12:51.889742 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 13:12:51.928717 containerd[1582]: time="2025-07-09T13:12:51.928669501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c56489cbb-shmtp,Uid:91217b80-d474-4ca4-a6c5-72a163449645,Namespace:calico-system,Attempt:0,} returns sandbox id \"d232387ae88b867c50d5567e2e2bb1f5c3450d81548adf083bc16707dc0db5aa\"" Jul 9 13:12:51.955561 systemd-networkd[1499]: vxlan.calico: Link UP Jul 9 13:12:51.955570 systemd-networkd[1499]: vxlan.calico: Gained carrier Jul 9 13:12:52.269013 kubelet[2740]: E0709 13:12:52.268903 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:52.269519 kubelet[2740]: E0709 13:12:52.269491 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:52.291655 kubelet[2740]: I0709 13:12:52.291594 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-9pcn2" podStartSLOduration=46.291573841 podStartE2EDuration="46.291573841s" podCreationTimestamp="2025-07-09 13:12:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 13:12:52.278937419 +0000 UTC m=+53.319984762" watchObservedRunningTime="2025-07-09 13:12:52.291573841 +0000 UTC m=+53.332621185" Jul 9 13:12:52.658448 systemd-networkd[1499]: cali589537e2cca: Gained IPv6LL Jul 9 13:12:53.058484 containerd[1582]: time="2025-07-09T13:12:53.058099861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54d8dc767f-ngpsf,Uid:7421a15b-bca1-4ab8-80b3-1f8653a0c6e0,Namespace:calico-apiserver,Attempt:0,}" Jul 9 13:12:53.058484 
containerd[1582]: time="2025-07-09T13:12:53.058193687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54d8dc767f-nlltf,Uid:bb191e2f-c363-4484-90a4-9625a2d502f6,Namespace:calico-apiserver,Attempt:0,}" Jul 9 13:12:53.059007 containerd[1582]: time="2025-07-09T13:12:53.058099751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76484857f5-49hts,Uid:49893d27-f7b5-4c17-84a6-fe21e163be5c,Namespace:calico-system,Attempt:0,}" Jul 9 13:12:53.210747 systemd-networkd[1499]: calibbfb9d0cf56: Link UP Jul 9 13:12:53.213432 systemd-networkd[1499]: calibbfb9d0cf56: Gained carrier Jul 9 13:12:53.267158 containerd[1582]: 2025-07-09 13:12:53.105 [INFO][4669] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54d8dc767f--nlltf-eth0 calico-apiserver-54d8dc767f- calico-apiserver bb191e2f-c363-4484-90a4-9625a2d502f6 858 0 2025-07-09 13:12:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54d8dc767f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54d8dc767f-nlltf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibbfb9d0cf56 [] [] }} ContainerID="41e03eda8f524e8f024a9109f5ec24950d6d5fc7c946e4169167f64306c2996c" Namespace="calico-apiserver" Pod="calico-apiserver-54d8dc767f-nlltf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d8dc767f--nlltf-" Jul 9 13:12:53.267158 containerd[1582]: 2025-07-09 13:12:53.106 [INFO][4669] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="41e03eda8f524e8f024a9109f5ec24950d6d5fc7c946e4169167f64306c2996c" Namespace="calico-apiserver" Pod="calico-apiserver-54d8dc767f-nlltf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d8dc767f--nlltf-eth0" Jul 9 
13:12:53.267158 containerd[1582]: 2025-07-09 13:12:53.158 [INFO][4712] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41e03eda8f524e8f024a9109f5ec24950d6d5fc7c946e4169167f64306c2996c" HandleID="k8s-pod-network.41e03eda8f524e8f024a9109f5ec24950d6d5fc7c946e4169167f64306c2996c" Workload="localhost-k8s-calico--apiserver--54d8dc767f--nlltf-eth0" Jul 9 13:12:53.267158 containerd[1582]: 2025-07-09 13:12:53.159 [INFO][4712] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="41e03eda8f524e8f024a9109f5ec24950d6d5fc7c946e4169167f64306c2996c" HandleID="k8s-pod-network.41e03eda8f524e8f024a9109f5ec24950d6d5fc7c946e4169167f64306c2996c" Workload="localhost-k8s-calico--apiserver--54d8dc767f--nlltf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000506860), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54d8dc767f-nlltf", "timestamp":"2025-07-09 13:12:53.158750135 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 13:12:53.267158 containerd[1582]: 2025-07-09 13:12:53.159 [INFO][4712] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 13:12:53.267158 containerd[1582]: 2025-07-09 13:12:53.159 [INFO][4712] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 13:12:53.267158 containerd[1582]: 2025-07-09 13:12:53.159 [INFO][4712] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 13:12:53.267158 containerd[1582]: 2025-07-09 13:12:53.166 [INFO][4712] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.41e03eda8f524e8f024a9109f5ec24950d6d5fc7c946e4169167f64306c2996c" host="localhost" Jul 9 13:12:53.267158 containerd[1582]: 2025-07-09 13:12:53.172 [INFO][4712] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 13:12:53.267158 containerd[1582]: 2025-07-09 13:12:53.178 [INFO][4712] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 13:12:53.267158 containerd[1582]: 2025-07-09 13:12:53.182 [INFO][4712] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 13:12:53.267158 containerd[1582]: 2025-07-09 13:12:53.185 [INFO][4712] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 9 13:12:53.267158 containerd[1582]: 2025-07-09 13:12:53.185 [INFO][4712] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.41e03eda8f524e8f024a9109f5ec24950d6d5fc7c946e4169167f64306c2996c" host="localhost" Jul 9 13:12:53.267158 containerd[1582]: 2025-07-09 13:12:53.188 [INFO][4712] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.41e03eda8f524e8f024a9109f5ec24950d6d5fc7c946e4169167f64306c2996c Jul 9 13:12:53.267158 containerd[1582]: 2025-07-09 13:12:53.194 [INFO][4712] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.41e03eda8f524e8f024a9109f5ec24950d6d5fc7c946e4169167f64306c2996c" host="localhost" Jul 9 13:12:53.267158 containerd[1582]: 2025-07-09 13:12:53.202 [INFO][4712] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.41e03eda8f524e8f024a9109f5ec24950d6d5fc7c946e4169167f64306c2996c" host="localhost" Jul 9 13:12:53.267158 containerd[1582]: 2025-07-09 13:12:53.202 [INFO][4712] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.41e03eda8f524e8f024a9109f5ec24950d6d5fc7c946e4169167f64306c2996c" host="localhost" Jul 9 13:12:53.267158 containerd[1582]: 2025-07-09 13:12:53.202 [INFO][4712] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 13:12:53.267158 containerd[1582]: 2025-07-09 13:12:53.202 [INFO][4712] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="41e03eda8f524e8f024a9109f5ec24950d6d5fc7c946e4169167f64306c2996c" HandleID="k8s-pod-network.41e03eda8f524e8f024a9109f5ec24950d6d5fc7c946e4169167f64306c2996c" Workload="localhost-k8s-calico--apiserver--54d8dc767f--nlltf-eth0" Jul 9 13:12:53.268007 containerd[1582]: 2025-07-09 13:12:53.206 [INFO][4669] cni-plugin/k8s.go 418: Populated endpoint ContainerID="41e03eda8f524e8f024a9109f5ec24950d6d5fc7c946e4169167f64306c2996c" Namespace="calico-apiserver" Pod="calico-apiserver-54d8dc767f-nlltf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d8dc767f--nlltf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54d8dc767f--nlltf-eth0", GenerateName:"calico-apiserver-54d8dc767f-", Namespace:"calico-apiserver", SelfLink:"", UID:"bb191e2f-c363-4484-90a4-9625a2d502f6", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 13, 12, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54d8dc767f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54d8dc767f-nlltf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibbfb9d0cf56", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 13:12:53.268007 containerd[1582]: 2025-07-09 13:12:53.206 [INFO][4669] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="41e03eda8f524e8f024a9109f5ec24950d6d5fc7c946e4169167f64306c2996c" Namespace="calico-apiserver" Pod="calico-apiserver-54d8dc767f-nlltf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d8dc767f--nlltf-eth0" Jul 9 13:12:53.268007 containerd[1582]: 2025-07-09 13:12:53.206 [INFO][4669] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibbfb9d0cf56 ContainerID="41e03eda8f524e8f024a9109f5ec24950d6d5fc7c946e4169167f64306c2996c" Namespace="calico-apiserver" Pod="calico-apiserver-54d8dc767f-nlltf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d8dc767f--nlltf-eth0" Jul 9 13:12:53.268007 containerd[1582]: 2025-07-09 13:12:53.215 [INFO][4669] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41e03eda8f524e8f024a9109f5ec24950d6d5fc7c946e4169167f64306c2996c" Namespace="calico-apiserver" Pod="calico-apiserver-54d8dc767f-nlltf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d8dc767f--nlltf-eth0" Jul 9 13:12:53.268007 containerd[1582]: 2025-07-09 13:12:53.217 [INFO][4669] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="41e03eda8f524e8f024a9109f5ec24950d6d5fc7c946e4169167f64306c2996c" Namespace="calico-apiserver" Pod="calico-apiserver-54d8dc767f-nlltf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d8dc767f--nlltf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54d8dc767f--nlltf-eth0", GenerateName:"calico-apiserver-54d8dc767f-", Namespace:"calico-apiserver", SelfLink:"", UID:"bb191e2f-c363-4484-90a4-9625a2d502f6", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 13, 12, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54d8dc767f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"41e03eda8f524e8f024a9109f5ec24950d6d5fc7c946e4169167f64306c2996c", Pod:"calico-apiserver-54d8dc767f-nlltf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibbfb9d0cf56", MAC:"72:9f:d8:eb:63:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 13:12:53.268007 containerd[1582]: 2025-07-09 13:12:53.240 [INFO][4669] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="41e03eda8f524e8f024a9109f5ec24950d6d5fc7c946e4169167f64306c2996c" Namespace="calico-apiserver" Pod="calico-apiserver-54d8dc767f-nlltf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d8dc767f--nlltf-eth0" Jul 9 13:12:53.284717 kubelet[2740]: E0709 13:12:53.284642 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:53.286318 kubelet[2740]: E0709 13:12:53.284726 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:53.328787 systemd-networkd[1499]: cali1cd04414c21: Link UP Jul 9 13:12:53.329131 systemd-networkd[1499]: cali1cd04414c21: Gained carrier Jul 9 13:12:53.333041 containerd[1582]: time="2025-07-09T13:12:53.332979126Z" level=info msg="connecting to shim 41e03eda8f524e8f024a9109f5ec24950d6d5fc7c946e4169167f64306c2996c" address="unix:///run/containerd/s/79d4539e74f233bca85fe4dce86e12356cdf0b847f704cffc6e5163606956e45" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:12:53.353907 containerd[1582]: 2025-07-09 13:12:53.127 [INFO][4693] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--76484857f5--49hts-eth0 calico-kube-controllers-76484857f5- calico-system 49893d27-f7b5-4c17-84a6-fe21e163be5c 847 0 2025-07-09 13:12:24 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76484857f5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-76484857f5-49hts eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1cd04414c21 [] [] }} 
ContainerID="2cda6fce1908d06f36cf69d50d92d62e795be2807aee6340630619d7e6d1baf4" Namespace="calico-system" Pod="calico-kube-controllers-76484857f5-49hts" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76484857f5--49hts-" Jul 9 13:12:53.353907 containerd[1582]: 2025-07-09 13:12:53.127 [INFO][4693] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2cda6fce1908d06f36cf69d50d92d62e795be2807aee6340630619d7e6d1baf4" Namespace="calico-system" Pod="calico-kube-controllers-76484857f5-49hts" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76484857f5--49hts-eth0" Jul 9 13:12:53.353907 containerd[1582]: 2025-07-09 13:12:53.177 [INFO][4721] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2cda6fce1908d06f36cf69d50d92d62e795be2807aee6340630619d7e6d1baf4" HandleID="k8s-pod-network.2cda6fce1908d06f36cf69d50d92d62e795be2807aee6340630619d7e6d1baf4" Workload="localhost-k8s-calico--kube--controllers--76484857f5--49hts-eth0" Jul 9 13:12:53.353907 containerd[1582]: 2025-07-09 13:12:53.177 [INFO][4721] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2cda6fce1908d06f36cf69d50d92d62e795be2807aee6340630619d7e6d1baf4" HandleID="k8s-pod-network.2cda6fce1908d06f36cf69d50d92d62e795be2807aee6340630619d7e6d1baf4" Workload="localhost-k8s-calico--kube--controllers--76484857f5--49hts-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a5840), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-76484857f5-49hts", "timestamp":"2025-07-09 13:12:53.176977418 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 13:12:53.353907 containerd[1582]: 2025-07-09 13:12:53.177 [INFO][4721] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 9 13:12:53.353907 containerd[1582]: 2025-07-09 13:12:53.202 [INFO][4721] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 13:12:53.353907 containerd[1582]: 2025-07-09 13:12:53.202 [INFO][4721] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 13:12:53.353907 containerd[1582]: 2025-07-09 13:12:53.271 [INFO][4721] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2cda6fce1908d06f36cf69d50d92d62e795be2807aee6340630619d7e6d1baf4" host="localhost" Jul 9 13:12:53.353907 containerd[1582]: 2025-07-09 13:12:53.285 [INFO][4721] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 13:12:53.353907 containerd[1582]: 2025-07-09 13:12:53.297 [INFO][4721] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 13:12:53.353907 containerd[1582]: 2025-07-09 13:12:53.300 [INFO][4721] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 13:12:53.353907 containerd[1582]: 2025-07-09 13:12:53.302 [INFO][4721] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 9 13:12:53.353907 containerd[1582]: 2025-07-09 13:12:53.302 [INFO][4721] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2cda6fce1908d06f36cf69d50d92d62e795be2807aee6340630619d7e6d1baf4" host="localhost" Jul 9 13:12:53.353907 containerd[1582]: 2025-07-09 13:12:53.304 [INFO][4721] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2cda6fce1908d06f36cf69d50d92d62e795be2807aee6340630619d7e6d1baf4 Jul 9 13:12:53.353907 containerd[1582]: 2025-07-09 13:12:53.309 [INFO][4721] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2cda6fce1908d06f36cf69d50d92d62e795be2807aee6340630619d7e6d1baf4" host="localhost" Jul 9 13:12:53.353907 containerd[1582]: 2025-07-09 13:12:53.314 [INFO][4721] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.2cda6fce1908d06f36cf69d50d92d62e795be2807aee6340630619d7e6d1baf4" host="localhost" Jul 9 13:12:53.353907 containerd[1582]: 2025-07-09 13:12:53.314 [INFO][4721] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.2cda6fce1908d06f36cf69d50d92d62e795be2807aee6340630619d7e6d1baf4" host="localhost" Jul 9 13:12:53.353907 containerd[1582]: 2025-07-09 13:12:53.314 [INFO][4721] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 13:12:53.353907 containerd[1582]: 2025-07-09 13:12:53.314 [INFO][4721] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="2cda6fce1908d06f36cf69d50d92d62e795be2807aee6340630619d7e6d1baf4" HandleID="k8s-pod-network.2cda6fce1908d06f36cf69d50d92d62e795be2807aee6340630619d7e6d1baf4" Workload="localhost-k8s-calico--kube--controllers--76484857f5--49hts-eth0" Jul 9 13:12:53.354905 containerd[1582]: 2025-07-09 13:12:53.324 [INFO][4693] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2cda6fce1908d06f36cf69d50d92d62e795be2807aee6340630619d7e6d1baf4" Namespace="calico-system" Pod="calico-kube-controllers-76484857f5-49hts" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76484857f5--49hts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--76484857f5--49hts-eth0", GenerateName:"calico-kube-controllers-76484857f5-", Namespace:"calico-system", SelfLink:"", UID:"49893d27-f7b5-4c17-84a6-fe21e163be5c", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 13, 12, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"76484857f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-76484857f5-49hts", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1cd04414c21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 13:12:53.354905 containerd[1582]: 2025-07-09 13:12:53.325 [INFO][4693] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="2cda6fce1908d06f36cf69d50d92d62e795be2807aee6340630619d7e6d1baf4" Namespace="calico-system" Pod="calico-kube-controllers-76484857f5-49hts" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76484857f5--49hts-eth0" Jul 9 13:12:53.354905 containerd[1582]: 2025-07-09 13:12:53.325 [INFO][4693] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1cd04414c21 ContainerID="2cda6fce1908d06f36cf69d50d92d62e795be2807aee6340630619d7e6d1baf4" Namespace="calico-system" Pod="calico-kube-controllers-76484857f5-49hts" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76484857f5--49hts-eth0" Jul 9 13:12:53.354905 containerd[1582]: 2025-07-09 13:12:53.327 [INFO][4693] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2cda6fce1908d06f36cf69d50d92d62e795be2807aee6340630619d7e6d1baf4" Namespace="calico-system" Pod="calico-kube-controllers-76484857f5-49hts" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76484857f5--49hts-eth0" Jul 9 13:12:53.354905 containerd[1582]: 2025-07-09 13:12:53.327 [INFO][4693] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2cda6fce1908d06f36cf69d50d92d62e795be2807aee6340630619d7e6d1baf4" Namespace="calico-system" Pod="calico-kube-controllers-76484857f5-49hts" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76484857f5--49hts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--76484857f5--49hts-eth0", GenerateName:"calico-kube-controllers-76484857f5-", Namespace:"calico-system", SelfLink:"", UID:"49893d27-f7b5-4c17-84a6-fe21e163be5c", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 13, 12, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76484857f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2cda6fce1908d06f36cf69d50d92d62e795be2807aee6340630619d7e6d1baf4", Pod:"calico-kube-controllers-76484857f5-49hts", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1cd04414c21", MAC:"da:42:b1:c9:88:1f", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 13:12:53.354905 containerd[1582]: 2025-07-09 13:12:53.347 [INFO][4693] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2cda6fce1908d06f36cf69d50d92d62e795be2807aee6340630619d7e6d1baf4" Namespace="calico-system" Pod="calico-kube-controllers-76484857f5-49hts" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76484857f5--49hts-eth0" Jul 9 13:12:53.371429 systemd[1]: Started cri-containerd-41e03eda8f524e8f024a9109f5ec24950d6d5fc7c946e4169167f64306c2996c.scope - libcontainer container 41e03eda8f524e8f024a9109f5ec24950d6d5fc7c946e4169167f64306c2996c. Jul 9 13:12:53.395514 containerd[1582]: time="2025-07-09T13:12:53.394806411Z" level=info msg="connecting to shim 2cda6fce1908d06f36cf69d50d92d62e795be2807aee6340630619d7e6d1baf4" address="unix:///run/containerd/s/8ea2284559583ff7e0ae32ccef594809fa77643552f151122a37b010e5ece12a" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:12:53.420984 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 13:12:53.421697 systemd-networkd[1499]: calidd46ed02f2d: Link UP Jul 9 13:12:53.423396 systemd-networkd[1499]: calidd46ed02f2d: Gained carrier Jul 9 13:12:53.427368 systemd-networkd[1499]: cali28b63fef2bb: Gained IPv6LL Jul 9 13:12:53.433674 systemd[1]: Started cri-containerd-2cda6fce1908d06f36cf69d50d92d62e795be2807aee6340630619d7e6d1baf4.scope - libcontainer container 2cda6fce1908d06f36cf69d50d92d62e795be2807aee6340630619d7e6d1baf4. 
Jul 9 13:12:53.451115 containerd[1582]: 2025-07-09 13:12:53.143 [INFO][4681] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54d8dc767f--ngpsf-eth0 calico-apiserver-54d8dc767f- calico-apiserver 7421a15b-bca1-4ab8-80b3-1f8653a0c6e0 856 0 2025-07-09 13:12:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54d8dc767f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54d8dc767f-ngpsf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidd46ed02f2d [] [] }} ContainerID="188a3a4d22e79f9bfa18efa9319f95be2fb401de5b37110a16fac0d6cafa4cb0" Namespace="calico-apiserver" Pod="calico-apiserver-54d8dc767f-ngpsf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d8dc767f--ngpsf-" Jul 9 13:12:53.451115 containerd[1582]: 2025-07-09 13:12:53.144 [INFO][4681] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="188a3a4d22e79f9bfa18efa9319f95be2fb401de5b37110a16fac0d6cafa4cb0" Namespace="calico-apiserver" Pod="calico-apiserver-54d8dc767f-ngpsf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d8dc767f--ngpsf-eth0" Jul 9 13:12:53.451115 containerd[1582]: 2025-07-09 13:12:53.194 [INFO][4731] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="188a3a4d22e79f9bfa18efa9319f95be2fb401de5b37110a16fac0d6cafa4cb0" HandleID="k8s-pod-network.188a3a4d22e79f9bfa18efa9319f95be2fb401de5b37110a16fac0d6cafa4cb0" Workload="localhost-k8s-calico--apiserver--54d8dc767f--ngpsf-eth0" Jul 9 13:12:53.451115 containerd[1582]: 2025-07-09 13:12:53.195 [INFO][4731] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="188a3a4d22e79f9bfa18efa9319f95be2fb401de5b37110a16fac0d6cafa4cb0" 
HandleID="k8s-pod-network.188a3a4d22e79f9bfa18efa9319f95be2fb401de5b37110a16fac0d6cafa4cb0" Workload="localhost-k8s-calico--apiserver--54d8dc767f--ngpsf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00026d010), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54d8dc767f-ngpsf", "timestamp":"2025-07-09 13:12:53.194816031 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 13:12:53.451115 containerd[1582]: 2025-07-09 13:12:53.195 [INFO][4731] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 13:12:53.451115 containerd[1582]: 2025-07-09 13:12:53.314 [INFO][4731] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 13:12:53.451115 containerd[1582]: 2025-07-09 13:12:53.314 [INFO][4731] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 13:12:53.451115 containerd[1582]: 2025-07-09 13:12:53.372 [INFO][4731] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.188a3a4d22e79f9bfa18efa9319f95be2fb401de5b37110a16fac0d6cafa4cb0" host="localhost" Jul 9 13:12:53.451115 containerd[1582]: 2025-07-09 13:12:53.392 [INFO][4731] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 13:12:53.451115 containerd[1582]: 2025-07-09 13:12:53.399 [INFO][4731] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 13:12:53.451115 containerd[1582]: 2025-07-09 13:12:53.401 [INFO][4731] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 13:12:53.451115 containerd[1582]: 2025-07-09 13:12:53.403 [INFO][4731] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 9 13:12:53.451115 containerd[1582]: 2025-07-09 
13:12:53.403 [INFO][4731] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.188a3a4d22e79f9bfa18efa9319f95be2fb401de5b37110a16fac0d6cafa4cb0" host="localhost" Jul 9 13:12:53.451115 containerd[1582]: 2025-07-09 13:12:53.405 [INFO][4731] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.188a3a4d22e79f9bfa18efa9319f95be2fb401de5b37110a16fac0d6cafa4cb0 Jul 9 13:12:53.451115 containerd[1582]: 2025-07-09 13:12:53.409 [INFO][4731] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.188a3a4d22e79f9bfa18efa9319f95be2fb401de5b37110a16fac0d6cafa4cb0" host="localhost" Jul 9 13:12:53.451115 containerd[1582]: 2025-07-09 13:12:53.415 [INFO][4731] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.188a3a4d22e79f9bfa18efa9319f95be2fb401de5b37110a16fac0d6cafa4cb0" host="localhost" Jul 9 13:12:53.451115 containerd[1582]: 2025-07-09 13:12:53.415 [INFO][4731] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.188a3a4d22e79f9bfa18efa9319f95be2fb401de5b37110a16fac0d6cafa4cb0" host="localhost" Jul 9 13:12:53.451115 containerd[1582]: 2025-07-09 13:12:53.415 [INFO][4731] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 9 13:12:53.451115 containerd[1582]: 2025-07-09 13:12:53.415 [INFO][4731] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="188a3a4d22e79f9bfa18efa9319f95be2fb401de5b37110a16fac0d6cafa4cb0" HandleID="k8s-pod-network.188a3a4d22e79f9bfa18efa9319f95be2fb401de5b37110a16fac0d6cafa4cb0" Workload="localhost-k8s-calico--apiserver--54d8dc767f--ngpsf-eth0" Jul 9 13:12:53.453040 containerd[1582]: 2025-07-09 13:12:53.419 [INFO][4681] cni-plugin/k8s.go 418: Populated endpoint ContainerID="188a3a4d22e79f9bfa18efa9319f95be2fb401de5b37110a16fac0d6cafa4cb0" Namespace="calico-apiserver" Pod="calico-apiserver-54d8dc767f-ngpsf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d8dc767f--ngpsf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54d8dc767f--ngpsf-eth0", GenerateName:"calico-apiserver-54d8dc767f-", Namespace:"calico-apiserver", SelfLink:"", UID:"7421a15b-bca1-4ab8-80b3-1f8653a0c6e0", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 13, 12, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54d8dc767f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54d8dc767f-ngpsf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd46ed02f2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 13:12:53.453040 containerd[1582]: 2025-07-09 13:12:53.419 [INFO][4681] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="188a3a4d22e79f9bfa18efa9319f95be2fb401de5b37110a16fac0d6cafa4cb0" Namespace="calico-apiserver" Pod="calico-apiserver-54d8dc767f-ngpsf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d8dc767f--ngpsf-eth0" Jul 9 13:12:53.453040 containerd[1582]: 2025-07-09 13:12:53.419 [INFO][4681] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd46ed02f2d ContainerID="188a3a4d22e79f9bfa18efa9319f95be2fb401de5b37110a16fac0d6cafa4cb0" Namespace="calico-apiserver" Pod="calico-apiserver-54d8dc767f-ngpsf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d8dc767f--ngpsf-eth0" Jul 9 13:12:53.453040 containerd[1582]: 2025-07-09 13:12:53.424 [INFO][4681] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="188a3a4d22e79f9bfa18efa9319f95be2fb401de5b37110a16fac0d6cafa4cb0" Namespace="calico-apiserver" Pod="calico-apiserver-54d8dc767f-ngpsf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d8dc767f--ngpsf-eth0" Jul 9 13:12:53.453040 containerd[1582]: 2025-07-09 13:12:53.429 [INFO][4681] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="188a3a4d22e79f9bfa18efa9319f95be2fb401de5b37110a16fac0d6cafa4cb0" Namespace="calico-apiserver" Pod="calico-apiserver-54d8dc767f-ngpsf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d8dc767f--ngpsf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54d8dc767f--ngpsf-eth0", GenerateName:"calico-apiserver-54d8dc767f-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"7421a15b-bca1-4ab8-80b3-1f8653a0c6e0", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 13, 12, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54d8dc767f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"188a3a4d22e79f9bfa18efa9319f95be2fb401de5b37110a16fac0d6cafa4cb0", Pod:"calico-apiserver-54d8dc767f-ngpsf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd46ed02f2d", MAC:"6a:ca:00:f2:75:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 13:12:53.453040 containerd[1582]: 2025-07-09 13:12:53.446 [INFO][4681] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="188a3a4d22e79f9bfa18efa9319f95be2fb401de5b37110a16fac0d6cafa4cb0" Namespace="calico-apiserver" Pod="calico-apiserver-54d8dc767f-ngpsf" WorkloadEndpoint="localhost-k8s-calico--apiserver--54d8dc767f--ngpsf-eth0" Jul 9 13:12:53.466483 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 13:12:53.485735 containerd[1582]: time="2025-07-09T13:12:53.485597475Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-54d8dc767f-nlltf,Uid:bb191e2f-c363-4484-90a4-9625a2d502f6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"41e03eda8f524e8f024a9109f5ec24950d6d5fc7c946e4169167f64306c2996c\"" Jul 9 13:12:53.486982 containerd[1582]: time="2025-07-09T13:12:53.486869572Z" level=info msg="connecting to shim 188a3a4d22e79f9bfa18efa9319f95be2fb401de5b37110a16fac0d6cafa4cb0" address="unix:///run/containerd/s/a73a56b156cd55db2c2b4d3cf6871353b39f77bc442e51044016e30fbe5cd7b7" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:12:53.510166 containerd[1582]: time="2025-07-09T13:12:53.509970150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76484857f5-49hts,Uid:49893d27-f7b5-4c17-84a6-fe21e163be5c,Namespace:calico-system,Attempt:0,} returns sandbox id \"2cda6fce1908d06f36cf69d50d92d62e795be2807aee6340630619d7e6d1baf4\"" Jul 9 13:12:53.518431 systemd[1]: Started cri-containerd-188a3a4d22e79f9bfa18efa9319f95be2fb401de5b37110a16fac0d6cafa4cb0.scope - libcontainer container 188a3a4d22e79f9bfa18efa9319f95be2fb401de5b37110a16fac0d6cafa4cb0. 
Jul 9 13:12:53.534478 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 13:12:53.562515 containerd[1582]: time="2025-07-09T13:12:53.562472627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 9 13:12:53.562766 containerd[1582]: time="2025-07-09T13:12:53.562547869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:12:53.569979 containerd[1582]: time="2025-07-09T13:12:53.569935240Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:12:53.636971 containerd[1582]: time="2025-07-09T13:12:53.636815978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54d8dc767f-ngpsf,Uid:7421a15b-bca1-4ab8-80b3-1f8653a0c6e0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"188a3a4d22e79f9bfa18efa9319f95be2fb401de5b37110a16fac0d6cafa4cb0\"" Jul 9 13:12:53.638486 containerd[1582]: time="2025-07-09T13:12:53.638428844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:12:53.639028 containerd[1582]: time="2025-07-09T13:12:53.638993063Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.780672663s" Jul 9 13:12:53.639028 containerd[1582]: time="2025-07-09T13:12:53.639026827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" 
returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 9 13:12:53.641131 containerd[1582]: time="2025-07-09T13:12:53.641109384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 9 13:12:53.645648 containerd[1582]: time="2025-07-09T13:12:53.645622894Z" level=info msg="CreateContainer within sandbox \"56ff009f7ec6c05ed67f25b61dbe659c5f610c7def8a7734a67a1d8345484635\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 9 13:12:53.657340 containerd[1582]: time="2025-07-09T13:12:53.657294535Z" level=info msg="Container 65a7508ed7497725eebac6555dc9040dfe59768e935aacbeb9278a8724f9b768: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:12:53.665480 containerd[1582]: time="2025-07-09T13:12:53.665442504Z" level=info msg="CreateContainer within sandbox \"56ff009f7ec6c05ed67f25b61dbe659c5f610c7def8a7734a67a1d8345484635\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"65a7508ed7497725eebac6555dc9040dfe59768e935aacbeb9278a8724f9b768\"" Jul 9 13:12:53.666039 containerd[1582]: time="2025-07-09T13:12:53.665986786Z" level=info msg="StartContainer for \"65a7508ed7497725eebac6555dc9040dfe59768e935aacbeb9278a8724f9b768\"" Jul 9 13:12:53.667693 containerd[1582]: time="2025-07-09T13:12:53.667652962Z" level=info msg="connecting to shim 65a7508ed7497725eebac6555dc9040dfe59768e935aacbeb9278a8724f9b768" address="unix:///run/containerd/s/f4a05ff28a20d157d2d41b7d1875dc9562ec878da375731f897d035d3fdcc769" protocol=ttrpc version=3 Jul 9 13:12:53.682394 systemd-networkd[1499]: calicd89a46a224: Gained IPv6LL Jul 9 13:12:53.690385 systemd[1]: Started cri-containerd-65a7508ed7497725eebac6555dc9040dfe59768e935aacbeb9278a8724f9b768.scope - libcontainer container 65a7508ed7497725eebac6555dc9040dfe59768e935aacbeb9278a8724f9b768. 
Jul 9 13:12:53.735392 containerd[1582]: time="2025-07-09T13:12:53.735288075Z" level=info msg="StartContainer for \"65a7508ed7497725eebac6555dc9040dfe59768e935aacbeb9278a8724f9b768\" returns successfully" Jul 9 13:12:53.938486 systemd-networkd[1499]: vxlan.calico: Gained IPv6LL Jul 9 13:12:54.287901 kubelet[2740]: E0709 13:12:54.287776 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:12:54.514452 systemd-networkd[1499]: calibbfb9d0cf56: Gained IPv6LL Jul 9 13:12:55.026452 systemd-networkd[1499]: calidd46ed02f2d: Gained IPv6LL Jul 9 13:12:55.090512 systemd-networkd[1499]: cali1cd04414c21: Gained IPv6LL Jul 9 13:12:55.560813 containerd[1582]: time="2025-07-09T13:12:55.560759156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:12:55.561726 containerd[1582]: time="2025-07-09T13:12:55.561697668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 9 13:12:55.563112 containerd[1582]: time="2025-07-09T13:12:55.563058671Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:12:55.565374 containerd[1582]: time="2025-07-09T13:12:55.565322719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:12:55.566199 containerd[1582]: time="2025-07-09T13:12:55.566167815Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.924928206s" Jul 9 13:12:55.566199 containerd[1582]: time="2025-07-09T13:12:55.566197691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 9 13:12:55.567144 containerd[1582]: time="2025-07-09T13:12:55.567118078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 9 13:12:55.571657 containerd[1582]: time="2025-07-09T13:12:55.571501202Z" level=info msg="CreateContainer within sandbox \"d232387ae88b867c50d5567e2e2bb1f5c3450d81548adf083bc16707dc0db5aa\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 9 13:12:55.580483 containerd[1582]: time="2025-07-09T13:12:55.580435364Z" level=info msg="Container 08978aac92a9d5e462d64a2b33a6f3769a86e76c1f3b6f1ff0b0f452be6194f7: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:12:55.594346 containerd[1582]: time="2025-07-09T13:12:55.594314806Z" level=info msg="CreateContainer within sandbox \"d232387ae88b867c50d5567e2e2bb1f5c3450d81548adf083bc16707dc0db5aa\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"08978aac92a9d5e462d64a2b33a6f3769a86e76c1f3b6f1ff0b0f452be6194f7\"" Jul 9 13:12:55.594877 containerd[1582]: time="2025-07-09T13:12:55.594841114Z" level=info msg="StartContainer for \"08978aac92a9d5e462d64a2b33a6f3769a86e76c1f3b6f1ff0b0f452be6194f7\"" Jul 9 13:12:55.596130 containerd[1582]: time="2025-07-09T13:12:55.596106518Z" level=info msg="connecting to shim 08978aac92a9d5e462d64a2b33a6f3769a86e76c1f3b6f1ff0b0f452be6194f7" address="unix:///run/containerd/s/5375a03ab473c0f76bd6f5518e55e44fe2736dfdaf79e4792f8445490d82dcbf" protocol=ttrpc version=3 Jul 9 13:12:55.623515 systemd[1]: Started cri-containerd-08978aac92a9d5e462d64a2b33a6f3769a86e76c1f3b6f1ff0b0f452be6194f7.scope - libcontainer container 
08978aac92a9d5e462d64a2b33a6f3769a86e76c1f3b6f1ff0b0f452be6194f7. Jul 9 13:12:55.672321 containerd[1582]: time="2025-07-09T13:12:55.672275632Z" level=info msg="StartContainer for \"08978aac92a9d5e462d64a2b33a6f3769a86e76c1f3b6f1ff0b0f452be6194f7\" returns successfully" Jul 9 13:12:55.816939 systemd[1]: Started sshd@10-10.0.0.120:22-10.0.0.1:48690.service - OpenSSH per-connection server daemon (10.0.0.1:48690). Jul 9 13:12:55.880907 sshd[4985]: Accepted publickey for core from 10.0.0.1 port 48690 ssh2: RSA SHA256:HAtV86zLPP9RIVggm/FTobjBgXFwFP5d+SRakJDDvWM Jul 9 13:12:55.882969 sshd-session[4985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:12:55.887433 systemd-logind[1562]: New session 11 of user core. Jul 9 13:12:55.902455 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 9 13:12:56.027555 sshd[4988]: Connection closed by 10.0.0.1 port 48690 Jul 9 13:12:56.027948 sshd-session[4985]: pam_unix(sshd:session): session closed for user core Jul 9 13:12:56.041795 systemd[1]: sshd@10-10.0.0.120:22-10.0.0.1:48690.service: Deactivated successfully. Jul 9 13:12:56.044043 systemd[1]: session-11.scope: Deactivated successfully. Jul 9 13:12:56.044969 systemd-logind[1562]: Session 11 logged out. Waiting for processes to exit. Jul 9 13:12:56.046995 systemd-logind[1562]: Removed session 11. Jul 9 13:12:56.048936 systemd[1]: Started sshd@11-10.0.0.120:22-10.0.0.1:48696.service - OpenSSH per-connection server daemon (10.0.0.1:48696). Jul 9 13:12:56.098531 sshd[5002]: Accepted publickey for core from 10.0.0.1 port 48696 ssh2: RSA SHA256:HAtV86zLPP9RIVggm/FTobjBgXFwFP5d+SRakJDDvWM Jul 9 13:12:56.100369 sshd-session[5002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:12:56.104978 systemd-logind[1562]: New session 12 of user core. Jul 9 13:12:56.115388 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jul 9 13:12:56.280929 sshd[5005]: Connection closed by 10.0.0.1 port 48696 Jul 9 13:12:56.281445 sshd-session[5002]: pam_unix(sshd:session): session closed for user core Jul 9 13:12:56.297346 systemd[1]: sshd@11-10.0.0.120:22-10.0.0.1:48696.service: Deactivated successfully. Jul 9 13:12:56.299928 systemd[1]: session-12.scope: Deactivated successfully. Jul 9 13:12:56.300927 systemd-logind[1562]: Session 12 logged out. Waiting for processes to exit. Jul 9 13:12:56.304611 systemd[1]: Started sshd@12-10.0.0.120:22-10.0.0.1:48708.service - OpenSSH per-connection server daemon (10.0.0.1:48708). Jul 9 13:12:56.306320 systemd-logind[1562]: Removed session 12. Jul 9 13:12:56.365476 sshd[5017]: Accepted publickey for core from 10.0.0.1 port 48708 ssh2: RSA SHA256:HAtV86zLPP9RIVggm/FTobjBgXFwFP5d+SRakJDDvWM Jul 9 13:12:56.367593 sshd-session[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:12:56.372753 systemd-logind[1562]: New session 13 of user core. Jul 9 13:12:56.380402 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 9 13:12:56.683976 sshd[5020]: Connection closed by 10.0.0.1 port 48708 Jul 9 13:12:56.684361 sshd-session[5017]: pam_unix(sshd:session): session closed for user core Jul 9 13:12:56.689449 systemd[1]: sshd@12-10.0.0.120:22-10.0.0.1:48708.service: Deactivated successfully. Jul 9 13:12:56.692131 systemd[1]: session-13.scope: Deactivated successfully. Jul 9 13:12:56.693135 systemd-logind[1562]: Session 13 logged out. Waiting for processes to exit. Jul 9 13:12:56.694451 systemd-logind[1562]: Removed session 13. 
Jul 9 13:12:58.744750 containerd[1582]: time="2025-07-09T13:12:58.744695942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:12:58.745528 containerd[1582]: time="2025-07-09T13:12:58.745501654Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 9 13:12:58.746710 containerd[1582]: time="2025-07-09T13:12:58.746674354Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:12:58.748936 containerd[1582]: time="2025-07-09T13:12:58.748869742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:12:58.749512 containerd[1582]: time="2025-07-09T13:12:58.749482312Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.182201408s" Jul 9 13:12:58.749512 containerd[1582]: time="2025-07-09T13:12:58.749513460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 9 13:12:58.757997 containerd[1582]: time="2025-07-09T13:12:58.757887881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 9 13:12:58.784095 containerd[1582]: time="2025-07-09T13:12:58.783966155Z" level=info msg="CreateContainer within sandbox 
\"41e03eda8f524e8f024a9109f5ec24950d6d5fc7c946e4169167f64306c2996c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 9 13:12:58.795183 containerd[1582]: time="2025-07-09T13:12:58.795129608Z" level=info msg="Container a3de8fdf6c5f19d8fbed5851989fd4b917804456ed3bbd7a7fedd4ca2edb1ccc: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:12:58.806278 containerd[1582]: time="2025-07-09T13:12:58.806211929Z" level=info msg="CreateContainer within sandbox \"41e03eda8f524e8f024a9109f5ec24950d6d5fc7c946e4169167f64306c2996c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a3de8fdf6c5f19d8fbed5851989fd4b917804456ed3bbd7a7fedd4ca2edb1ccc\"" Jul 9 13:12:58.806949 containerd[1582]: time="2025-07-09T13:12:58.806897866Z" level=info msg="StartContainer for \"a3de8fdf6c5f19d8fbed5851989fd4b917804456ed3bbd7a7fedd4ca2edb1ccc\"" Jul 9 13:12:58.808085 containerd[1582]: time="2025-07-09T13:12:58.808042493Z" level=info msg="connecting to shim a3de8fdf6c5f19d8fbed5851989fd4b917804456ed3bbd7a7fedd4ca2edb1ccc" address="unix:///run/containerd/s/79d4539e74f233bca85fe4dce86e12356cdf0b847f704cffc6e5163606956e45" protocol=ttrpc version=3 Jul 9 13:12:58.838490 systemd[1]: Started cri-containerd-a3de8fdf6c5f19d8fbed5851989fd4b917804456ed3bbd7a7fedd4ca2edb1ccc.scope - libcontainer container a3de8fdf6c5f19d8fbed5851989fd4b917804456ed3bbd7a7fedd4ca2edb1ccc. 
Jul 9 13:12:58.894537 containerd[1582]: time="2025-07-09T13:12:58.894475275Z" level=info msg="StartContainer for \"a3de8fdf6c5f19d8fbed5851989fd4b917804456ed3bbd7a7fedd4ca2edb1ccc\" returns successfully" Jul 9 13:12:59.325533 kubelet[2740]: I0709 13:12:59.325465 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54d8dc767f-nlltf" podStartSLOduration=32.055241952 podStartE2EDuration="37.325448017s" podCreationTimestamp="2025-07-09 13:12:22 +0000 UTC" firstStartedPulling="2025-07-09 13:12:53.487517899 +0000 UTC m=+54.528565232" lastFinishedPulling="2025-07-09 13:12:58.757723954 +0000 UTC m=+59.798771297" observedRunningTime="2025-07-09 13:12:59.324072707 +0000 UTC m=+60.365120050" watchObservedRunningTime="2025-07-09 13:12:59.325448017 +0000 UTC m=+60.366495360" Jul 9 13:13:00.058689 containerd[1582]: time="2025-07-09T13:13:00.058605121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-6bz26,Uid:e1daf40f-15e0-47a5-80e3-a298a5e667e5,Namespace:calico-system,Attempt:0,}" Jul 9 13:13:00.155626 systemd-networkd[1499]: cali41acd5efd95: Link UP Jul 9 13:13:00.156207 systemd-networkd[1499]: cali41acd5efd95: Gained carrier Jul 9 13:13:00.172387 containerd[1582]: 2025-07-09 13:13:00.096 [INFO][5091] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--6bz26-eth0 goldmane-768f4c5c69- calico-system e1daf40f-15e0-47a5-80e3-a298a5e667e5 857 0 2025-07-09 13:12:24 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-6bz26 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali41acd5efd95 [] [] }} ContainerID="156d67891695bd34c204672c5f3126d93eaf2fa29f6b65221e6789a39d32a412" 
Namespace="calico-system" Pod="goldmane-768f4c5c69-6bz26" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--6bz26-" Jul 9 13:13:00.172387 containerd[1582]: 2025-07-09 13:13:00.096 [INFO][5091] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="156d67891695bd34c204672c5f3126d93eaf2fa29f6b65221e6789a39d32a412" Namespace="calico-system" Pod="goldmane-768f4c5c69-6bz26" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--6bz26-eth0" Jul 9 13:13:00.172387 containerd[1582]: 2025-07-09 13:13:00.119 [INFO][5106] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="156d67891695bd34c204672c5f3126d93eaf2fa29f6b65221e6789a39d32a412" HandleID="k8s-pod-network.156d67891695bd34c204672c5f3126d93eaf2fa29f6b65221e6789a39d32a412" Workload="localhost-k8s-goldmane--768f4c5c69--6bz26-eth0" Jul 9 13:13:00.172387 containerd[1582]: 2025-07-09 13:13:00.120 [INFO][5106] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="156d67891695bd34c204672c5f3126d93eaf2fa29f6b65221e6789a39d32a412" HandleID="k8s-pod-network.156d67891695bd34c204672c5f3126d93eaf2fa29f6b65221e6789a39d32a412" Workload="localhost-k8s-goldmane--768f4c5c69--6bz26-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c72f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-6bz26", "timestamp":"2025-07-09 13:13:00.119869658 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 13:13:00.172387 containerd[1582]: 2025-07-09 13:13:00.120 [INFO][5106] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 13:13:00.172387 containerd[1582]: 2025-07-09 13:13:00.120 [INFO][5106] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 13:13:00.172387 containerd[1582]: 2025-07-09 13:13:00.120 [INFO][5106] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 9 13:13:00.172387 containerd[1582]: 2025-07-09 13:13:00.126 [INFO][5106] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.156d67891695bd34c204672c5f3126d93eaf2fa29f6b65221e6789a39d32a412" host="localhost" Jul 9 13:13:00.172387 containerd[1582]: 2025-07-09 13:13:00.131 [INFO][5106] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 9 13:13:00.172387 containerd[1582]: 2025-07-09 13:13:00.136 [INFO][5106] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 9 13:13:00.172387 containerd[1582]: 2025-07-09 13:13:00.137 [INFO][5106] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 9 13:13:00.172387 containerd[1582]: 2025-07-09 13:13:00.139 [INFO][5106] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 9 13:13:00.172387 containerd[1582]: 2025-07-09 13:13:00.140 [INFO][5106] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.156d67891695bd34c204672c5f3126d93eaf2fa29f6b65221e6789a39d32a412" host="localhost" Jul 9 13:13:00.172387 containerd[1582]: 2025-07-09 13:13:00.141 [INFO][5106] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.156d67891695bd34c204672c5f3126d93eaf2fa29f6b65221e6789a39d32a412 Jul 9 13:13:00.172387 containerd[1582]: 2025-07-09 13:13:00.144 [INFO][5106] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.156d67891695bd34c204672c5f3126d93eaf2fa29f6b65221e6789a39d32a412" host="localhost" Jul 9 13:13:00.172387 containerd[1582]: 2025-07-09 13:13:00.150 [INFO][5106] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.156d67891695bd34c204672c5f3126d93eaf2fa29f6b65221e6789a39d32a412" host="localhost" Jul 9 13:13:00.172387 containerd[1582]: 2025-07-09 13:13:00.150 [INFO][5106] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.156d67891695bd34c204672c5f3126d93eaf2fa29f6b65221e6789a39d32a412" host="localhost" Jul 9 13:13:00.172387 containerd[1582]: 2025-07-09 13:13:00.150 [INFO][5106] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 13:13:00.172387 containerd[1582]: 2025-07-09 13:13:00.150 [INFO][5106] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="156d67891695bd34c204672c5f3126d93eaf2fa29f6b65221e6789a39d32a412" HandleID="k8s-pod-network.156d67891695bd34c204672c5f3126d93eaf2fa29f6b65221e6789a39d32a412" Workload="localhost-k8s-goldmane--768f4c5c69--6bz26-eth0" Jul 9 13:13:00.173357 containerd[1582]: 2025-07-09 13:13:00.153 [INFO][5091] cni-plugin/k8s.go 418: Populated endpoint ContainerID="156d67891695bd34c204672c5f3126d93eaf2fa29f6b65221e6789a39d32a412" Namespace="calico-system" Pod="goldmane-768f4c5c69-6bz26" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--6bz26-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--6bz26-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"e1daf40f-15e0-47a5-80e3-a298a5e667e5", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 13, 12, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-6bz26", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali41acd5efd95", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 13:13:00.173357 containerd[1582]: 2025-07-09 13:13:00.153 [INFO][5091] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="156d67891695bd34c204672c5f3126d93eaf2fa29f6b65221e6789a39d32a412" Namespace="calico-system" Pod="goldmane-768f4c5c69-6bz26" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--6bz26-eth0" Jul 9 13:13:00.173357 containerd[1582]: 2025-07-09 13:13:00.153 [INFO][5091] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali41acd5efd95 ContainerID="156d67891695bd34c204672c5f3126d93eaf2fa29f6b65221e6789a39d32a412" Namespace="calico-system" Pod="goldmane-768f4c5c69-6bz26" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--6bz26-eth0" Jul 9 13:13:00.173357 containerd[1582]: 2025-07-09 13:13:00.156 [INFO][5091] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="156d67891695bd34c204672c5f3126d93eaf2fa29f6b65221e6789a39d32a412" Namespace="calico-system" Pod="goldmane-768f4c5c69-6bz26" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--6bz26-eth0" Jul 9 13:13:00.173357 containerd[1582]: 2025-07-09 13:13:00.156 [INFO][5091] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="156d67891695bd34c204672c5f3126d93eaf2fa29f6b65221e6789a39d32a412" Namespace="calico-system" Pod="goldmane-768f4c5c69-6bz26" 
WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--6bz26-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--6bz26-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"e1daf40f-15e0-47a5-80e3-a298a5e667e5", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 13, 12, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"156d67891695bd34c204672c5f3126d93eaf2fa29f6b65221e6789a39d32a412", Pod:"goldmane-768f4c5c69-6bz26", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali41acd5efd95", MAC:"b2:dd:ce:94:f7:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 13:13:00.173357 containerd[1582]: 2025-07-09 13:13:00.165 [INFO][5091] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="156d67891695bd34c204672c5f3126d93eaf2fa29f6b65221e6789a39d32a412" Namespace="calico-system" Pod="goldmane-768f4c5c69-6bz26" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--6bz26-eth0" Jul 9 13:13:00.197571 containerd[1582]: time="2025-07-09T13:13:00.197414325Z" level=info msg="connecting to shim 
156d67891695bd34c204672c5f3126d93eaf2fa29f6b65221e6789a39d32a412" address="unix:///run/containerd/s/75737401c68a92a6288badb47353833fe82372a60b61de2e821f14aa962d291d" namespace=k8s.io protocol=ttrpc version=3 Jul 9 13:13:00.233411 systemd[1]: Started cri-containerd-156d67891695bd34c204672c5f3126d93eaf2fa29f6b65221e6789a39d32a412.scope - libcontainer container 156d67891695bd34c204672c5f3126d93eaf2fa29f6b65221e6789a39d32a412. Jul 9 13:13:00.247294 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 13:13:00.281307 containerd[1582]: time="2025-07-09T13:13:00.281260153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-6bz26,Uid:e1daf40f-15e0-47a5-80e3-a298a5e667e5,Namespace:calico-system,Attempt:0,} returns sandbox id \"156d67891695bd34c204672c5f3126d93eaf2fa29f6b65221e6789a39d32a412\"" Jul 9 13:13:00.310683 kubelet[2740]: I0709 13:13:00.310564 2740 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 13:13:01.426944 systemd-networkd[1499]: cali41acd5efd95: Gained IPv6LL Jul 9 13:13:01.701408 systemd[1]: Started sshd@13-10.0.0.120:22-10.0.0.1:55726.service - OpenSSH per-connection server daemon (10.0.0.1:55726). Jul 9 13:13:01.858440 sshd[5175]: Accepted publickey for core from 10.0.0.1 port 55726 ssh2: RSA SHA256:HAtV86zLPP9RIVggm/FTobjBgXFwFP5d+SRakJDDvWM Jul 9 13:13:01.860013 sshd-session[5175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:13:01.864222 systemd-logind[1562]: New session 14 of user core. Jul 9 13:13:01.876372 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jul 9 13:13:01.895657 containerd[1582]: time="2025-07-09T13:13:01.895598664Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:13:01.896632 containerd[1582]: time="2025-07-09T13:13:01.896606882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 9 13:13:01.897921 containerd[1582]: time="2025-07-09T13:13:01.897893435Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:13:01.903654 containerd[1582]: time="2025-07-09T13:13:01.903613787Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:13:01.904165 containerd[1582]: time="2025-07-09T13:13:01.904118507Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.146192464s" Jul 9 13:13:01.904165 containerd[1582]: time="2025-07-09T13:13:01.904159905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 9 13:13:01.905162 containerd[1582]: time="2025-07-09T13:13:01.905118069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 9 13:13:01.918087 containerd[1582]: time="2025-07-09T13:13:01.918033480Z" level=info msg="CreateContainer within sandbox 
\"2cda6fce1908d06f36cf69d50d92d62e795be2807aee6340630619d7e6d1baf4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 9 13:13:01.933493 containerd[1582]: time="2025-07-09T13:13:01.932620287Z" level=info msg="Container c2ec18cedbac631fa200db9637750537f2c1a7a66ecb0692955788e8a9559384: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:13:01.942423 containerd[1582]: time="2025-07-09T13:13:01.942226466Z" level=info msg="CreateContainer within sandbox \"2cda6fce1908d06f36cf69d50d92d62e795be2807aee6340630619d7e6d1baf4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c2ec18cedbac631fa200db9637750537f2c1a7a66ecb0692955788e8a9559384\"" Jul 9 13:13:01.944925 containerd[1582]: time="2025-07-09T13:13:01.942862794Z" level=info msg="StartContainer for \"c2ec18cedbac631fa200db9637750537f2c1a7a66ecb0692955788e8a9559384\"" Jul 9 13:13:01.945101 containerd[1582]: time="2025-07-09T13:13:01.945075200Z" level=info msg="connecting to shim c2ec18cedbac631fa200db9637750537f2c1a7a66ecb0692955788e8a9559384" address="unix:///run/containerd/s/8ea2284559583ff7e0ae32ccef594809fa77643552f151122a37b010e5ece12a" protocol=ttrpc version=3 Jul 9 13:13:02.000595 systemd[1]: Started cri-containerd-c2ec18cedbac631fa200db9637750537f2c1a7a66ecb0692955788e8a9559384.scope - libcontainer container c2ec18cedbac631fa200db9637750537f2c1a7a66ecb0692955788e8a9559384. Jul 9 13:13:02.033327 sshd[5178]: Connection closed by 10.0.0.1 port 55726 Jul 9 13:13:02.033171 sshd-session[5175]: pam_unix(sshd:session): session closed for user core Jul 9 13:13:02.036669 systemd[1]: sshd@13-10.0.0.120:22-10.0.0.1:55726.service: Deactivated successfully. Jul 9 13:13:02.039253 systemd[1]: session-14.scope: Deactivated successfully. Jul 9 13:13:02.041066 systemd-logind[1562]: Session 14 logged out. Waiting for processes to exit. Jul 9 13:13:02.044698 systemd-logind[1562]: Removed session 14. 
Jul 9 13:13:02.060248 containerd[1582]: time="2025-07-09T13:13:02.060198700Z" level=info msg="StartContainer for \"c2ec18cedbac631fa200db9637750537f2c1a7a66ecb0692955788e8a9559384\" returns successfully" Jul 9 13:13:02.328711 kubelet[2740]: I0709 13:13:02.328528 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-76484857f5-49hts" podStartSLOduration=29.936281995 podStartE2EDuration="38.328509073s" podCreationTimestamp="2025-07-09 13:12:24 +0000 UTC" firstStartedPulling="2025-07-09 13:12:53.512747672 +0000 UTC m=+54.553795015" lastFinishedPulling="2025-07-09 13:13:01.90497475 +0000 UTC m=+62.946022093" observedRunningTime="2025-07-09 13:13:02.328152311 +0000 UTC m=+63.369199654" watchObservedRunningTime="2025-07-09 13:13:02.328509073 +0000 UTC m=+63.369556416" Jul 9 13:13:02.374035 containerd[1582]: time="2025-07-09T13:13:02.373969206Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c2ec18cedbac631fa200db9637750537f2c1a7a66ecb0692955788e8a9559384\" id:\"34350c401722d298cd67260e3018aee7740437e6ef02884912221315633a9192\" pid:5255 exited_at:{seconds:1752066782 nanos:362789894}" Jul 9 13:13:02.469490 containerd[1582]: time="2025-07-09T13:13:02.469431330Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:13:02.470421 containerd[1582]: time="2025-07-09T13:13:02.470357918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 9 13:13:02.472029 containerd[1582]: time="2025-07-09T13:13:02.471983520Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 566.832358ms" Jul 9 13:13:02.472029 containerd[1582]: time="2025-07-09T13:13:02.472021223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 9 13:13:02.473186 containerd[1582]: time="2025-07-09T13:13:02.472952230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 9 13:13:02.477496 containerd[1582]: time="2025-07-09T13:13:02.477455434Z" level=info msg="CreateContainer within sandbox \"188a3a4d22e79f9bfa18efa9319f95be2fb401de5b37110a16fac0d6cafa4cb0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 9 13:13:02.485715 containerd[1582]: time="2025-07-09T13:13:02.485669906Z" level=info msg="Container 979d8ef90c56bc20ac349acf25f4e0d262a2652b93b1d71b70875fbe1d479a9f: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:13:02.495152 containerd[1582]: time="2025-07-09T13:13:02.495097832Z" level=info msg="CreateContainer within sandbox \"188a3a4d22e79f9bfa18efa9319f95be2fb401de5b37110a16fac0d6cafa4cb0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"979d8ef90c56bc20ac349acf25f4e0d262a2652b93b1d71b70875fbe1d479a9f\"" Jul 9 13:13:02.495705 containerd[1582]: time="2025-07-09T13:13:02.495648731Z" level=info msg="StartContainer for \"979d8ef90c56bc20ac349acf25f4e0d262a2652b93b1d71b70875fbe1d479a9f\"" Jul 9 13:13:02.496932 containerd[1582]: time="2025-07-09T13:13:02.496896260Z" level=info msg="connecting to shim 979d8ef90c56bc20ac349acf25f4e0d262a2652b93b1d71b70875fbe1d479a9f" address="unix:///run/containerd/s/a73a56b156cd55db2c2b4d3cf6871353b39f77bc442e51044016e30fbe5cd7b7" protocol=ttrpc version=3 Jul 9 13:13:02.522418 systemd[1]: Started cri-containerd-979d8ef90c56bc20ac349acf25f4e0d262a2652b93b1d71b70875fbe1d479a9f.scope - 
libcontainer container 979d8ef90c56bc20ac349acf25f4e0d262a2652b93b1d71b70875fbe1d479a9f. Jul 9 13:13:02.577940 containerd[1582]: time="2025-07-09T13:13:02.577885804Z" level=info msg="StartContainer for \"979d8ef90c56bc20ac349acf25f4e0d262a2652b93b1d71b70875fbe1d479a9f\" returns successfully" Jul 9 13:13:03.330882 kubelet[2740]: I0709 13:13:03.330791 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54d8dc767f-ngpsf" podStartSLOduration=32.497075242 podStartE2EDuration="41.330772068s" podCreationTimestamp="2025-07-09 13:12:22 +0000 UTC" firstStartedPulling="2025-07-09 13:12:53.639134479 +0000 UTC m=+54.680181822" lastFinishedPulling="2025-07-09 13:13:02.472831305 +0000 UTC m=+63.513878648" observedRunningTime="2025-07-09 13:13:03.32942681 +0000 UTC m=+64.370474153" watchObservedRunningTime="2025-07-09 13:13:03.330772068 +0000 UTC m=+64.371819411" Jul 9 13:13:05.735709 containerd[1582]: time="2025-07-09T13:13:05.735627121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:13:05.736454 containerd[1582]: time="2025-07-09T13:13:05.736406818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 9 13:13:05.737886 containerd[1582]: time="2025-07-09T13:13:05.737857125Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:13:05.740480 containerd[1582]: time="2025-07-09T13:13:05.740095164Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:13:05.740480 containerd[1582]: time="2025-07-09T13:13:05.740362651Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 3.267380894s" Jul 9 13:13:05.740480 containerd[1582]: time="2025-07-09T13:13:05.740390785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 9 13:13:05.741284 containerd[1582]: time="2025-07-09T13:13:05.741219367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 9 13:13:05.746140 containerd[1582]: time="2025-07-09T13:13:05.745634168Z" level=info msg="CreateContainer within sandbox \"56ff009f7ec6c05ed67f25b61dbe659c5f610c7def8a7734a67a1d8345484635\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 9 13:13:05.755370 containerd[1582]: time="2025-07-09T13:13:05.755321466Z" level=info msg="Container cb18ae25a165753ff59a4b1a08886d3d1170f6650d4b237ec4bbec8f0835d3b6: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:13:05.766193 containerd[1582]: time="2025-07-09T13:13:05.766129964Z" level=info msg="CreateContainer within sandbox \"56ff009f7ec6c05ed67f25b61dbe659c5f610c7def8a7734a67a1d8345484635\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"cb18ae25a165753ff59a4b1a08886d3d1170f6650d4b237ec4bbec8f0835d3b6\"" Jul 9 13:13:05.766799 containerd[1582]: time="2025-07-09T13:13:05.766748339Z" level=info msg="StartContainer for \"cb18ae25a165753ff59a4b1a08886d3d1170f6650d4b237ec4bbec8f0835d3b6\"" Jul 9 13:13:05.768706 containerd[1582]: time="2025-07-09T13:13:05.768671570Z" level=info msg="connecting to shim 
cb18ae25a165753ff59a4b1a08886d3d1170f6650d4b237ec4bbec8f0835d3b6" address="unix:///run/containerd/s/f4a05ff28a20d157d2d41b7d1875dc9562ec878da375731f897d035d3fdcc769" protocol=ttrpc version=3 Jul 9 13:13:05.792430 systemd[1]: Started cri-containerd-cb18ae25a165753ff59a4b1a08886d3d1170f6650d4b237ec4bbec8f0835d3b6.scope - libcontainer container cb18ae25a165753ff59a4b1a08886d3d1170f6650d4b237ec4bbec8f0835d3b6. Jul 9 13:13:05.842697 containerd[1582]: time="2025-07-09T13:13:05.842650501Z" level=info msg="StartContainer for \"cb18ae25a165753ff59a4b1a08886d3d1170f6650d4b237ec4bbec8f0835d3b6\" returns successfully" Jul 9 13:13:06.150955 kubelet[2740]: I0709 13:13:06.150854 2740 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 9 13:13:06.152309 kubelet[2740]: I0709 13:13:06.152286 2740 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 9 13:13:06.338024 kubelet[2740]: I0709 13:13:06.337963 2740 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 13:13:06.339993 kubelet[2740]: I0709 13:13:06.339921 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-hqgmx" podStartSLOduration=28.45498303 podStartE2EDuration="42.339908258s" podCreationTimestamp="2025-07-09 13:12:24 +0000 UTC" firstStartedPulling="2025-07-09 13:12:51.856095075 +0000 UTC m=+52.897142418" lastFinishedPulling="2025-07-09 13:13:05.741020313 +0000 UTC m=+66.782067646" observedRunningTime="2025-07-09 13:13:06.339709184 +0000 UTC m=+67.380756527" watchObservedRunningTime="2025-07-09 13:13:06.339908258 +0000 UTC m=+67.380955591" Jul 9 13:13:07.045743 systemd[1]: Started sshd@14-10.0.0.120:22-10.0.0.1:55736.service - OpenSSH per-connection server daemon (10.0.0.1:55736). 
Jul 9 13:13:07.148058 sshd[5348]: Accepted publickey for core from 10.0.0.1 port 55736 ssh2: RSA SHA256:HAtV86zLPP9RIVggm/FTobjBgXFwFP5d+SRakJDDvWM Jul 9 13:13:07.150435 sshd-session[5348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:13:07.158421 systemd-logind[1562]: New session 15 of user core. Jul 9 13:13:07.169510 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 9 13:13:07.408328 sshd[5352]: Connection closed by 10.0.0.1 port 55736 Jul 9 13:13:07.409398 sshd-session[5348]: pam_unix(sshd:session): session closed for user core Jul 9 13:13:07.413634 systemd[1]: sshd@14-10.0.0.120:22-10.0.0.1:55736.service: Deactivated successfully. Jul 9 13:13:07.416509 systemd[1]: session-15.scope: Deactivated successfully. Jul 9 13:13:07.417378 systemd-logind[1562]: Session 15 logged out. Waiting for processes to exit. Jul 9 13:13:07.419168 systemd-logind[1562]: Removed session 15. Jul 9 13:13:07.678551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4280823101.mount: Deactivated successfully. 
Jul 9 13:13:08.057610 kubelet[2740]: E0709 13:13:08.057556 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:13:08.189017 containerd[1582]: time="2025-07-09T13:13:08.188948301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:13:08.190079 containerd[1582]: time="2025-07-09T13:13:08.190042273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 9 13:13:08.191598 containerd[1582]: time="2025-07-09T13:13:08.191543850Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:13:08.193847 containerd[1582]: time="2025-07-09T13:13:08.193804192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:13:08.194633 containerd[1582]: time="2025-07-09T13:13:08.194581691Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.453301295s" Jul 9 13:13:08.194709 containerd[1582]: time="2025-07-09T13:13:08.194636738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 9 13:13:08.195771 containerd[1582]: 
time="2025-07-09T13:13:08.195749645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 9 13:13:08.200102 containerd[1582]: time="2025-07-09T13:13:08.200066064Z" level=info msg="CreateContainer within sandbox \"d232387ae88b867c50d5567e2e2bb1f5c3450d81548adf083bc16707dc0db5aa\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 9 13:13:08.208582 containerd[1582]: time="2025-07-09T13:13:08.208540777Z" level=info msg="Container 12d56b707454b2c24b1bba4c53fb4d5850f969444ad5c93aeb28529ca719755a: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:13:08.219570 containerd[1582]: time="2025-07-09T13:13:08.219511256Z" level=info msg="CreateContainer within sandbox \"d232387ae88b867c50d5567e2e2bb1f5c3450d81548adf083bc16707dc0db5aa\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"12d56b707454b2c24b1bba4c53fb4d5850f969444ad5c93aeb28529ca719755a\"" Jul 9 13:13:08.220387 containerd[1582]: time="2025-07-09T13:13:08.220350325Z" level=info msg="StartContainer for \"12d56b707454b2c24b1bba4c53fb4d5850f969444ad5c93aeb28529ca719755a\"" Jul 9 13:13:08.221709 containerd[1582]: time="2025-07-09T13:13:08.221685892Z" level=info msg="connecting to shim 12d56b707454b2c24b1bba4c53fb4d5850f969444ad5c93aeb28529ca719755a" address="unix:///run/containerd/s/5375a03ab473c0f76bd6f5518e55e44fe2736dfdaf79e4792f8445490d82dcbf" protocol=ttrpc version=3 Jul 9 13:13:08.245497 systemd[1]: Started cri-containerd-12d56b707454b2c24b1bba4c53fb4d5850f969444ad5c93aeb28529ca719755a.scope - libcontainer container 12d56b707454b2c24b1bba4c53fb4d5850f969444ad5c93aeb28529ca719755a. 
Jul 9 13:13:08.304276 containerd[1582]: time="2025-07-09T13:13:08.303464042Z" level=info msg="StartContainer for \"12d56b707454b2c24b1bba4c53fb4d5850f969444ad5c93aeb28529ca719755a\" returns successfully" Jul 9 13:13:08.352203 kubelet[2740]: I0709 13:13:08.352020 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5c56489cbb-shmtp" podStartSLOduration=2.086409054 podStartE2EDuration="18.352002441s" podCreationTimestamp="2025-07-09 13:12:50 +0000 UTC" firstStartedPulling="2025-07-09 13:12:51.930009375 +0000 UTC m=+52.971056718" lastFinishedPulling="2025-07-09 13:13:08.195602772 +0000 UTC m=+69.236650105" observedRunningTime="2025-07-09 13:13:08.351489251 +0000 UTC m=+69.392536584" watchObservedRunningTime="2025-07-09 13:13:08.352002441 +0000 UTC m=+69.393049774" Jul 9 13:13:09.058533 kubelet[2740]: E0709 13:13:09.058486 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:13:11.884863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2699713477.mount: Deactivated successfully. Jul 9 13:13:12.426453 systemd[1]: Started sshd@15-10.0.0.120:22-10.0.0.1:34108.service - OpenSSH per-connection server daemon (10.0.0.1:34108). Jul 9 13:13:12.507964 sshd[5420]: Accepted publickey for core from 10.0.0.1 port 34108 ssh2: RSA SHA256:HAtV86zLPP9RIVggm/FTobjBgXFwFP5d+SRakJDDvWM Jul 9 13:13:12.510202 sshd-session[5420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:13:12.517514 systemd-logind[1562]: New session 16 of user core. Jul 9 13:13:12.524484 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 9 13:13:12.711590 sshd[5427]: Connection closed by 10.0.0.1 port 34108 Jul 9 13:13:12.713689 sshd-session[5420]: pam_unix(sshd:session): session closed for user core Jul 9 13:13:12.721590 systemd-logind[1562]: Session 16 logged out. 
Waiting for processes to exit. Jul 9 13:13:12.721944 systemd[1]: sshd@15-10.0.0.120:22-10.0.0.1:34108.service: Deactivated successfully. Jul 9 13:13:12.724457 systemd[1]: session-16.scope: Deactivated successfully. Jul 9 13:13:12.727379 systemd-logind[1562]: Removed session 16. Jul 9 13:13:13.093001 containerd[1582]: time="2025-07-09T13:13:13.092849369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:13:13.094017 containerd[1582]: time="2025-07-09T13:13:13.093960525Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 9 13:13:13.096877 containerd[1582]: time="2025-07-09T13:13:13.096801177Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:13:13.099632 containerd[1582]: time="2025-07-09T13:13:13.099587624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 13:13:13.100441 containerd[1582]: time="2025-07-09T13:13:13.100397651Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.904514659s" Jul 9 13:13:13.100441 containerd[1582]: time="2025-07-09T13:13:13.100431396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 9 13:13:13.270260 containerd[1582]: 
time="2025-07-09T13:13:13.270173149Z" level=info msg="CreateContainer within sandbox \"156d67891695bd34c204672c5f3126d93eaf2fa29f6b65221e6789a39d32a412\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 9 13:13:13.853977 containerd[1582]: time="2025-07-09T13:13:13.853911750Z" level=info msg="Container 4f885001531a5573a9a72e3a326b5c023bd84bcea168280192b60b6df865aa3d: CDI devices from CRI Config.CDIDevices: []" Jul 9 13:13:14.588942 containerd[1582]: time="2025-07-09T13:13:14.588881312Z" level=info msg="CreateContainer within sandbox \"156d67891695bd34c204672c5f3126d93eaf2fa29f6b65221e6789a39d32a412\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"4f885001531a5573a9a72e3a326b5c023bd84bcea168280192b60b6df865aa3d\"" Jul 9 13:13:14.589423 containerd[1582]: time="2025-07-09T13:13:14.589396862Z" level=info msg="StartContainer for \"4f885001531a5573a9a72e3a326b5c023bd84bcea168280192b60b6df865aa3d\"" Jul 9 13:13:14.590604 containerd[1582]: time="2025-07-09T13:13:14.590557372Z" level=info msg="connecting to shim 4f885001531a5573a9a72e3a326b5c023bd84bcea168280192b60b6df865aa3d" address="unix:///run/containerd/s/75737401c68a92a6288badb47353833fe82372a60b61de2e821f14aa962d291d" protocol=ttrpc version=3 Jul 9 13:13:14.625432 systemd[1]: Started cri-containerd-4f885001531a5573a9a72e3a326b5c023bd84bcea168280192b60b6df865aa3d.scope - libcontainer container 4f885001531a5573a9a72e3a326b5c023bd84bcea168280192b60b6df865aa3d. 
Jul 9 13:13:14.677801 containerd[1582]: time="2025-07-09T13:13:14.677722339Z" level=info msg="StartContainer for \"4f885001531a5573a9a72e3a326b5c023bd84bcea168280192b60b6df865aa3d\" returns successfully" Jul 9 13:13:15.384526 kubelet[2740]: I0709 13:13:15.384455 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-6bz26" podStartSLOduration=38.565245403 podStartE2EDuration="51.384437547s" podCreationTimestamp="2025-07-09 13:12:24 +0000 UTC" firstStartedPulling="2025-07-09 13:13:00.282352833 +0000 UTC m=+61.323400176" lastFinishedPulling="2025-07-09 13:13:13.101544977 +0000 UTC m=+74.142592320" observedRunningTime="2025-07-09 13:13:15.382608085 +0000 UTC m=+76.423655428" watchObservedRunningTime="2025-07-09 13:13:15.384437547 +0000 UTC m=+76.425484880" Jul 9 13:13:15.454990 containerd[1582]: time="2025-07-09T13:13:15.454943319Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4f885001531a5573a9a72e3a326b5c023bd84bcea168280192b60b6df865aa3d\" id:\"bdf7591faa0b431fa27631d9b4641d01af13eee5fd9e59fac3b5a36e474150de\" pid:5491 exit_status:1 exited_at:{seconds:1752066795 nanos:454483446}" Jul 9 13:13:16.469374 containerd[1582]: time="2025-07-09T13:13:16.469315805Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4f885001531a5573a9a72e3a326b5c023bd84bcea168280192b60b6df865aa3d\" id:\"3daaff63bc478cb52fbdb55d67378f75e3db5a13d247c75e9b3a169c47fa911b\" pid:5516 exited_at:{seconds:1752066796 nanos:468643714}" Jul 9 13:13:17.724136 systemd[1]: Started sshd@16-10.0.0.120:22-10.0.0.1:34116.service - OpenSSH per-connection server daemon (10.0.0.1:34116). 
Jul 9 13:13:17.823404 sshd[5531]: Accepted publickey for core from 10.0.0.1 port 34116 ssh2: RSA SHA256:HAtV86zLPP9RIVggm/FTobjBgXFwFP5d+SRakJDDvWM Jul 9 13:13:17.825002 sshd-session[5531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:13:17.829277 systemd-logind[1562]: New session 17 of user core. Jul 9 13:13:17.844372 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 9 13:13:17.973430 sshd[5534]: Connection closed by 10.0.0.1 port 34116 Jul 9 13:13:17.973804 sshd-session[5531]: pam_unix(sshd:session): session closed for user core Jul 9 13:13:17.993472 systemd[1]: sshd@16-10.0.0.120:22-10.0.0.1:34116.service: Deactivated successfully. Jul 9 13:13:17.995564 systemd[1]: session-17.scope: Deactivated successfully. Jul 9 13:13:17.996508 systemd-logind[1562]: Session 17 logged out. Waiting for processes to exit. Jul 9 13:13:18.000398 systemd[1]: Started sshd@17-10.0.0.120:22-10.0.0.1:34128.service - OpenSSH per-connection server daemon (10.0.0.1:34128). Jul 9 13:13:18.001091 systemd-logind[1562]: Removed session 17. Jul 9 13:13:18.050590 sshd[5547]: Accepted publickey for core from 10.0.0.1 port 34128 ssh2: RSA SHA256:HAtV86zLPP9RIVggm/FTobjBgXFwFP5d+SRakJDDvWM Jul 9 13:13:18.052220 sshd-session[5547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:13:18.056860 systemd-logind[1562]: New session 18 of user core. Jul 9 13:13:18.057999 kubelet[2740]: E0709 13:13:18.057902 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:13:18.062412 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jul 9 13:13:18.235801 containerd[1582]: time="2025-07-09T13:13:18.235705527Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c2ec18cedbac631fa200db9637750537f2c1a7a66ecb0692955788e8a9559384\" id:\"ab817d7c3f09ab9d966fff3339c90e5b96c7c2e5da2be16c17c3d232b60f53cf\" pid:5568 exited_at:{seconds:1752066798 nanos:235426041}" Jul 9 13:13:18.464018 sshd[5550]: Connection closed by 10.0.0.1 port 34128 Jul 9 13:13:18.464478 sshd-session[5547]: pam_unix(sshd:session): session closed for user core Jul 9 13:13:18.475536 systemd[1]: sshd@17-10.0.0.120:22-10.0.0.1:34128.service: Deactivated successfully. Jul 9 13:13:18.478428 systemd[1]: session-18.scope: Deactivated successfully. Jul 9 13:13:18.479465 systemd-logind[1562]: Session 18 logged out. Waiting for processes to exit. Jul 9 13:13:18.484162 systemd[1]: Started sshd@18-10.0.0.120:22-10.0.0.1:60220.service - OpenSSH per-connection server daemon (10.0.0.1:60220). Jul 9 13:13:18.484902 systemd-logind[1562]: Removed session 18. Jul 9 13:13:18.542428 sshd[5584]: Accepted publickey for core from 10.0.0.1 port 60220 ssh2: RSA SHA256:HAtV86zLPP9RIVggm/FTobjBgXFwFP5d+SRakJDDvWM Jul 9 13:13:18.543794 sshd-session[5584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:13:18.548118 systemd-logind[1562]: New session 19 of user core. Jul 9 13:13:18.564353 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 9 13:13:19.600174 sshd[5587]: Connection closed by 10.0.0.1 port 60220 Jul 9 13:13:19.599865 sshd-session[5584]: pam_unix(sshd:session): session closed for user core Jul 9 13:13:19.614681 systemd[1]: sshd@18-10.0.0.120:22-10.0.0.1:60220.service: Deactivated successfully. Jul 9 13:13:19.621335 systemd[1]: session-19.scope: Deactivated successfully. Jul 9 13:13:19.623270 systemd-logind[1562]: Session 19 logged out. Waiting for processes to exit. Jul 9 13:13:19.629518 systemd-logind[1562]: Removed session 19. 
Jul 9 13:13:19.632008 systemd[1]: Started sshd@19-10.0.0.120:22-10.0.0.1:60236.service - OpenSSH per-connection server daemon (10.0.0.1:60236). Jul 9 13:13:19.680173 sshd[5605]: Accepted publickey for core from 10.0.0.1 port 60236 ssh2: RSA SHA256:HAtV86zLPP9RIVggm/FTobjBgXFwFP5d+SRakJDDvWM Jul 9 13:13:19.681438 sshd-session[5605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:13:19.685670 systemd-logind[1562]: New session 20 of user core. Jul 9 13:13:19.694372 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 9 13:13:20.213636 sshd[5609]: Connection closed by 10.0.0.1 port 60236 Jul 9 13:13:20.214044 sshd-session[5605]: pam_unix(sshd:session): session closed for user core Jul 9 13:13:20.224659 systemd[1]: sshd@19-10.0.0.120:22-10.0.0.1:60236.service: Deactivated successfully. Jul 9 13:13:20.228092 systemd[1]: session-20.scope: Deactivated successfully. Jul 9 13:13:20.229066 systemd-logind[1562]: Session 20 logged out. Waiting for processes to exit. Jul 9 13:13:20.232998 systemd-logind[1562]: Removed session 20. Jul 9 13:13:20.235038 systemd[1]: Started sshd@20-10.0.0.120:22-10.0.0.1:60244.service - OpenSSH per-connection server daemon (10.0.0.1:60244). Jul 9 13:13:20.292417 sshd[5623]: Accepted publickey for core from 10.0.0.1 port 60244 ssh2: RSA SHA256:HAtV86zLPP9RIVggm/FTobjBgXFwFP5d+SRakJDDvWM Jul 9 13:13:20.293804 sshd-session[5623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:13:20.298584 systemd-logind[1562]: New session 21 of user core. Jul 9 13:13:20.313399 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 9 13:13:20.419813 sshd[5626]: Connection closed by 10.0.0.1 port 60244 Jul 9 13:13:20.420149 sshd-session[5623]: pam_unix(sshd:session): session closed for user core Jul 9 13:13:20.424169 systemd[1]: sshd@20-10.0.0.120:22-10.0.0.1:60244.service: Deactivated successfully. 
Jul 9 13:13:20.426110 systemd[1]: session-21.scope: Deactivated successfully. Jul 9 13:13:20.426797 systemd-logind[1562]: Session 21 logged out. Waiting for processes to exit. Jul 9 13:13:20.427944 systemd-logind[1562]: Removed session 21. Jul 9 13:13:21.385109 containerd[1582]: time="2025-07-09T13:13:21.385046758Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c60fbab2e6ac953b25430243167fdacf82e3508d839c83d833531eb43285c90e\" id:\"0e6212c826c1a0db5419d073c7a97d3753ff83cf92f1040d489031ca7dc098f7\" pid:5651 exited_at:{seconds:1752066801 nanos:384571789}" Jul 9 13:13:25.435669 systemd[1]: Started sshd@21-10.0.0.120:22-10.0.0.1:60254.service - OpenSSH per-connection server daemon (10.0.0.1:60254). Jul 9 13:13:25.503291 sshd[5670]: Accepted publickey for core from 10.0.0.1 port 60254 ssh2: RSA SHA256:HAtV86zLPP9RIVggm/FTobjBgXFwFP5d+SRakJDDvWM Jul 9 13:13:25.505484 sshd-session[5670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:13:25.511320 systemd-logind[1562]: New session 22 of user core. Jul 9 13:13:25.519473 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 9 13:13:25.682448 sshd[5673]: Connection closed by 10.0.0.1 port 60254 Jul 9 13:13:25.682805 sshd-session[5670]: pam_unix(sshd:session): session closed for user core Jul 9 13:13:25.687627 systemd-logind[1562]: Session 22 logged out. Waiting for processes to exit. Jul 9 13:13:25.688178 systemd[1]: sshd@21-10.0.0.120:22-10.0.0.1:60254.service: Deactivated successfully. Jul 9 13:13:25.691796 systemd[1]: session-22.scope: Deactivated successfully. Jul 9 13:13:25.697020 systemd-logind[1562]: Removed session 22. Jul 9 13:13:30.702124 systemd[1]: Started sshd@22-10.0.0.120:22-10.0.0.1:55032.service - OpenSSH per-connection server daemon (10.0.0.1:55032). 
Jul 9 13:13:30.820421 sshd[5687]: Accepted publickey for core from 10.0.0.1 port 55032 ssh2: RSA SHA256:HAtV86zLPP9RIVggm/FTobjBgXFwFP5d+SRakJDDvWM Jul 9 13:13:30.822450 sshd-session[5687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:13:30.827099 systemd-logind[1562]: New session 23 of user core. Jul 9 13:13:30.836369 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 9 13:13:30.992227 sshd[5691]: Connection closed by 10.0.0.1 port 55032 Jul 9 13:13:30.992529 sshd-session[5687]: pam_unix(sshd:session): session closed for user core Jul 9 13:13:30.997820 systemd[1]: sshd@22-10.0.0.120:22-10.0.0.1:55032.service: Deactivated successfully. Jul 9 13:13:31.000024 systemd[1]: session-23.scope: Deactivated successfully. Jul 9 13:13:31.001078 systemd-logind[1562]: Session 23 logged out. Waiting for processes to exit. Jul 9 13:13:31.002370 systemd-logind[1562]: Removed session 23. Jul 9 13:13:32.355993 containerd[1582]: time="2025-07-09T13:13:32.355930950Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c2ec18cedbac631fa200db9637750537f2c1a7a66ecb0692955788e8a9559384\" id:\"c91b087c3edf422aed83b836ae21dd950e198fff84f7dea9f244f7eb58a27370\" pid:5721 exited_at:{seconds:1752066812 nanos:355703026}" Jul 9 13:13:34.058272 kubelet[2740]: E0709 13:13:34.058216 2740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 13:13:36.006394 systemd[1]: Started sshd@23-10.0.0.120:22-10.0.0.1:55040.service - OpenSSH per-connection server daemon (10.0.0.1:55040). 
Jul 9 13:13:36.085700 sshd[5733]: Accepted publickey for core from 10.0.0.1 port 55040 ssh2: RSA SHA256:HAtV86zLPP9RIVggm/FTobjBgXFwFP5d+SRakJDDvWM Jul 9 13:13:36.089575 sshd-session[5733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 13:13:36.102798 systemd-logind[1562]: New session 24 of user core. Jul 9 13:13:36.107447 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 9 13:13:36.347372 sshd[5736]: Connection closed by 10.0.0.1 port 55040 Jul 9 13:13:36.348556 sshd-session[5733]: pam_unix(sshd:session): session closed for user core Jul 9 13:13:36.355823 systemd[1]: sshd@23-10.0.0.120:22-10.0.0.1:55040.service: Deactivated successfully. Jul 9 13:13:36.356383 systemd-logind[1562]: Session 24 logged out. Waiting for processes to exit. Jul 9 13:13:36.361282 systemd[1]: session-24.scope: Deactivated successfully. Jul 9 13:13:36.363611 systemd-logind[1562]: Removed session 24.