Jul 1 08:43:39.837269 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Jun 30 19:26:54 -00 2025
Jul 1 08:43:39.837296 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=03b744fdab9d0c2a6ce16909d1444c286b74402b7ab027472687ca33469d417f
Jul 1 08:43:39.837311 kernel: BIOS-provided physical RAM map:
Jul 1 08:43:39.837319 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 1 08:43:39.837328 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 1 08:43:39.837336 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 1 08:43:39.837346 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 1 08:43:39.837355 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 1 08:43:39.837371 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jul 1 08:43:39.837380 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jul 1 08:43:39.837389 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jul 1 08:43:39.837397 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jul 1 08:43:39.837406 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jul 1 08:43:39.837414 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jul 1 08:43:39.837428 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jul 1 08:43:39.837438 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 1 08:43:39.837447 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jul 1 08:43:39.837456 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jul 1 08:43:39.837465 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jul 1 08:43:39.837475 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jul 1 08:43:39.837484 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jul 1 08:43:39.837493 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 1 08:43:39.837502 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 1 08:43:39.837511 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 1 08:43:39.837520 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jul 1 08:43:39.837825 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 1 08:43:39.837837 kernel: NX (Execute Disable) protection: active
Jul 1 08:43:39.837848 kernel: APIC: Static calls initialized
Jul 1 08:43:39.837859 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jul 1 08:43:39.837868 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jul 1 08:43:39.837877 kernel: extended physical RAM map:
Jul 1 08:43:39.837886 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 1 08:43:39.837895 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 1 08:43:39.837904 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 1 08:43:39.837913 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 1 08:43:39.837922 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 1 08:43:39.837935 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jul 1 08:43:39.837944 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jul 1 08:43:39.837953 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jul 1 08:43:39.837962 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jul 1 08:43:39.837975 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jul 1 08:43:39.837984 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jul 1 08:43:39.837996 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jul 1 08:43:39.838005 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jul 1 08:43:39.838015 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jul 1 08:43:39.838024 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jul 1 08:43:39.838033 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jul 1 08:43:39.838042 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 1 08:43:39.838052 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jul 1 08:43:39.838061 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jul 1 08:43:39.838070 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jul 1 08:43:39.838079 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jul 1 08:43:39.838091 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jul 1 08:43:39.838100 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 1 08:43:39.838109 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 1 08:43:39.838118 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 1 08:43:39.838127 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jul 1 08:43:39.838135 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 1 08:43:39.838146 kernel: efi: EFI v2.7 by EDK II
Jul 1 08:43:39.838154 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jul 1 08:43:39.838161 kernel: random: crng init done
Jul 1 08:43:39.838168 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jul 1 08:43:39.838175 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jul 1 08:43:39.838185 kernel: secureboot: Secure boot disabled
Jul 1 08:43:39.838192 kernel: SMBIOS 2.8 present.
Jul 1 08:43:39.838199 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jul 1 08:43:39.838206 kernel: DMI: Memory slots populated: 1/1
Jul 1 08:43:39.838213 kernel: Hypervisor detected: KVM
Jul 1 08:43:39.838220 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 1 08:43:39.838227 kernel: kvm-clock: using sched offset of 4265193494 cycles
Jul 1 08:43:39.838235 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 1 08:43:39.838242 kernel: tsc: Detected 2794.750 MHz processor
Jul 1 08:43:39.838250 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 1 08:43:39.838257 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 1 08:43:39.838267 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jul 1 08:43:39.838274 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 1 08:43:39.838281 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 1 08:43:39.838289 kernel: Using GB pages for direct mapping
Jul 1 08:43:39.838296 kernel: ACPI: Early table checksum verification disabled
Jul 1 08:43:39.838303 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jul 1 08:43:39.838311 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jul 1 08:43:39.838318 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 1 08:43:39.838326 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 1 08:43:39.838335 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jul 1 08:43:39.838342 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 1 08:43:39.838350 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 1 08:43:39.838357 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 1 08:43:39.838365 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 1 08:43:39.838372 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 1 08:43:39.838379 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jul 1 08:43:39.838387 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jul 1 08:43:39.838396 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jul 1 08:43:39.838403 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jul 1 08:43:39.838411 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jul 1 08:43:39.838418 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jul 1 08:43:39.838425 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jul 1 08:43:39.838432 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jul 1 08:43:39.838440 kernel: No NUMA configuration found
Jul 1 08:43:39.838447 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jul 1 08:43:39.838454 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jul 1 08:43:39.838462 kernel: Zone ranges:
Jul 1 08:43:39.838472 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 1 08:43:39.838479 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jul 1 08:43:39.838486 kernel: Normal empty
Jul 1 08:43:39.838493 kernel: Device empty
Jul 1 08:43:39.838501 kernel: Movable zone start for each node
Jul 1 08:43:39.838508 kernel: Early memory node ranges
Jul 1 08:43:39.838515 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 1 08:43:39.838522 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jul 1 08:43:39.838529 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jul 1 08:43:39.838539 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jul 1 08:43:39.838546 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jul 1 08:43:39.838553 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jul 1 08:43:39.838560 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jul 1 08:43:39.838581 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jul 1 08:43:39.838599 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jul 1 08:43:39.838610 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 1 08:43:39.838623 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 1 08:43:39.838643 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jul 1 08:43:39.838658 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 1 08:43:39.838668 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jul 1 08:43:39.838678 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jul 1 08:43:39.838688 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jul 1 08:43:39.838711 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jul 1 08:43:39.838721 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jul 1 08:43:39.838731 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 1 08:43:39.838741 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 1 08:43:39.838753 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 1 08:43:39.838776 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 1 08:43:39.838787 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 1 08:43:39.838797 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 1 08:43:39.838807 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 1 08:43:39.838817 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 1 08:43:39.838827 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 1 08:43:39.838837 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 1 08:43:39.838846 kernel: TSC deadline timer available
Jul 1 08:43:39.838859 kernel: CPU topo: Max. logical packages: 1
Jul 1 08:43:39.838869 kernel: CPU topo: Max. logical dies: 1
Jul 1 08:43:39.838879 kernel: CPU topo: Max. dies per package: 1
Jul 1 08:43:39.838889 kernel: CPU topo: Max. threads per core: 1
Jul 1 08:43:39.838899 kernel: CPU topo: Num. cores per package: 4
Jul 1 08:43:39.838908 kernel: CPU topo: Num. threads per package: 4
Jul 1 08:43:39.838918 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jul 1 08:43:39.838928 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 1 08:43:39.838938 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 1 08:43:39.838947 kernel: kvm-guest: setup PV sched yield
Jul 1 08:43:39.838960 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jul 1 08:43:39.838970 kernel: Booting paravirtualized kernel on KVM
Jul 1 08:43:39.838980 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 1 08:43:39.838990 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 1 08:43:39.839000 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jul 1 08:43:39.839008 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jul 1 08:43:39.839015 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 1 08:43:39.839022 kernel: kvm-guest: PV spinlocks enabled
Jul 1 08:43:39.839030 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 1 08:43:39.839041 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=03b744fdab9d0c2a6ce16909d1444c286b74402b7ab027472687ca33469d417f
Jul 1 08:43:39.839050 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 1 08:43:39.839057 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 1 08:43:39.839065 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 1 08:43:39.839072 kernel: Fallback order for Node 0: 0
Jul 1 08:43:39.839080 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jul 1 08:43:39.839087 kernel: Policy zone: DMA32
Jul 1 08:43:39.839095 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 1 08:43:39.839104 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 1 08:43:39.839112 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 1 08:43:39.839119 kernel: ftrace: allocated 157 pages with 5 groups
Jul 1 08:43:39.839127 kernel: Dynamic Preempt: voluntary
Jul 1 08:43:39.839134 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 1 08:43:39.839143 kernel: rcu: RCU event tracing is enabled.
Jul 1 08:43:39.839151 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 1 08:43:39.839158 kernel: Trampoline variant of Tasks RCU enabled.
Jul 1 08:43:39.839166 kernel: Rude variant of Tasks RCU enabled.
Jul 1 08:43:39.839176 kernel: Tracing variant of Tasks RCU enabled.
Jul 1 08:43:39.839184 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 1 08:43:39.839195 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 1 08:43:39.839203 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 1 08:43:39.839210 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 1 08:43:39.839218 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 1 08:43:39.839226 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 1 08:43:39.839233 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 1 08:43:39.839241 kernel: Console: colour dummy device 80x25
Jul 1 08:43:39.839250 kernel: printk: legacy console [ttyS0] enabled
Jul 1 08:43:39.839258 kernel: ACPI: Core revision 20240827
Jul 1 08:43:39.839265 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 1 08:43:39.839273 kernel: APIC: Switch to symmetric I/O mode setup
Jul 1 08:43:39.839281 kernel: x2apic enabled
Jul 1 08:43:39.839288 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 1 08:43:39.839296 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 1 08:43:39.839303 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 1 08:43:39.839311 kernel: kvm-guest: setup PV IPIs
Jul 1 08:43:39.839320 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 1 08:43:39.839328 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Jul 1 08:43:39.839336 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jul 1 08:43:39.839343 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 1 08:43:39.839351 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 1 08:43:39.839358 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 1 08:43:39.839366 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 1 08:43:39.839374 kernel: Spectre V2 : Mitigation: Retpolines
Jul 1 08:43:39.839381 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 1 08:43:39.839391 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 1 08:43:39.839398 kernel: RETBleed: Mitigation: untrained return thunk
Jul 1 08:43:39.839406 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 1 08:43:39.839414 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 1 08:43:39.839421 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 1 08:43:39.839429 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 1 08:43:39.839437 kernel: x86/bugs: return thunk changed
Jul 1 08:43:39.839444 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 1 08:43:39.839454 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 1 08:43:39.839461 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 1 08:43:39.839469 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 1 08:43:39.839476 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 1 08:43:39.839484 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 1 08:43:39.839492 kernel: Freeing SMP alternatives memory: 32K
Jul 1 08:43:39.839499 kernel: pid_max: default: 32768 minimum: 301
Jul 1 08:43:39.839507 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 1 08:43:39.839514 kernel: landlock: Up and running.
Jul 1 08:43:39.839524 kernel: SELinux: Initializing.
Jul 1 08:43:39.839532 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 1 08:43:39.839539 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 1 08:43:39.839547 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 1 08:43:39.839554 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 1 08:43:39.839562 kernel: ... version: 0
Jul 1 08:43:39.839569 kernel: ... bit width: 48
Jul 1 08:43:39.839577 kernel: ... generic registers: 6
Jul 1 08:43:39.839584 kernel: ... value mask: 0000ffffffffffff
Jul 1 08:43:39.839594 kernel: ... max period: 00007fffffffffff
Jul 1 08:43:39.839601 kernel: ... fixed-purpose events: 0
Jul 1 08:43:39.839609 kernel: ... event mask: 000000000000003f
Jul 1 08:43:39.839616 kernel: signal: max sigframe size: 1776
Jul 1 08:43:39.839623 kernel: rcu: Hierarchical SRCU implementation.
Jul 1 08:43:39.839631 kernel: rcu: Max phase no-delay instances is 400.
Jul 1 08:43:39.839642 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 1 08:43:39.839649 kernel: smp: Bringing up secondary CPUs ...
Jul 1 08:43:39.839657 kernel: smpboot: x86: Booting SMP configuration:
Jul 1 08:43:39.839668 kernel: .... node #0, CPUs: #1 #2 #3
Jul 1 08:43:39.839678 kernel: smp: Brought up 1 node, 4 CPUs
Jul 1 08:43:39.839688 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jul 1 08:43:39.839708 kernel: Memory: 2422668K/2565800K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54508K init, 2460K bss, 137196K reserved, 0K cma-reserved)
Jul 1 08:43:39.839719 kernel: devtmpfs: initialized
Jul 1 08:43:39.839729 kernel: x86/mm: Memory block size: 128MB
Jul 1 08:43:39.839739 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jul 1 08:43:39.839749 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jul 1 08:43:39.839758 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jul 1 08:43:39.839802 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jul 1 08:43:39.839815 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jul 1 08:43:39.839827 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jul 1 08:43:39.839840 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 1 08:43:39.839852 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 1 08:43:39.839865 kernel: pinctrl core: initialized pinctrl subsystem
Jul 1 08:43:39.839877 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 1 08:43:39.839889 kernel: audit: initializing netlink subsys (disabled)
Jul 1 08:43:39.839901 kernel: audit: type=2000 audit(1751359418.243:1): state=initialized audit_enabled=0 res=1
Jul 1 08:43:39.839916 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 1 08:43:39.839928 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 1 08:43:39.839941 kernel: cpuidle: using governor menu
Jul 1 08:43:39.839953 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 1 08:43:39.839965 kernel: dca service started, version 1.12.1
Jul 1 08:43:39.839977 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jul 1 08:43:39.839989 kernel: PCI: Using configuration type 1 for base access
Jul 1 08:43:39.840002 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 1 08:43:39.840014 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 1 08:43:39.840026 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 1 08:43:39.840035 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 1 08:43:39.840045 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 1 08:43:39.840054 kernel: ACPI: Added _OSI(Module Device)
Jul 1 08:43:39.840064 kernel: ACPI: Added _OSI(Processor Device)
Jul 1 08:43:39.840074 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 1 08:43:39.840084 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 1 08:43:39.840094 kernel: ACPI: Interpreter enabled
Jul 1 08:43:39.840104 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 1 08:43:39.840116 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 1 08:43:39.840127 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 1 08:43:39.840137 kernel: PCI: Using E820 reservations for host bridge windows
Jul 1 08:43:39.840147 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 1 08:43:39.840158 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 1 08:43:39.840401 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 1 08:43:39.840558 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 1 08:43:39.840724 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 1 08:43:39.840741 kernel: PCI host bridge to bus 0000:00
Jul 1 08:43:39.840983 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 1 08:43:39.841123 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 1 08:43:39.841260 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 1 08:43:39.841395 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jul 1 08:43:39.841530 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jul 1 08:43:39.841669 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jul 1 08:43:39.841829 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 1 08:43:39.842006 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 1 08:43:39.842167 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 1 08:43:39.842308 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jul 1 08:43:39.842453 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jul 1 08:43:39.842595 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jul 1 08:43:39.842755 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 1 08:43:39.842939 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 1 08:43:39.843082 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jul 1 08:43:39.843239 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jul 1 08:43:39.843377 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jul 1 08:43:39.843533 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 1 08:43:39.843677 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jul 1 08:43:39.843856 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jul 1 08:43:39.844029 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jul 1 08:43:39.844192 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 1 08:43:39.844340 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jul 1 08:43:39.844485 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jul 1 08:43:39.844631 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jul 1 08:43:39.844830 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jul 1 08:43:39.845006 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 1 08:43:39.845165 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 1 08:43:39.845335 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 1 08:43:39.845484 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jul 1 08:43:39.845636 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jul 1 08:43:39.845855 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 1 08:43:39.846013 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jul 1 08:43:39.846030 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 1 08:43:39.846041 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 1 08:43:39.846052 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 1 08:43:39.846063 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 1 08:43:39.846073 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 1 08:43:39.846084 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 1 08:43:39.846095 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 1 08:43:39.846110 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 1 08:43:39.846121 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 1 08:43:39.846132 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 1 08:43:39.846143 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 1 08:43:39.846153 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 1 08:43:39.846164 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 1 08:43:39.846175 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 1 08:43:39.846186 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 1 08:43:39.846196 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 1 08:43:39.846210 kernel: iommu: Default domain type: Translated
Jul 1 08:43:39.846221 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 1 08:43:39.846232 kernel: efivars: Registered efivars operations
Jul 1 08:43:39.846242 kernel: PCI: Using ACPI for IRQ routing
Jul 1 08:43:39.846252 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 1 08:43:39.846262 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jul 1 08:43:39.846272 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jul 1 08:43:39.846282 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jul 1 08:43:39.846292 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jul 1 08:43:39.846305 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jul 1 08:43:39.846315 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jul 1 08:43:39.846326 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jul 1 08:43:39.846336 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jul 1 08:43:39.846487 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 1 08:43:39.846635 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 1 08:43:39.846811 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 1 08:43:39.846827 kernel: vgaarb: loaded
Jul 1 08:43:39.846842 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 1 08:43:39.846852 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 1 08:43:39.846863 kernel: clocksource: Switched to clocksource kvm-clock
Jul 1 08:43:39.846873 kernel: VFS: Disk quotas dquot_6.6.0
Jul 1 08:43:39.846884 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 1 08:43:39.846894 kernel: pnp: PnP ACPI init
Jul 1 08:43:39.847079 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jul 1 08:43:39.847114 kernel: pnp: PnP ACPI: found 6 devices
Jul 1 08:43:39.847131 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 1 08:43:39.847141 kernel: NET: Registered PF_INET protocol family
Jul 1 08:43:39.847152 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 1 08:43:39.847163 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 1 08:43:39.847174 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 1 08:43:39.847184 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 1 08:43:39.847195 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 1 08:43:39.847205 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 1 08:43:39.847215 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 1 08:43:39.847229 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 1 08:43:39.847240 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 1 08:43:39.847250 kernel: NET: Registered PF_XDP protocol family
Jul 1 08:43:39.847396 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jul 1 08:43:39.847535 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jul 1 08:43:39.847666 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 1 08:43:39.847859 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 1 08:43:39.848006 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 1 08:43:39.848127 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jul 1 08:43:39.848248 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jul 1 08:43:39.848354 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jul 1 08:43:39.848364 kernel: PCI: CLS 0 bytes, default 64
Jul 1 08:43:39.848372 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Jul 1 08:43:39.848381 kernel: Initialise system trusted keyrings
Jul 1 08:43:39.848389 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 1 08:43:39.848396 kernel: Key type asymmetric registered
Jul 1 08:43:39.848408 kernel: Asymmetric key parser 'x509' registered
Jul 1 08:43:39.848416 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 1 08:43:39.848424 kernel: io scheduler mq-deadline registered
Jul 1 08:43:39.848434 kernel: io scheduler kyber registered
Jul 1 08:43:39.848442 kernel: io scheduler bfq registered
Jul 1 08:43:39.848451 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 1 08:43:39.848461 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 1 08:43:39.848469 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 1 08:43:39.848477 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 1 08:43:39.848485 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 1 08:43:39.848494 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 1 08:43:39.848502 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 1 08:43:39.848510 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 1 08:43:39.848518 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 1 08:43:39.848641 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 1 08:43:39.848807 kernel: rtc_cmos 00:04: registered as rtc0
Jul 1 08:43:39.848944 kernel: rtc_cmos 00:04: setting system clock to 2025-07-01T08:43:39 UTC (1751359419)
Jul 1 08:43:39.849051 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 1 08:43:39.849061 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 1 08:43:39.849070 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 1 08:43:39.849078 kernel: efifb: probing for efifb
Jul 1 08:43:39.849086 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jul 1 08:43:39.849094 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jul 1 08:43:39.849105 kernel: efifb: scrolling: redraw
Jul 1 08:43:39.849113 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 1 08:43:39.849121 kernel: Console: switching to colour frame buffer device 160x50
Jul 1 08:43:39.849129 kernel: fb0: EFI VGA frame buffer device
Jul 1 08:43:39.849137 kernel: pstore: Using crash dump compression: deflate
Jul 1 08:43:39.849145 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 1 08:43:39.849153 kernel: NET: Registered PF_INET6 protocol family
Jul 1 08:43:39.849161 kernel: Segment Routing with IPv6
Jul 1 08:43:39.849169 kernel: In-situ OAM (IOAM) with IPv6
Jul 1 08:43:39.849179 kernel: NET: Registered PF_PACKET protocol family
Jul 1 08:43:39.849187 kernel: Key type dns_resolver registered
Jul 1 08:43:39.849194 kernel: IPI shorthand broadcast: enabled
Jul 1 08:43:39.849202 kernel: sched_clock: Marking stable (3471002930, 158653084)->(3713085234, -83429220)
Jul 1 08:43:39.849210 kernel: registered taskstats version 1
Jul 1 08:43:39.849218 kernel: Loading compiled-in X.509 certificates
Jul 1 08:43:39.849227 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: bdab85da21e6e40e781d68d3bf17f0a40ee7357c'
Jul 1 08:43:39.849234 kernel: Demotion targets for Node 0: null
Jul 1 08:43:39.849242 kernel: Key type .fscrypt registered
Jul 1 08:43:39.849252 kernel: Key type fscrypt-provisioning registered
Jul 1 08:43:39.849260 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 1 08:43:39.849268 kernel: ima: Allocated hash algorithm: sha1 Jul 1 08:43:39.849276 kernel: ima: No architecture policies found Jul 1 08:43:39.849283 kernel: clk: Disabling unused clocks Jul 1 08:43:39.849291 kernel: Warning: unable to open an initial console. Jul 1 08:43:39.849299 kernel: Freeing unused kernel image (initmem) memory: 54508K Jul 1 08:43:39.849307 kernel: Write protecting the kernel read-only data: 24576k Jul 1 08:43:39.849315 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 1 08:43:39.849325 kernel: Run /init as init process Jul 1 08:43:39.849333 kernel: with arguments: Jul 1 08:43:39.849340 kernel: /init Jul 1 08:43:39.849348 kernel: with environment: Jul 1 08:43:39.849356 kernel: HOME=/ Jul 1 08:43:39.849363 kernel: TERM=linux Jul 1 08:43:39.849371 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 1 08:43:39.849380 systemd[1]: Successfully made /usr/ read-only. Jul 1 08:43:39.849391 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 1 08:43:39.849403 systemd[1]: Detected virtualization kvm. Jul 1 08:43:39.849411 systemd[1]: Detected architecture x86-64. Jul 1 08:43:39.849419 systemd[1]: Running in initrd. Jul 1 08:43:39.849427 systemd[1]: No hostname configured, using default hostname. Jul 1 08:43:39.849435 systemd[1]: Hostname set to . Jul 1 08:43:39.849443 systemd[1]: Initializing machine ID from VM UUID. Jul 1 08:43:39.849452 systemd[1]: Queued start job for default target initrd.target. Jul 1 08:43:39.849462 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jul 1 08:43:39.849470 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 1 08:43:39.849479 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 1 08:43:39.849488 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 1 08:43:39.849496 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 1 08:43:39.849506 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 1 08:43:39.849516 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 1 08:43:39.849526 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 1 08:43:39.849535 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 1 08:43:39.849546 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 1 08:43:39.849554 systemd[1]: Reached target paths.target - Path Units. Jul 1 08:43:39.849562 systemd[1]: Reached target slices.target - Slice Units. Jul 1 08:43:39.849571 systemd[1]: Reached target swap.target - Swaps. Jul 1 08:43:39.849579 systemd[1]: Reached target timers.target - Timer Units. Jul 1 08:43:39.849587 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 1 08:43:39.849598 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 1 08:43:39.849606 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 1 08:43:39.849615 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 1 08:43:39.849623 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 1 08:43:39.849631 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Jul 1 08:43:39.849640 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 1 08:43:39.849648 systemd[1]: Reached target sockets.target - Socket Units. Jul 1 08:43:39.849656 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 1 08:43:39.849665 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 1 08:43:39.849675 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 1 08:43:39.849684 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 1 08:43:39.849701 systemd[1]: Starting systemd-fsck-usr.service... Jul 1 08:43:39.849709 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 1 08:43:39.849718 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 1 08:43:39.849726 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 1 08:43:39.849735 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 1 08:43:39.849746 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 1 08:43:39.849755 systemd[1]: Finished systemd-fsck-usr.service. Jul 1 08:43:39.849784 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 1 08:43:39.849821 systemd-journald[219]: Collecting audit messages is disabled. Jul 1 08:43:39.849847 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 1 08:43:39.849856 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 1 08:43:39.849865 systemd-journald[219]: Journal started Jul 1 08:43:39.849886 systemd-journald[219]: Runtime Journal (/run/log/journal/4470415eff2046b0a5171c9e442c2ec9) is 6M, max 48.5M, 42.4M free. 
Jul 1 08:43:39.836027 systemd-modules-load[221]: Inserted module 'overlay' Jul 1 08:43:39.853290 systemd[1]: Started systemd-journald.service - Journal Service. Jul 1 08:43:39.856559 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 1 08:43:39.860990 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 1 08:43:39.865794 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 1 08:43:39.868227 systemd-modules-load[221]: Inserted module 'br_netfilter' Jul 1 08:43:39.869161 kernel: Bridge firewalling registered Jul 1 08:43:39.874525 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 1 08:43:39.874963 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 1 08:43:39.877037 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 1 08:43:39.887984 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 1 08:43:39.889716 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 1 08:43:39.890421 systemd-tmpfiles[242]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 1 08:43:39.896398 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 1 08:43:39.898228 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 1 08:43:39.903460 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 1 08:43:39.915095 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jul 1 08:43:39.936451 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=03b744fdab9d0c2a6ce16909d1444c286b74402b7ab027472687ca33469d417f Jul 1 08:43:39.951111 systemd-resolved[260]: Positive Trust Anchors: Jul 1 08:43:39.951129 systemd-resolved[260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 1 08:43:39.951163 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 1 08:43:39.953715 systemd-resolved[260]: Defaulting to hostname 'linux'. Jul 1 08:43:39.954877 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 1 08:43:39.960865 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 1 08:43:40.057812 kernel: SCSI subsystem initialized Jul 1 08:43:40.067803 kernel: Loading iSCSI transport class v2.0-870. Jul 1 08:43:40.078811 kernel: iscsi: registered transport (tcp) Jul 1 08:43:40.100815 kernel: iscsi: registered transport (qla4xxx) Jul 1 08:43:40.100905 kernel: QLogic iSCSI HBA Driver Jul 1 08:43:40.123473 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jul 1 08:43:40.149072 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 1 08:43:40.150419 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 1 08:43:40.204229 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 1 08:43:40.206295 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 1 08:43:40.270805 kernel: raid6: avx2x4 gen() 29750 MB/s Jul 1 08:43:40.287805 kernel: raid6: avx2x2 gen() 30591 MB/s Jul 1 08:43:40.304851 kernel: raid6: avx2x1 gen() 25344 MB/s Jul 1 08:43:40.304875 kernel: raid6: using algorithm avx2x2 gen() 30591 MB/s Jul 1 08:43:40.322862 kernel: raid6: .... xor() 19327 MB/s, rmw enabled Jul 1 08:43:40.322919 kernel: raid6: using avx2x2 recovery algorithm Jul 1 08:43:40.356792 kernel: xor: automatically using best checksumming function avx Jul 1 08:43:40.527795 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 1 08:43:40.535405 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 1 08:43:40.538115 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 1 08:43:40.579326 systemd-udevd[473]: Using default interface naming scheme 'v255'. Jul 1 08:43:40.584811 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 1 08:43:40.587944 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 1 08:43:40.619729 dracut-pre-trigger[481]: rd.md=0: removing MD RAID activation Jul 1 08:43:40.649213 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 1 08:43:40.650883 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 1 08:43:40.725539 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 1 08:43:40.730252 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jul 1 08:43:40.763797 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 1 08:43:40.766340 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 1 08:43:40.769838 kernel: cryptd: max_cpu_qlen set to 1000 Jul 1 08:43:40.776429 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 1 08:43:40.776489 kernel: GPT:9289727 != 19775487 Jul 1 08:43:40.776519 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 1 08:43:40.776543 kernel: GPT:9289727 != 19775487 Jul 1 08:43:40.776574 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 1 08:43:40.776606 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 1 08:43:40.798791 kernel: AES CTR mode by8 optimization enabled Jul 1 08:43:40.802541 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jul 1 08:43:40.807337 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 1 08:43:40.808799 kernel: libata version 3.00 loaded. Jul 1 08:43:40.808175 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 1 08:43:40.816578 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 1 08:43:40.819531 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 1 08:43:40.838890 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Jul 1 08:43:40.841794 kernel: ahci 0000:00:1f.2: version 3.0 Jul 1 08:43:40.848591 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 1 08:43:40.848629 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jul 1 08:43:40.852713 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jul 1 08:43:40.852970 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 1 08:43:40.854841 kernel: scsi host0: ahci Jul 1 08:43:40.855794 kernel: scsi host1: ahci Jul 1 08:43:40.856787 kernel: scsi host2: ahci Jul 1 08:43:40.857781 kernel: scsi host3: ahci Jul 1 08:43:40.857982 kernel: scsi host4: ahci Jul 1 08:43:40.858810 kernel: scsi host5: ahci Jul 1 08:43:40.859008 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 Jul 1 08:43:40.859517 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 1 08:43:40.864184 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 Jul 1 08:43:40.864198 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 Jul 1 08:43:40.864208 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 Jul 1 08:43:40.864218 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 Jul 1 08:43:40.864228 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 Jul 1 08:43:40.880423 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 1 08:43:40.898440 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 1 08:43:40.925477 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 1 08:43:40.925555 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Jul 1 08:43:40.936261 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 1 08:43:40.950046 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 1 08:43:40.950113 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 1 08:43:40.953399 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 1 08:43:40.967266 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 1 08:43:40.968701 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 1 08:43:40.989669 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 1 08:43:41.142054 disk-uuid[637]: Primary Header is updated. Jul 1 08:43:41.142054 disk-uuid[637]: Secondary Entries is updated. Jul 1 08:43:41.142054 disk-uuid[637]: Secondary Header is updated. Jul 1 08:43:41.146830 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 1 08:43:41.151816 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 1 08:43:41.176393 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 1 08:43:41.176473 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 1 08:43:41.176488 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 1 08:43:41.176512 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 1 08:43:41.177798 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 1 08:43:41.178788 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 1 08:43:41.180798 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 1 08:43:41.180825 kernel: ata3.00: applying bridge limits Jul 1 08:43:41.181785 kernel: ata3.00: configured for UDMA/100 Jul 1 08:43:41.184832 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 1 08:43:41.223804 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 1 08:43:41.224043 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 1 
08:43:41.240780 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 1 08:43:41.621530 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 1 08:43:41.624151 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 1 08:43:41.626833 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 1 08:43:41.629659 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 1 08:43:41.633157 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 1 08:43:41.662354 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 1 08:43:42.166956 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 1 08:43:42.167328 disk-uuid[643]: The operation has completed successfully. Jul 1 08:43:42.201734 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 1 08:43:42.201881 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 1 08:43:42.231552 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 1 08:43:42.267105 sh[672]: Success Jul 1 08:43:42.288892 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 1 08:43:42.288931 kernel: device-mapper: uevent: version 1.0.3 Jul 1 08:43:42.288951 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 1 08:43:42.297813 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 1 08:43:42.329361 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 1 08:43:42.331209 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 1 08:43:42.350031 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 1 08:43:42.355980 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 1 08:43:42.356007 kernel: BTRFS: device fsid aeab36fb-d8a9-440c-a872-a8cce0218739 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (684) Jul 1 08:43:42.357263 kernel: BTRFS info (device dm-0): first mount of filesystem aeab36fb-d8a9-440c-a872-a8cce0218739 Jul 1 08:43:42.358098 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 1 08:43:42.358111 kernel: BTRFS info (device dm-0): using free-space-tree Jul 1 08:43:42.362733 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 1 08:43:42.363181 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 1 08:43:42.365463 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 1 08:43:42.368387 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 1 08:43:42.369097 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 1 08:43:42.398733 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (715) Jul 1 08:43:42.398870 kernel: BTRFS info (device vda6): first mount of filesystem 583bafe8-d373-434e-a8d4-4cb362bb932b Jul 1 08:43:42.398900 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 1 08:43:42.399755 kernel: BTRFS info (device vda6): using free-space-tree Jul 1 08:43:42.408791 kernel: BTRFS info (device vda6): last unmount of filesystem 583bafe8-d373-434e-a8d4-4cb362bb932b Jul 1 08:43:42.410133 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 1 08:43:42.413162 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jul 1 08:43:42.504434 ignition[764]: Ignition 2.21.0 Jul 1 08:43:42.504456 ignition[764]: Stage: fetch-offline Jul 1 08:43:42.504503 ignition[764]: no configs at "/usr/lib/ignition/base.d" Jul 1 08:43:42.504517 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 1 08:43:42.504652 ignition[764]: parsed url from cmdline: "" Jul 1 08:43:42.509641 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 1 08:43:42.504658 ignition[764]: no config URL provided Jul 1 08:43:42.504665 ignition[764]: reading system config file "/usr/lib/ignition/user.ign" Jul 1 08:43:42.504678 ignition[764]: no config at "/usr/lib/ignition/user.ign" Jul 1 08:43:42.514977 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 1 08:43:42.504706 ignition[764]: op(1): [started] loading QEMU firmware config module Jul 1 08:43:42.504713 ignition[764]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 1 08:43:42.517793 ignition[764]: op(1): [finished] loading QEMU firmware config module Jul 1 08:43:42.558718 ignition[764]: parsing config with SHA512: b026d9f442d1338e88318e3884fb05f2db71722c38360f958cfdab678fd2a98be74724f9f6332747ca5bf87fb369a67094858ef788cc643e41b93b0d93e98e78 Jul 1 08:43:42.560756 systemd-networkd[861]: lo: Link UP Jul 1 08:43:42.560813 systemd-networkd[861]: lo: Gained carrier Jul 1 08:43:42.562366 systemd-networkd[861]: Enumeration completed Jul 1 08:43:42.562496 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 1 08:43:42.562709 systemd-networkd[861]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 1 08:43:42.562714 systemd-networkd[861]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 1 08:43:42.563999 systemd-networkd[861]: eth0: Link UP Jul 1 08:43:42.564004 systemd-networkd[861]: eth0: Gained carrier Jul 1 08:43:42.564026 systemd-networkd[861]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 1 08:43:42.578301 systemd[1]: Reached target network.target - Network. Jul 1 08:43:42.588708 ignition[764]: fetch-offline: fetch-offline passed Jul 1 08:43:42.588335 unknown[764]: fetched base config from "system" Jul 1 08:43:42.588777 ignition[764]: Ignition finished successfully Jul 1 08:43:42.588343 unknown[764]: fetched user config from "qemu" Jul 1 08:43:42.596093 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 1 08:43:42.598017 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 1 08:43:42.598874 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 1 08:43:42.606856 systemd-networkd[861]: eth0: DHCPv4 address 10.0.0.127/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 1 08:43:42.635148 ignition[865]: Ignition 2.21.0 Jul 1 08:43:42.635166 ignition[865]: Stage: kargs Jul 1 08:43:42.635349 ignition[865]: no configs at "/usr/lib/ignition/base.d" Jul 1 08:43:42.635360 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 1 08:43:42.636210 ignition[865]: kargs: kargs passed Jul 1 08:43:42.636264 ignition[865]: Ignition finished successfully Jul 1 08:43:42.642841 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 1 08:43:42.645970 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jul 1 08:43:42.674753 ignition[874]: Ignition 2.21.0 Jul 1 08:43:42.674782 ignition[874]: Stage: disks Jul 1 08:43:42.674930 ignition[874]: no configs at "/usr/lib/ignition/base.d" Jul 1 08:43:42.674943 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 1 08:43:42.676249 ignition[874]: disks: disks passed Jul 1 08:43:42.679205 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 1 08:43:42.676311 ignition[874]: Ignition finished successfully Jul 1 08:43:42.681045 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 1 08:43:42.682877 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 1 08:43:42.683083 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 1 08:43:42.683436 systemd[1]: Reached target sysinit.target - System Initialization. Jul 1 08:43:42.683822 systemd[1]: Reached target basic.target - Basic System. Jul 1 08:43:42.715664 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 1 08:43:42.753915 systemd-fsck[884]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 1 08:43:43.014027 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 1 08:43:43.015208 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 1 08:43:43.161803 kernel: EXT4-fs (vda9): mounted filesystem 18421243-07cc-41b2-b496-d6a2cef84352 r/w with ordered data mode. Quota mode: none. Jul 1 08:43:43.162472 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 1 08:43:43.163076 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 1 08:43:43.166381 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 1 08:43:43.168108 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 1 08:43:43.168409 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Jul 1 08:43:43.168449 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 1 08:43:43.168468 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 1 08:43:43.201974 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 1 08:43:43.204644 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 1 08:43:43.210444 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (892) Jul 1 08:43:43.210467 kernel: BTRFS info (device vda6): first mount of filesystem 583bafe8-d373-434e-a8d4-4cb362bb932b Jul 1 08:43:43.210480 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 1 08:43:43.210494 kernel: BTRFS info (device vda6): using free-space-tree Jul 1 08:43:43.214486 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 1 08:43:43.244797 initrd-setup-root[916]: cut: /sysroot/etc/passwd: No such file or directory Jul 1 08:43:43.249119 initrd-setup-root[923]: cut: /sysroot/etc/group: No such file or directory Jul 1 08:43:43.253540 initrd-setup-root[930]: cut: /sysroot/etc/shadow: No such file or directory Jul 1 08:43:43.258290 initrd-setup-root[937]: cut: /sysroot/etc/gshadow: No such file or directory Jul 1 08:43:43.344886 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 1 08:43:43.345988 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 1 08:43:43.349396 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 1 08:43:43.368053 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 1 08:43:43.369877 kernel: BTRFS info (device vda6): last unmount of filesystem 583bafe8-d373-434e-a8d4-4cb362bb932b Jul 1 08:43:43.383899 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 1 08:43:43.399909 ignition[1006]: INFO : Ignition 2.21.0 Jul 1 08:43:43.399909 ignition[1006]: INFO : Stage: mount Jul 1 08:43:43.401556 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 1 08:43:43.401556 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 1 08:43:43.404710 ignition[1006]: INFO : mount: mount passed Jul 1 08:43:43.417082 ignition[1006]: INFO : Ignition finished successfully Jul 1 08:43:43.407746 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 1 08:43:43.418056 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 1 08:43:43.448219 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 1 08:43:43.474258 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1018) Jul 1 08:43:43.474286 kernel: BTRFS info (device vda6): first mount of filesystem 583bafe8-d373-434e-a8d4-4cb362bb932b Jul 1 08:43:43.474297 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 1 08:43:43.475109 kernel: BTRFS info (device vda6): using free-space-tree Jul 1 08:43:43.478934 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 1 08:43:43.514065 ignition[1036]: INFO : Ignition 2.21.0 Jul 1 08:43:43.514065 ignition[1036]: INFO : Stage: files Jul 1 08:43:43.516252 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 1 08:43:43.516252 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 1 08:43:43.518934 ignition[1036]: DEBUG : files: compiled without relabeling support, skipping Jul 1 08:43:43.518934 ignition[1036]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 1 08:43:43.518934 ignition[1036]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 1 08:43:43.523888 ignition[1036]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 1 08:43:43.523888 ignition[1036]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 1 08:43:43.523888 ignition[1036]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 1 08:43:43.523888 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 1 08:43:43.523888 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jul 1 08:43:43.521578 unknown[1036]: wrote ssh authorized keys file for user: core Jul 1 08:43:43.556164 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 1 08:43:43.658715 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 1 08:43:43.658715 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 1 08:43:43.662603 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 1 
08:43:43.662603 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 1 08:43:43.662603 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 1 08:43:43.662603 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 1 08:43:43.662603 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 1 08:43:43.662603 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 1 08:43:43.662603 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 1 08:43:43.674901 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 1 08:43:43.674901 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 1 08:43:43.674901 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 1 08:43:43.680820 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 1 08:43:43.680820 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 1 08:43:43.680820 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jul 1 08:43:44.037942 systemd-networkd[861]: eth0: Gained IPv6LL Jul 1 08:43:44.353384 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 1 08:43:45.143447 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 1 08:43:45.143447 ignition[1036]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 1 08:43:45.147339 ignition[1036]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 1 08:43:45.152714 ignition[1036]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 1 08:43:45.152714 ignition[1036]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 1 08:43:45.152714 ignition[1036]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 1 08:43:45.159243 ignition[1036]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 1 08:43:45.161329 ignition[1036]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 1 08:43:45.161329 ignition[1036]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 1 08:43:45.164456 ignition[1036]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 1 08:43:45.233960 ignition[1036]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 1 08:43:45.239506 ignition[1036]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 1 08:43:45.241252 
ignition[1036]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 1 08:43:45.241252 ignition[1036]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 1 08:43:45.241252 ignition[1036]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 1 08:43:45.241252 ignition[1036]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 1 08:43:45.241252 ignition[1036]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 1 08:43:45.241252 ignition[1036]: INFO : files: files passed Jul 1 08:43:45.241252 ignition[1036]: INFO : Ignition finished successfully Jul 1 08:43:45.244909 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 1 08:43:45.248072 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 1 08:43:45.251527 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 1 08:43:45.261942 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 1 08:43:45.262122 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 1 08:43:45.265187 initrd-setup-root-after-ignition[1065]: grep: /sysroot/oem/oem-release: No such file or directory Jul 1 08:43:45.269673 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 1 08:43:45.271412 initrd-setup-root-after-ignition[1067]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 1 08:43:45.272995 initrd-setup-root-after-ignition[1071]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 1 08:43:45.276575 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Jul 1 08:43:45.278110 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 1 08:43:45.281376 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 1 08:43:45.332637 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 1 08:43:45.333839 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 1 08:43:45.336603 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 1 08:43:45.337722 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 1 08:43:45.339777 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 1 08:43:45.340837 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 1 08:43:45.384239 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 1 08:43:45.387148 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 1 08:43:45.418904 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 1 08:43:45.420191 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 1 08:43:45.422361 systemd[1]: Stopped target timers.target - Timer Units. Jul 1 08:43:45.424384 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 1 08:43:45.424527 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 1 08:43:45.426873 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 1 08:43:45.428309 systemd[1]: Stopped target basic.target - Basic System. Jul 1 08:43:45.430266 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 1 08:43:45.432265 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 1 08:43:45.434184 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Jul 1 08:43:45.436293 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 1 08:43:45.438474 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 1 08:43:45.440486 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 1 08:43:45.442725 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 1 08:43:45.444636 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 1 08:43:45.446752 systemd[1]: Stopped target swap.target - Swaps. Jul 1 08:43:45.448491 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 1 08:43:45.448631 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 1 08:43:45.450870 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 1 08:43:45.452247 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 1 08:43:45.454245 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 1 08:43:45.454415 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 1 08:43:45.456411 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 1 08:43:45.456549 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 1 08:43:45.458874 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 1 08:43:45.458992 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 1 08:43:45.460742 systemd[1]: Stopped target paths.target - Path Units. Jul 1 08:43:45.462442 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 1 08:43:45.465859 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 1 08:43:45.468047 systemd[1]: Stopped target slices.target - Slice Units. Jul 1 08:43:45.470062 systemd[1]: Stopped target sockets.target - Socket Units. 
Jul 1 08:43:45.471906 systemd[1]: iscsid.socket: Deactivated successfully. Jul 1 08:43:45.472052 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 1 08:43:45.473899 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 1 08:43:45.474006 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 1 08:43:45.476336 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 1 08:43:45.476455 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 1 08:43:45.478438 systemd[1]: ignition-files.service: Deactivated successfully. Jul 1 08:43:45.478548 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 1 08:43:45.481353 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 1 08:43:45.482416 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 1 08:43:45.482529 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 1 08:43:45.484587 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 1 08:43:45.486857 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 1 08:43:45.486990 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 1 08:43:45.489509 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 1 08:43:45.489620 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 1 08:43:45.500123 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 1 08:43:45.500252 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 1 08:43:45.615036 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jul 1 08:43:45.622790 ignition[1092]: INFO : Ignition 2.21.0 Jul 1 08:43:45.622790 ignition[1092]: INFO : Stage: umount Jul 1 08:43:45.625009 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 1 08:43:45.625009 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 1 08:43:45.627606 ignition[1092]: INFO : umount: umount passed Jul 1 08:43:45.627606 ignition[1092]: INFO : Ignition finished successfully Jul 1 08:43:45.628327 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 1 08:43:45.628451 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 1 08:43:45.630066 systemd[1]: Stopped target network.target - Network. Jul 1 08:43:45.632214 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 1 08:43:45.632268 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 1 08:43:45.638379 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 1 08:43:45.638441 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 1 08:43:45.638545 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 1 08:43:45.638595 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 1 08:43:45.639130 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 1 08:43:45.639170 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 1 08:43:45.639625 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 1 08:43:45.639823 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 1 08:43:45.644990 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 1 08:43:45.645146 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 1 08:43:45.649333 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 1 08:43:45.649624 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jul 1 08:43:45.649674 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 1 08:43:45.655224 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 1 08:43:45.660192 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 1 08:43:45.660332 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 1 08:43:45.663885 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 1 08:43:45.664079 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 1 08:43:45.665154 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 1 08:43:45.665192 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 1 08:43:45.670164 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 1 08:43:45.671101 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 1 08:43:45.671156 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 1 08:43:45.671245 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 1 08:43:45.671285 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 1 08:43:45.675195 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 1 08:43:45.675265 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 1 08:43:45.676546 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 1 08:43:45.677786 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 1 08:43:45.696077 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 1 08:43:45.696255 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 1 08:43:45.698609 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jul 1 08:43:45.698794 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 1 08:43:45.701272 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 1 08:43:45.701357 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 1 08:43:45.702572 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 1 08:43:45.702613 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 1 08:43:45.704007 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 1 08:43:45.704069 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 1 08:43:45.704779 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 1 08:43:45.704827 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 1 08:43:45.705651 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 1 08:43:45.705701 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 1 08:43:45.707637 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 1 08:43:45.714702 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 1 08:43:45.714846 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 1 08:43:45.719174 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 1 08:43:45.719315 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 1 08:43:45.722892 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 1 08:43:45.722951 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 1 08:43:45.732636 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 1 08:43:45.732754 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Jul 1 08:43:45.808848 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 1 08:43:45.808983 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 1 08:43:45.811867 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 1 08:43:45.812934 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 1 08:43:45.812998 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 1 08:43:45.817308 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 1 08:43:45.836880 systemd[1]: Switching root. Jul 1 08:43:45.882684 systemd-journald[219]: Journal stopped Jul 1 08:43:47.200552 systemd-journald[219]: Received SIGTERM from PID 1 (systemd). Jul 1 08:43:47.200654 kernel: SELinux: policy capability network_peer_controls=1 Jul 1 08:43:47.200691 kernel: SELinux: policy capability open_perms=1 Jul 1 08:43:47.200709 kernel: SELinux: policy capability extended_socket_class=1 Jul 1 08:43:47.200725 kernel: SELinux: policy capability always_check_network=0 Jul 1 08:43:47.200739 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 1 08:43:47.200903 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 1 08:43:47.200933 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 1 08:43:47.200948 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 1 08:43:47.200968 kernel: SELinux: policy capability userspace_initial_context=0 Jul 1 08:43:47.200984 kernel: audit: type=1403 audit(1751359426.376:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 1 08:43:47.201003 systemd[1]: Successfully loaded SELinux policy in 62.001ms. Jul 1 08:43:47.201028 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.874ms. 
Jul 1 08:43:47.201046 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 1 08:43:47.201061 systemd[1]: Detected virtualization kvm. Jul 1 08:43:47.201077 systemd[1]: Detected architecture x86-64. Jul 1 08:43:47.201092 systemd[1]: Detected first boot. Jul 1 08:43:47.201107 systemd[1]: Initializing machine ID from VM UUID. Jul 1 08:43:47.201122 zram_generator::config[1138]: No configuration found. Jul 1 08:43:47.201139 kernel: Guest personality initialized and is inactive Jul 1 08:43:47.201158 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 1 08:43:47.201172 kernel: Initialized host personality Jul 1 08:43:47.201186 kernel: NET: Registered PF_VSOCK protocol family Jul 1 08:43:47.201202 systemd[1]: Populated /etc with preset unit settings. Jul 1 08:43:47.201219 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 1 08:43:47.201234 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 1 08:43:47.201250 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 1 08:43:47.201265 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 1 08:43:47.201281 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 1 08:43:47.201301 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 1 08:43:47.201318 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 1 08:43:47.201341 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 1 08:43:47.201357 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
Jul 1 08:43:47.201373 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 1 08:43:47.201389 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 1 08:43:47.201404 systemd[1]: Created slice user.slice - User and Session Slice. Jul 1 08:43:47.201419 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 1 08:43:47.201442 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 1 08:43:47.201458 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 1 08:43:47.201480 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 1 08:43:47.201508 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 1 08:43:47.201525 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 1 08:43:47.201540 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 1 08:43:47.201555 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 1 08:43:47.201571 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 1 08:43:47.201589 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 1 08:43:47.201605 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 1 08:43:47.201621 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 1 08:43:47.201638 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 1 08:43:47.201656 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 1 08:43:47.201673 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 1 08:43:47.201690 systemd[1]: Reached target slices.target - Slice Units. 
Jul 1 08:43:47.201707 systemd[1]: Reached target swap.target - Swaps. Jul 1 08:43:47.201724 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 1 08:43:47.201745 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 1 08:43:47.201781 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 1 08:43:47.201798 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 1 08:43:47.201814 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 1 08:43:47.201839 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 1 08:43:47.201856 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 1 08:43:47.201878 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 1 08:43:47.201894 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 1 08:43:47.201909 systemd[1]: Mounting media.mount - External Media Directory... Jul 1 08:43:47.201929 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 1 08:43:47.201946 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 1 08:43:47.201962 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 1 08:43:47.201978 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 1 08:43:47.201995 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 1 08:43:47.202011 systemd[1]: Reached target machines.target - Containers. Jul 1 08:43:47.202028 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jul 1 08:43:47.202045 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 1 08:43:47.202082 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 1 08:43:47.202099 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 1 08:43:47.202116 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 1 08:43:47.202133 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 1 08:43:47.202150 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 1 08:43:47.202176 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 1 08:43:47.202193 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 1 08:43:47.202210 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 1 08:43:47.202227 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 1 08:43:47.202261 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 1 08:43:47.202288 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 1 08:43:47.202314 systemd[1]: Stopped systemd-fsck-usr.service. Jul 1 08:43:47.202349 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 1 08:43:47.202381 kernel: loop: module loaded Jul 1 08:43:47.202408 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 1 08:43:47.202445 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jul 1 08:43:47.202470 kernel: fuse: init (API version 7.41) Jul 1 08:43:47.202505 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 1 08:43:47.202548 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 1 08:43:47.202578 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 1 08:43:47.202597 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 1 08:43:47.202626 systemd[1]: verity-setup.service: Deactivated successfully. Jul 1 08:43:47.202646 systemd[1]: Stopped verity-setup.service. Jul 1 08:43:47.202662 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 1 08:43:47.202678 kernel: ACPI: bus type drm_connector registered Jul 1 08:43:47.202701 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 1 08:43:47.202717 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 1 08:43:47.202732 systemd[1]: Mounted media.mount - External Media Directory. Jul 1 08:43:47.202754 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 1 08:43:47.202787 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 1 08:43:47.202804 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 1 08:43:47.202819 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 1 08:43:47.202869 systemd-journald[1213]: Collecting audit messages is disabled. Jul 1 08:43:47.202898 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 1 08:43:47.202915 systemd-journald[1213]: Journal started Jul 1 08:43:47.202952 systemd-journald[1213]: Runtime Journal (/run/log/journal/4470415eff2046b0a5171c9e442c2ec9) is 6M, max 48.5M, 42.4M free. 
Jul 1 08:43:47.203004 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 1 08:43:46.934554 systemd[1]: Queued start job for default target multi-user.target. Jul 1 08:43:46.954878 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 1 08:43:46.955387 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 1 08:43:47.205879 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 1 08:43:47.207830 systemd[1]: Started systemd-journald.service - Journal Service. Jul 1 08:43:47.209240 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 1 08:43:47.209458 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 1 08:43:47.211074 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 1 08:43:47.211279 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 1 08:43:47.212873 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 1 08:43:47.213082 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 1 08:43:47.214870 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 1 08:43:47.215093 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 1 08:43:47.216500 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 1 08:43:47.216714 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 1 08:43:47.218223 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 1 08:43:47.219641 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 1 08:43:47.221416 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 1 08:43:47.223155 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 1 08:43:47.238068 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jul 1 08:43:47.240948 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 1 08:43:47.243265 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 1 08:43:47.244548 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 1 08:43:47.244660 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 1 08:43:47.246755 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 1 08:43:47.252885 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 1 08:43:47.254386 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 1 08:43:47.255903 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 1 08:43:47.258534 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 1 08:43:47.260100 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 1 08:43:47.262868 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 1 08:43:47.264059 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 1 08:43:47.265946 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 1 08:43:47.268396 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 1 08:43:47.272071 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 1 08:43:47.274788 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 1 08:43:47.277024 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Jul 1 08:43:47.285318 systemd-journald[1213]: Time spent on flushing to /var/log/journal/4470415eff2046b0a5171c9e442c2ec9 is 14.990ms for 1066 entries. Jul 1 08:43:47.285318 systemd-journald[1213]: System Journal (/var/log/journal/4470415eff2046b0a5171c9e442c2ec9) is 8M, max 195.6M, 187.6M free. Jul 1 08:43:47.321815 systemd-journald[1213]: Received client request to flush runtime journal. Jul 1 08:43:47.321912 kernel: loop0: detected capacity change from 0 to 114000 Jul 1 08:43:47.290112 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 1 08:43:47.293187 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 1 08:43:47.299478 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 1 08:43:47.303674 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 1 08:43:47.312966 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 1 08:43:47.377654 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 1 08:43:47.382323 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 1 08:43:47.388083 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 1 08:43:47.389853 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 1 08:43:47.404709 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 1 08:43:47.410821 kernel: loop1: detected capacity change from 0 to 229808 Jul 1 08:43:47.427157 systemd-tmpfiles[1273]: ACLs are not supported, ignoring. Jul 1 08:43:47.427183 systemd-tmpfiles[1273]: ACLs are not supported, ignoring. Jul 1 08:43:47.433819 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 1 08:43:47.440824 kernel: loop2: detected capacity change from 0 to 146336 Jul 1 08:43:47.502799 kernel: loop3: detected capacity change from 0 to 114000 Jul 1 08:43:47.516802 kernel: loop4: detected capacity change from 0 to 229808 Jul 1 08:43:47.527930 kernel: loop5: detected capacity change from 0 to 146336 Jul 1 08:43:47.540885 (sd-merge)[1279]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 1 08:43:47.541885 (sd-merge)[1279]: Merged extensions into '/usr'. Jul 1 08:43:47.548076 systemd[1]: Reload requested from client PID 1257 ('systemd-sysext') (unit systemd-sysext.service)... Jul 1 08:43:47.548095 systemd[1]: Reloading... Jul 1 08:43:47.615806 zram_generator::config[1305]: No configuration found. Jul 1 08:43:47.723841 ldconfig[1252]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 1 08:43:47.738403 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 1 08:43:47.820038 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 1 08:43:47.820612 systemd[1]: Reloading finished in 271 ms. Jul 1 08:43:47.849541 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 1 08:43:47.874371 systemd[1]: Starting ensure-sysext.service... Jul 1 08:43:47.876744 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 1 08:43:47.899527 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 1 08:43:47.904183 systemd[1]: Reload requested from client PID 1341 ('systemctl') (unit ensure-sysext.service)... Jul 1 08:43:47.904201 systemd[1]: Reloading... Jul 1 08:43:47.908535 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Jul 1 08:43:47.908572 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 1 08:43:47.908871 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 1 08:43:47.909124 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 1 08:43:47.910113 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 1 08:43:47.910425 systemd-tmpfiles[1342]: ACLs are not supported, ignoring. Jul 1 08:43:47.910519 systemd-tmpfiles[1342]: ACLs are not supported, ignoring. Jul 1 08:43:47.914946 systemd-tmpfiles[1342]: Detected autofs mount point /boot during canonicalization of boot. Jul 1 08:43:47.914957 systemd-tmpfiles[1342]: Skipping /boot Jul 1 08:43:47.925384 systemd-tmpfiles[1342]: Detected autofs mount point /boot during canonicalization of boot. Jul 1 08:43:47.925398 systemd-tmpfiles[1342]: Skipping /boot Jul 1 08:43:47.969828 zram_generator::config[1370]: No configuration found. Jul 1 08:43:48.071451 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 1 08:43:48.154685 systemd[1]: Reloading finished in 250 ms. Jul 1 08:43:48.179834 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 1 08:43:48.200412 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 1 08:43:48.210013 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 1 08:43:48.212977 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 1 08:43:48.224442 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Jul 1 08:43:48.228805 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 1 08:43:48.232371 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 1 08:43:48.236150 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 1 08:43:48.240389 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 1 08:43:48.240576 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 1 08:43:48.249636 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 1 08:43:48.256105 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 1 08:43:48.259322 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 1 08:43:48.260712 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 1 08:43:48.261008 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 1 08:43:48.270093 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 1 08:43:48.271226 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 1 08:43:48.273191 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 1 08:43:48.275415 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 1 08:43:48.275705 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 1 08:43:48.277444 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jul 1 08:43:48.277745 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 1 08:43:48.280170 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 1 08:43:48.280447 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 1 08:43:48.288139 augenrules[1439]: No rules Jul 1 08:43:48.290283 systemd[1]: audit-rules.service: Deactivated successfully. Jul 1 08:43:48.290663 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 1 08:43:48.296302 systemd-udevd[1414]: Using default interface naming scheme 'v255'. Jul 1 08:43:48.298293 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 1 08:43:48.306813 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 1 08:43:48.313187 systemd[1]: Finished ensure-sysext.service. Jul 1 08:43:48.315529 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 1 08:43:48.317395 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 1 08:43:48.319205 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 1 08:43:48.322007 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 1 08:43:48.324190 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 1 08:43:48.337272 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 1 08:43:48.341922 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 1 08:43:48.343128 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 1 08:43:48.343176 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 1 08:43:48.345251 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 1 08:43:48.347599 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 1 08:43:48.348809 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 1 08:43:48.348842 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 1 08:43:48.349280 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 1 08:43:48.351055 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 1 08:43:48.352454 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 1 08:43:48.352695 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 1 08:43:48.354259 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 1 08:43:48.354497 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 1 08:43:48.370123 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 1 08:43:48.372125 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 1 08:43:48.372410 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 1 08:43:48.374242 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 1 08:43:48.374527 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 1 08:43:48.385977 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Jul 1 08:43:48.387376 augenrules[1451]: /sbin/augenrules: No change Jul 1 08:43:48.399845 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 1 08:43:48.399938 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 1 08:43:48.406108 augenrules[1512]: No rules Jul 1 08:43:48.409999 systemd[1]: audit-rules.service: Deactivated successfully. Jul 1 08:43:48.410432 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 1 08:43:48.441712 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 1 08:43:48.505102 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 1 08:43:48.508558 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 1 08:43:48.527829 kernel: mousedev: PS/2 mouse device common for all mice Jul 1 08:43:48.541480 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 1 08:43:48.564884 systemd-networkd[1482]: lo: Link UP Jul 1 08:43:48.565340 systemd-networkd[1482]: lo: Gained carrier Jul 1 08:43:48.568800 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 1 08:43:48.574646 systemd-networkd[1482]: Enumeration completed Jul 1 08:43:48.575084 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 1 08:43:48.575748 systemd-networkd[1482]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 1 08:43:48.575893 systemd-networkd[1482]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 1 08:43:48.576723 systemd-networkd[1482]: eth0: Link UP Jul 1 08:43:48.577000 systemd-networkd[1482]: eth0: Gained carrier Jul 1 08:43:48.577110 systemd-networkd[1482]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 1 08:43:48.578600 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 1 08:43:48.585923 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 1 08:43:48.589856 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 1 08:43:48.591287 kernel: ACPI: button: Power Button [PWRF] Jul 1 08:43:48.591384 systemd[1]: Reached target time-set.target - System Time Set. Jul 1 08:43:48.591919 systemd-networkd[1482]: eth0: DHCPv4 address 10.0.0.127/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 1 08:43:48.593585 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection. Jul 1 08:43:50.718995 systemd-timesyncd[1465]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 1 08:43:50.719151 systemd-timesyncd[1465]: Initial clock synchronization to Tue 2025-07-01 08:43:50.718836 UTC. Jul 1 08:43:50.726060 systemd-resolved[1412]: Positive Trust Anchors: Jul 1 08:43:50.726081 systemd-resolved[1412]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 1 08:43:50.726123 systemd-resolved[1412]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 1 08:43:50.731380 systemd-resolved[1412]: Defaulting to hostname 'linux'. Jul 1 08:43:50.733522 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 1 08:43:50.736844 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jul 1 08:43:50.737201 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 1 08:43:50.737418 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 1 08:43:50.737326 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 1 08:43:50.739394 systemd[1]: Reached target network.target - Network. Jul 1 08:43:50.740515 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 1 08:43:50.741787 systemd[1]: Reached target sysinit.target - System Initialization. Jul 1 08:43:50.743495 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 1 08:43:50.744752 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 1 08:43:50.746036 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 1 08:43:50.747487 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Jul 1 08:43:50.748678 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 1 08:43:50.749964 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 1 08:43:50.753312 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 1 08:43:50.753348 systemd[1]: Reached target paths.target - Path Units. Jul 1 08:43:50.754288 systemd[1]: Reached target timers.target - Timer Units. Jul 1 08:43:50.756183 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 1 08:43:50.759070 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 1 08:43:50.763439 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 1 08:43:50.764877 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 1 08:43:50.766147 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 1 08:43:50.777504 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 1 08:43:50.779066 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 1 08:43:50.781014 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 1 08:43:50.782952 systemd[1]: Reached target sockets.target - Socket Units. Jul 1 08:43:50.785286 systemd[1]: Reached target basic.target - Basic System. Jul 1 08:43:50.786284 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 1 08:43:50.786310 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 1 08:43:50.789293 systemd[1]: Starting containerd.service - containerd container runtime... Jul 1 08:43:50.794440 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Jul 1 08:43:50.799353 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 1 08:43:50.805120 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 1 08:43:50.809700 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 1 08:43:50.810753 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 1 08:43:50.812378 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 1 08:43:50.821406 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 1 08:43:50.824547 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 1 08:43:50.827219 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 1 08:43:50.828980 jq[1553]: false Jul 1 08:43:50.832088 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 1 08:43:50.841092 extend-filesystems[1554]: Found /dev/vda6 Jul 1 08:43:50.845727 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Refreshing passwd entry cache Jul 1 08:43:50.846058 oslogin_cache_refresh[1555]: Refreshing passwd entry cache Jul 1 08:43:50.847420 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 1 08:43:50.849368 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 1 08:43:50.849974 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 1 08:43:50.851771 systemd[1]: Starting update-engine.service - Update Engine... 
Jul 1 08:43:50.897074 extend-filesystems[1554]: Found /dev/vda9 Jul 1 08:43:50.897074 extend-filesystems[1554]: Checking size of /dev/vda9 Jul 1 08:43:51.038221 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 1 08:43:51.038256 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 1 08:43:50.898222 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 1 08:43:50.897484 oslogin_cache_refresh[1555]: Failure getting users, quitting Jul 1 08:43:51.062755 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Failure getting users, quitting Jul 1 08:43:51.062755 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 1 08:43:51.062755 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Refreshing group entry cache Jul 1 08:43:51.062857 extend-filesystems[1554]: Resized partition /dev/vda9 Jul 1 08:43:50.949608 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 1 08:43:50.897512 oslogin_cache_refresh[1555]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 1 08:43:51.064562 extend-filesystems[1585]: resize2fs 1.47.2 (1-Jan-2025) Jul 1 08:43:51.040557 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 1 08:43:50.902290 oslogin_cache_refresh[1555]: Refreshing group entry cache Jul 1 08:43:51.066479 update_engine[1567]: I20250701 08:43:50.923621 1567 main.cc:92] Flatcar Update Engine starting Jul 1 08:43:51.041031 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 1 08:43:51.066909 jq[1568]: true Jul 1 08:43:51.041824 systemd[1]: motdgen.service: Deactivated successfully. Jul 1 08:43:51.043373 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jul 1 08:43:51.047716 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 1 08:43:51.055949 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 1 08:43:51.068576 extend-filesystems[1585]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 1 08:43:51.068576 extend-filesystems[1585]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 1 08:43:51.068576 extend-filesystems[1585]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 1 08:43:51.072212 extend-filesystems[1554]: Resized filesystem in /dev/vda9 Jul 1 08:43:51.071993 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 1 08:43:51.073823 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 1 08:43:51.075568 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Failure getting groups, quitting Jul 1 08:43:51.075568 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 1 08:43:51.074102 oslogin_cache_refresh[1555]: Failure getting groups, quitting Jul 1 08:43:51.074121 oslogin_cache_refresh[1555]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 1 08:43:51.075763 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 1 08:43:51.077259 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 1 08:43:51.129953 (ntainerd)[1589]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 1 08:43:51.143791 sshd_keygen[1579]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 1 08:43:51.187503 tar[1586]: linux-amd64/LICENSE Jul 1 08:43:51.190831 tar[1586]: linux-amd64/helm Jul 1 08:43:51.191672 jq[1587]: true Jul 1 08:43:51.195960 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Jul 1 08:43:51.211009 dbus-daemon[1549]: [system] SELinux support is enabled Jul 1 08:43:51.217045 kernel: kvm_amd: TSC scaling supported Jul 1 08:43:51.217083 kernel: kvm_amd: Nested Virtualization enabled Jul 1 08:43:51.217096 kernel: kvm_amd: Nested Paging enabled Jul 1 08:43:51.217108 kernel: kvm_amd: LBR virtualization supported Jul 1 08:43:51.218137 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 1 08:43:51.218201 kernel: kvm_amd: Virtual GIF supported Jul 1 08:43:51.222110 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 1 08:43:51.234293 update_engine[1567]: I20250701 08:43:51.234194 1567 update_check_scheduler.cc:74] Next update check in 5m32s Jul 1 08:43:51.277643 systemd[1]: Started update-engine.service - Update Engine. Jul 1 08:43:51.280776 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 1 08:43:51.281936 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 1 08:43:51.281989 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 1 08:43:51.289343 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 1 08:43:51.290695 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 1 08:43:51.290730 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 1 08:43:51.295610 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jul 1 08:43:51.307677 bash[1626]: Updated "/home/core/.ssh/authorized_keys" Jul 1 08:43:51.309433 systemd-logind[1566]: Watching system buttons on /dev/input/event2 (Power Button) Jul 1 08:43:51.309690 systemd-logind[1566]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 1 08:43:51.313378 systemd-logind[1566]: New seat seat0. Jul 1 08:43:51.314687 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 1 08:43:51.320869 systemd[1]: Started systemd-logind.service - User Login Management. Jul 1 08:43:51.322464 systemd[1]: issuegen.service: Deactivated successfully. Jul 1 08:43:51.324296 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 1 08:43:51.330030 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 1 08:43:51.332352 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 1 08:43:51.333473 kernel: EDAC MC: Ver: 3.0.0 Jul 1 08:43:51.385102 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 1 08:43:51.392821 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 1 08:43:51.397629 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 1 08:43:51.399484 systemd[1]: Reached target getty.target - Login Prompts. Jul 1 08:43:51.440498 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 1 08:43:51.444486 locksmithd[1629]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 1 08:43:51.642823 containerd[1589]: time="2025-07-01T08:43:51Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 1 08:43:51.643786 containerd[1589]: time="2025-07-01T08:43:51.643746584Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 1 08:43:51.660531 containerd[1589]: time="2025-07-01T08:43:51.660399749Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="22.863µs" Jul 1 08:43:51.660670 containerd[1589]: time="2025-07-01T08:43:51.660577903Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 1 08:43:51.660670 containerd[1589]: time="2025-07-01T08:43:51.660605635Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 1 08:43:51.660948 containerd[1589]: time="2025-07-01T08:43:51.660903834Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 1 08:43:51.661125 containerd[1589]: time="2025-07-01T08:43:51.661109450Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 1 08:43:51.661301 containerd[1589]: time="2025-07-01T08:43:51.661279539Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 1 08:43:51.661468 containerd[1589]: time="2025-07-01T08:43:51.661435681Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 1 08:43:51.661583 containerd[1589]: time="2025-07-01T08:43:51.661559644Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 1 08:43:51.662246 containerd[1589]: time="2025-07-01T08:43:51.662198462Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 1 08:43:51.662246 containerd[1589]: time="2025-07-01T08:43:51.662227907Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 1 08:43:51.662246 containerd[1589]: time="2025-07-01T08:43:51.662246993Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 1 08:43:51.662356 containerd[1589]: time="2025-07-01T08:43:51.662256901Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 1 08:43:51.662527 containerd[1589]: time="2025-07-01T08:43:51.662481973Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 1 08:43:51.662850 containerd[1589]: time="2025-07-01T08:43:51.662809217Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 1 08:43:51.662902 containerd[1589]: time="2025-07-01T08:43:51.662862226Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 1 08:43:51.662902 containerd[1589]: time="2025-07-01T08:43:51.662873487Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 1 08:43:51.662983 containerd[1589]: time="2025-07-01T08:43:51.662918632Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 1 08:43:51.663545 containerd[1589]: time="2025-07-01T08:43:51.663459506Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 1 08:43:51.663755 containerd[1589]: time="2025-07-01T08:43:51.663718962Z" level=info msg="metadata content store policy set" policy=shared Jul 1 08:43:51.670915 containerd[1589]: time="2025-07-01T08:43:51.670867526Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 1 08:43:51.670986 containerd[1589]: time="2025-07-01T08:43:51.670937227Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 1 08:43:51.670986 containerd[1589]: time="2025-07-01T08:43:51.670959709Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 1 08:43:51.670986 containerd[1589]: time="2025-07-01T08:43:51.670976100Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 1 08:43:51.671051 containerd[1589]: time="2025-07-01T08:43:51.670993743Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 1 08:43:51.671051 containerd[1589]: time="2025-07-01T08:43:51.671006547Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 1 08:43:51.671051 containerd[1589]: time="2025-07-01T08:43:51.671020222Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 1 08:43:51.671051 containerd[1589]: time="2025-07-01T08:43:51.671036052Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 1 08:43:51.671051 containerd[1589]: time="2025-07-01T08:43:51.671051210Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service 
type=io.containerd.service.v1 Jul 1 08:43:51.671188 containerd[1589]: time="2025-07-01T08:43:51.671065858Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 1 08:43:51.671188 containerd[1589]: time="2025-07-01T08:43:51.671078472Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 1 08:43:51.671188 containerd[1589]: time="2025-07-01T08:43:51.671094512Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 1 08:43:51.671383 containerd[1589]: time="2025-07-01T08:43:51.671348458Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 1 08:43:51.671418 containerd[1589]: time="2025-07-01T08:43:51.671395686Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 1 08:43:51.671448 containerd[1589]: time="2025-07-01T08:43:51.671433678Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 1 08:43:51.671469 containerd[1589]: time="2025-07-01T08:43:51.671449658Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 1 08:43:51.671469 containerd[1589]: time="2025-07-01T08:43:51.671464565Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 1 08:43:51.671506 containerd[1589]: time="2025-07-01T08:43:51.671480485Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 1 08:43:51.671541 containerd[1589]: time="2025-07-01T08:43:51.671510492Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 1 08:43:51.671541 containerd[1589]: time="2025-07-01T08:43:51.671524949Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 1 08:43:51.671584 containerd[1589]: 
time="2025-07-01T08:43:51.671553843Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 1 08:43:51.671584 containerd[1589]: time="2025-07-01T08:43:51.671570083Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 1 08:43:51.671620 containerd[1589]: time="2025-07-01T08:43:51.671584250Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 1 08:43:51.671707 containerd[1589]: time="2025-07-01T08:43:51.671686241Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 1 08:43:51.671734 containerd[1589]: time="2025-07-01T08:43:51.671713152Z" level=info msg="Start snapshots syncer" Jul 1 08:43:51.671767 containerd[1589]: time="2025-07-01T08:43:51.671749390Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 1 08:43:51.672202 containerd[1589]: time="2025-07-01T08:43:51.672125765Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 1 08:43:51.672373 containerd[1589]: time="2025-07-01T08:43:51.672236683Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 1 08:43:51.672373 containerd[1589]: time="2025-07-01T08:43:51.672343684Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 1 08:43:51.672837 containerd[1589]: time="2025-07-01T08:43:51.672794088Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 1 08:43:51.672885 containerd[1589]: time="2025-07-01T08:43:51.672840966Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 1 08:43:51.672885 containerd[1589]: time="2025-07-01T08:43:51.672856816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 1 08:43:51.672885 containerd[1589]: time="2025-07-01T08:43:51.672869089Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 1 08:43:51.672944 containerd[1589]: time="2025-07-01T08:43:51.672889788Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 1 08:43:51.673066 containerd[1589]: time="2025-07-01T08:43:51.673035411Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 1 08:43:51.673066 containerd[1589]: time="2025-07-01T08:43:51.673060197Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 1 08:43:51.673114 containerd[1589]: time="2025-07-01T08:43:51.673102006Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 1 08:43:51.673135 containerd[1589]: time="2025-07-01T08:43:51.673118386Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 1 08:43:51.673155 containerd[1589]: time="2025-07-01T08:43:51.673131852Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 1 08:43:51.673253 containerd[1589]: time="2025-07-01T08:43:51.673223403Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 1 08:43:51.673279 containerd[1589]: time="2025-07-01T08:43:51.673253961Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 1 08:43:51.673279 containerd[1589]: time="2025-07-01T08:43:51.673267696Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 1 08:43:51.673351 containerd[1589]: time="2025-07-01T08:43:51.673283035Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 1 08:43:51.673351 containerd[1589]: time="2025-07-01T08:43:51.673293795Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 1 08:43:51.673351 containerd[1589]: time="2025-07-01T08:43:51.673306279Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 1 08:43:51.673351 containerd[1589]: time="2025-07-01T08:43:51.673319864Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 1 08:43:51.673351 containerd[1589]: time="2025-07-01T08:43:51.673348328Z" level=info msg="runtime interface created" Jul 1 08:43:51.673351 containerd[1589]: time="2025-07-01T08:43:51.673356262Z" level=info msg="created NRI interface" Jul 1 08:43:51.673472 containerd[1589]: time="2025-07-01T08:43:51.673367363Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 1 08:43:51.673472 containerd[1589]: time="2025-07-01T08:43:51.673380879Z" level=info msg="Connect containerd service" Jul 1 08:43:51.673472 containerd[1589]: time="2025-07-01T08:43:51.673413740Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 1 08:43:51.674591 containerd[1589]: 
time="2025-07-01T08:43:51.674548608Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 1 08:43:51.933932 containerd[1589]: time="2025-07-01T08:43:51.933761893Z" level=info msg="Start subscribing containerd event" Jul 1 08:43:51.934071 containerd[1589]: time="2025-07-01T08:43:51.933980984Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 1 08:43:51.934113 containerd[1589]: time="2025-07-01T08:43:51.934070582Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 1 08:43:51.934334 containerd[1589]: time="2025-07-01T08:43:51.934261319Z" level=info msg="Start recovering state" Jul 1 08:43:51.934564 containerd[1589]: time="2025-07-01T08:43:51.934538359Z" level=info msg="Start event monitor" Jul 1 08:43:51.934594 containerd[1589]: time="2025-07-01T08:43:51.934566532Z" level=info msg="Start cni network conf syncer for default" Jul 1 08:43:51.934594 containerd[1589]: time="2025-07-01T08:43:51.934587180Z" level=info msg="Start streaming server" Jul 1 08:43:51.934650 containerd[1589]: time="2025-07-01T08:43:51.934626574Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 1 08:43:51.934650 containerd[1589]: time="2025-07-01T08:43:51.934644518Z" level=info msg="runtime interface starting up..." Jul 1 08:43:51.934707 containerd[1589]: time="2025-07-01T08:43:51.934653244Z" level=info msg="starting plugins..." Jul 1 08:43:51.934707 containerd[1589]: time="2025-07-01T08:43:51.934679173Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 1 08:43:51.936658 containerd[1589]: time="2025-07-01T08:43:51.936598732Z" level=info msg="containerd successfully booted in 0.294630s" Jul 1 08:43:51.936832 systemd[1]: Started containerd.service - containerd container runtime. 
Jul 1 08:43:51.951286 tar[1586]: linux-amd64/README.md Jul 1 08:43:51.978646 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 1 08:43:52.497427 systemd-networkd[1482]: eth0: Gained IPv6LL Jul 1 08:43:52.500764 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 1 08:43:52.502942 systemd[1]: Reached target network-online.target - Network is Online. Jul 1 08:43:52.506197 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 1 08:43:52.509099 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 08:43:52.531949 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 1 08:43:52.552855 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 1 08:43:52.553217 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 1 08:43:52.555024 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 1 08:43:52.558723 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 1 08:43:53.599725 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:43:53.601688 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 1 08:43:53.603257 systemd[1]: Startup finished in 3.531s (kernel) + 6.729s (initrd) + 5.164s (userspace) = 15.425s. 
Jul 1 08:43:53.606085 (kubelet)[1697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 1 08:43:54.057560 kubelet[1697]: E0701 08:43:54.057472 1697 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 1 08:43:54.062252 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 1 08:43:54.062463 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 1 08:43:54.062912 systemd[1]: kubelet.service: Consumed 1.316s CPU time, 267.6M memory peak. Jul 1 08:43:54.911011 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 1 08:43:54.912630 systemd[1]: Started sshd@0-10.0.0.127:22-10.0.0.1:60810.service - OpenSSH per-connection server daemon (10.0.0.1:60810). Jul 1 08:43:54.986427 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 60810 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:43:54.988547 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:43:55.003523 systemd-logind[1566]: New session 1 of user core. Jul 1 08:43:55.005310 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 1 08:43:55.006909 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 1 08:43:55.040251 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 1 08:43:55.043448 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jul 1 08:43:55.065420 (systemd)[1715]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 1 08:43:55.068599 systemd-logind[1566]: New session c1 of user core. Jul 1 08:43:55.267339 systemd[1715]: Queued start job for default target default.target. Jul 1 08:43:55.288084 systemd[1715]: Created slice app.slice - User Application Slice. Jul 1 08:43:55.288123 systemd[1715]: Reached target paths.target - Paths. Jul 1 08:43:55.288197 systemd[1715]: Reached target timers.target - Timers. Jul 1 08:43:55.290147 systemd[1715]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 1 08:43:55.304715 systemd[1715]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 1 08:43:55.304929 systemd[1715]: Reached target sockets.target - Sockets. Jul 1 08:43:55.305005 systemd[1715]: Reached target basic.target - Basic System. Jul 1 08:43:55.305063 systemd[1715]: Reached target default.target - Main User Target. Jul 1 08:43:55.305109 systemd[1715]: Startup finished in 227ms. Jul 1 08:43:55.305200 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 1 08:43:55.307123 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 1 08:43:55.374040 systemd[1]: Started sshd@1-10.0.0.127:22-10.0.0.1:60814.service - OpenSSH per-connection server daemon (10.0.0.1:60814). Jul 1 08:43:55.441198 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 60814 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:43:55.443336 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:43:55.448437 systemd-logind[1566]: New session 2 of user core. Jul 1 08:43:55.467520 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 1 08:43:55.526515 sshd[1729]: Connection closed by 10.0.0.1 port 60814 Jul 1 08:43:55.527121 sshd-session[1726]: pam_unix(sshd:session): session closed for user core Jul 1 08:43:55.548353 systemd[1]: sshd@1-10.0.0.127:22-10.0.0.1:60814.service: Deactivated successfully. Jul 1 08:43:55.550542 systemd[1]: session-2.scope: Deactivated successfully. Jul 1 08:43:55.551533 systemd-logind[1566]: Session 2 logged out. Waiting for processes to exit. Jul 1 08:43:55.554819 systemd[1]: Started sshd@2-10.0.0.127:22-10.0.0.1:60816.service - OpenSSH per-connection server daemon (10.0.0.1:60816). Jul 1 08:43:55.555613 systemd-logind[1566]: Removed session 2. Jul 1 08:43:55.618478 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 60816 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:43:55.620408 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:43:55.626723 systemd-logind[1566]: New session 3 of user core. Jul 1 08:43:55.644518 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 1 08:43:55.697623 sshd[1738]: Connection closed by 10.0.0.1 port 60816 Jul 1 08:43:55.697943 sshd-session[1735]: pam_unix(sshd:session): session closed for user core Jul 1 08:43:55.709765 systemd[1]: sshd@2-10.0.0.127:22-10.0.0.1:60816.service: Deactivated successfully. Jul 1 08:43:55.712015 systemd[1]: session-3.scope: Deactivated successfully. Jul 1 08:43:55.712949 systemd-logind[1566]: Session 3 logged out. Waiting for processes to exit. Jul 1 08:43:55.716058 systemd[1]: Started sshd@3-10.0.0.127:22-10.0.0.1:60818.service - OpenSSH per-connection server daemon (10.0.0.1:60818). Jul 1 08:43:55.717132 systemd-logind[1566]: Removed session 3. 
Jul 1 08:43:55.791497 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 60818 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:43:55.793107 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:43:55.798871 systemd-logind[1566]: New session 4 of user core. Jul 1 08:43:55.808496 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 1 08:43:55.866274 sshd[1747]: Connection closed by 10.0.0.1 port 60818 Jul 1 08:43:55.866748 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Jul 1 08:43:55.879208 systemd[1]: sshd@3-10.0.0.127:22-10.0.0.1:60818.service: Deactivated successfully. Jul 1 08:43:55.881022 systemd[1]: session-4.scope: Deactivated successfully. Jul 1 08:43:55.881865 systemd-logind[1566]: Session 4 logged out. Waiting for processes to exit. Jul 1 08:43:55.884485 systemd[1]: Started sshd@4-10.0.0.127:22-10.0.0.1:60828.service - OpenSSH per-connection server daemon (10.0.0.1:60828). Jul 1 08:43:55.885251 systemd-logind[1566]: Removed session 4. Jul 1 08:43:55.957829 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 60828 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:43:55.959624 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:43:55.964748 systemd-logind[1566]: New session 5 of user core. Jul 1 08:43:55.975363 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 1 08:43:56.036756 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 1 08:43:56.037162 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 1 08:43:56.060397 sudo[1757]: pam_unix(sudo:session): session closed for user root Jul 1 08:43:56.062398 sshd[1756]: Connection closed by 10.0.0.1 port 60828 Jul 1 08:43:56.062826 sshd-session[1753]: pam_unix(sshd:session): session closed for user core Jul 1 08:43:56.075881 systemd[1]: sshd@4-10.0.0.127:22-10.0.0.1:60828.service: Deactivated successfully. Jul 1 08:43:56.078064 systemd[1]: session-5.scope: Deactivated successfully. Jul 1 08:43:56.079058 systemd-logind[1566]: Session 5 logged out. Waiting for processes to exit. Jul 1 08:43:56.082370 systemd[1]: Started sshd@5-10.0.0.127:22-10.0.0.1:60844.service - OpenSSH per-connection server daemon (10.0.0.1:60844). Jul 1 08:43:56.083221 systemd-logind[1566]: Removed session 5. Jul 1 08:43:56.153341 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 60844 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:43:56.154832 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:43:56.159531 systemd-logind[1566]: New session 6 of user core. Jul 1 08:43:56.169337 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 1 08:43:56.223186 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 1 08:43:56.223579 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 1 08:43:56.232350 sudo[1768]: pam_unix(sudo:session): session closed for user root Jul 1 08:43:56.239323 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 1 08:43:56.239643 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 1 08:43:56.252360 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 1 08:43:56.301249 augenrules[1790]: No rules Jul 1 08:43:56.303362 systemd[1]: audit-rules.service: Deactivated successfully. Jul 1 08:43:56.303674 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 1 08:43:56.304959 sudo[1767]: pam_unix(sudo:session): session closed for user root Jul 1 08:43:56.306676 sshd[1766]: Connection closed by 10.0.0.1 port 60844 Jul 1 08:43:56.307036 sshd-session[1763]: pam_unix(sshd:session): session closed for user core Jul 1 08:43:56.324666 systemd[1]: sshd@5-10.0.0.127:22-10.0.0.1:60844.service: Deactivated successfully. Jul 1 08:43:56.326720 systemd[1]: session-6.scope: Deactivated successfully. Jul 1 08:43:56.327544 systemd-logind[1566]: Session 6 logged out. Waiting for processes to exit. Jul 1 08:43:56.330274 systemd[1]: Started sshd@6-10.0.0.127:22-10.0.0.1:60856.service - OpenSSH per-connection server daemon (10.0.0.1:60856). Jul 1 08:43:56.330821 systemd-logind[1566]: Removed session 6. Jul 1 08:43:56.406104 sshd[1799]: Accepted publickey for core from 10.0.0.1 port 60856 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:43:56.407963 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:43:56.412932 systemd-logind[1566]: New session 7 of user core. 
Jul 1 08:43:56.426497 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 1 08:43:56.481438 sudo[1803]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 1 08:43:56.481762 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 1 08:43:57.455812 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 1 08:43:57.477902 (dockerd)[1824]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 1 08:43:58.079986 dockerd[1824]: time="2025-07-01T08:43:58.079900201Z" level=info msg="Starting up" Jul 1 08:43:58.081553 dockerd[1824]: time="2025-07-01T08:43:58.081510490Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 1 08:44:00.160326 dockerd[1824]: time="2025-07-01T08:44:00.160090507Z" level=info msg="Loading containers: start." Jul 1 08:44:00.172226 kernel: Initializing XFRM netlink socket Jul 1 08:44:01.259720 systemd-networkd[1482]: docker0: Link UP Jul 1 08:44:01.724685 dockerd[1824]: time="2025-07-01T08:44:01.724603699Z" level=info msg="Loading containers: done." Jul 1 08:44:01.753407 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1396505129-merged.mount: Deactivated successfully. 
Jul 1 08:44:01.986793 dockerd[1824]: time="2025-07-01T08:44:01.986625690Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 1 08:44:01.986793 dockerd[1824]: time="2025-07-01T08:44:01.986741617Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 1 08:44:01.987398 dockerd[1824]: time="2025-07-01T08:44:01.987358794Z" level=info msg="Initializing buildkit" Jul 1 08:44:02.574631 dockerd[1824]: time="2025-07-01T08:44:02.574544287Z" level=info msg="Completed buildkit initialization" Jul 1 08:44:02.580319 dockerd[1824]: time="2025-07-01T08:44:02.580267929Z" level=info msg="Daemon has completed initialization" Jul 1 08:44:02.580450 dockerd[1824]: time="2025-07-01T08:44:02.580355052Z" level=info msg="API listen on /run/docker.sock" Jul 1 08:44:02.580609 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 1 08:44:03.152752 containerd[1589]: time="2025-07-01T08:44:03.152696673Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 1 08:44:04.114408 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 1 08:44:04.116268 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 08:44:04.378289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 1 08:44:04.383124 (kubelet)[2045]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 1 08:44:04.482452 kubelet[2045]: E0701 08:44:04.482363 2045 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 1 08:44:04.489315 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 1 08:44:04.489534 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 1 08:44:04.489942 systemd[1]: kubelet.service: Consumed 254ms CPU time, 110.9M memory peak. Jul 1 08:44:04.592960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2883956981.mount: Deactivated successfully. Jul 1 08:44:06.800653 containerd[1589]: time="2025-07-01T08:44:06.800593889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:44:06.885536 containerd[1589]: time="2025-07-01T08:44:06.885437982Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099" Jul 1 08:44:06.943932 containerd[1589]: time="2025-07-01T08:44:06.943844604Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:44:07.031986 containerd[1589]: time="2025-07-01T08:44:07.031928121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:44:07.032993 containerd[1589]: time="2025-07-01T08:44:07.032967469Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 3.88021353s" Jul 1 08:44:07.033065 containerd[1589]: time="2025-07-01T08:44:07.033001373Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jul 1 08:44:07.033601 containerd[1589]: time="2025-07-01T08:44:07.033570240Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 1 08:44:10.134495 containerd[1589]: time="2025-07-01T08:44:10.134406707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:44:10.135721 containerd[1589]: time="2025-07-01T08:44:10.135663504Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946" Jul 1 08:44:10.137308 containerd[1589]: time="2025-07-01T08:44:10.137253695Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:44:10.141024 containerd[1589]: time="2025-07-01T08:44:10.140981194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:44:10.142011 containerd[1589]: time="2025-07-01T08:44:10.141978454Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag 
\"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 3.108378378s" Jul 1 08:44:10.142074 containerd[1589]: time="2025-07-01T08:44:10.142015594Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\"" Jul 1 08:44:10.142573 containerd[1589]: time="2025-07-01T08:44:10.142529247Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 1 08:44:14.272895 containerd[1589]: time="2025-07-01T08:44:14.272812647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:44:14.288235 containerd[1589]: time="2025-07-01T08:44:14.288155245Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055" Jul 1 08:44:14.300239 containerd[1589]: time="2025-07-01T08:44:14.300180933Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:44:14.318267 containerd[1589]: time="2025-07-01T08:44:14.318185541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:44:14.319181 containerd[1589]: time="2025-07-01T08:44:14.319123801Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size 
\"21782634\" in 4.17656078s" Jul 1 08:44:14.319181 containerd[1589]: time="2025-07-01T08:44:14.319177111Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\"" Jul 1 08:44:14.319734 containerd[1589]: time="2025-07-01T08:44:14.319639768Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 1 08:44:14.614411 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 1 08:44:14.616043 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 08:44:14.818250 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:44:14.835507 (kubelet)[2124]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 1 08:44:14.891288 kubelet[2124]: E0701 08:44:14.891111 2124 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 1 08:44:14.896895 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 1 08:44:14.897144 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 1 08:44:14.897588 systemd[1]: kubelet.service: Consumed 233ms CPU time, 111.1M memory peak. Jul 1 08:44:17.437886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4168200021.mount: Deactivated successfully. 
Jul 1 08:44:18.239621 containerd[1589]: time="2025-07-01T08:44:18.239541282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:44:18.240489 containerd[1589]: time="2025-07-01T08:44:18.240428095Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746" Jul 1 08:44:18.241965 containerd[1589]: time="2025-07-01T08:44:18.241916185Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:44:18.244020 containerd[1589]: time="2025-07-01T08:44:18.243978562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:44:18.244721 containerd[1589]: time="2025-07-01T08:44:18.244667354Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 3.924999213s" Jul 1 08:44:18.244721 containerd[1589]: time="2025-07-01T08:44:18.244703511Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jul 1 08:44:18.245234 containerd[1589]: time="2025-07-01T08:44:18.245143286Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 1 08:44:18.840189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount893693126.mount: Deactivated successfully. 
Jul 1 08:44:20.248639 containerd[1589]: time="2025-07-01T08:44:20.248558669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:44:20.249530 containerd[1589]: time="2025-07-01T08:44:20.249465359Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jul 1 08:44:20.250769 containerd[1589]: time="2025-07-01T08:44:20.250743706Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:44:20.254422 containerd[1589]: time="2025-07-01T08:44:20.254355218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:44:20.255492 containerd[1589]: time="2025-07-01T08:44:20.255440533Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.010239619s" Jul 1 08:44:20.255492 containerd[1589]: time="2025-07-01T08:44:20.255485126Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jul 1 08:44:20.256072 containerd[1589]: time="2025-07-01T08:44:20.256017224Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 1 08:44:21.492597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3988494878.mount: Deactivated successfully. 
Jul 1 08:44:21.498891 containerd[1589]: time="2025-07-01T08:44:21.498844946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 1 08:44:21.499682 containerd[1589]: time="2025-07-01T08:44:21.499650206Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 1 08:44:21.500839 containerd[1589]: time="2025-07-01T08:44:21.500804510Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 1 08:44:21.503337 containerd[1589]: time="2025-07-01T08:44:21.503276856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 1 08:44:21.504064 containerd[1589]: time="2025-07-01T08:44:21.503942975Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.247865998s" Jul 1 08:44:21.504064 containerd[1589]: time="2025-07-01T08:44:21.503982439Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 1 08:44:21.504751 containerd[1589]: time="2025-07-01T08:44:21.504706707Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 1 08:44:22.069135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3759082751.mount: Deactivated 
successfully. Jul 1 08:44:24.981762 containerd[1589]: time="2025-07-01T08:44:24.981694862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:44:24.982627 containerd[1589]: time="2025-07-01T08:44:24.982581228Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Jul 1 08:44:24.984335 containerd[1589]: time="2025-07-01T08:44:24.984275236Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:44:24.987854 containerd[1589]: time="2025-07-01T08:44:24.987820316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:44:24.989712 containerd[1589]: time="2025-07-01T08:44:24.989662610Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.484913232s" Jul 1 08:44:24.989712 containerd[1589]: time="2025-07-01T08:44:24.989707566Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jul 1 08:44:25.114447 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 1 08:44:25.116472 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 08:44:25.395335 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 1 08:44:25.406442 (kubelet)[2282]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 1 08:44:25.487646 kubelet[2282]: E0701 08:44:25.487573 2282 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 1 08:44:25.492133 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 1 08:44:25.492334 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 1 08:44:25.492713 systemd[1]: kubelet.service: Consumed 284ms CPU time, 108.6M memory peak. Jul 1 08:44:27.971601 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:44:27.971771 systemd[1]: kubelet.service: Consumed 284ms CPU time, 108.6M memory peak. Jul 1 08:44:27.974192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 08:44:28.001019 systemd[1]: Reload requested from client PID 2301 ('systemctl') (unit session-7.scope)... Jul 1 08:44:28.001041 systemd[1]: Reloading... Jul 1 08:44:28.121199 zram_generator::config[2344]: No configuration found. Jul 1 08:44:28.457263 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 1 08:44:28.617556 systemd[1]: Reloading finished in 615 ms. Jul 1 08:44:28.693779 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 1 08:44:28.693906 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 1 08:44:28.694327 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 1 08:44:28.694385 systemd[1]: kubelet.service: Consumed 201ms CPU time, 98.2M memory peak. Jul 1 08:44:28.696793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 08:44:28.946101 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:44:28.961689 (kubelet)[2392]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 1 08:44:29.009981 kubelet[2392]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 1 08:44:29.009981 kubelet[2392]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 1 08:44:29.009981 kubelet[2392]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 1 08:44:29.010422 kubelet[2392]: I0701 08:44:29.010011 2392 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 1 08:44:29.280476 kubelet[2392]: I0701 08:44:29.280364 2392 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 1 08:44:29.280476 kubelet[2392]: I0701 08:44:29.280397 2392 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 1 08:44:29.280623 kubelet[2392]: I0701 08:44:29.280605 2392 server.go:956] "Client rotation is on, will bootstrap in background" Jul 1 08:44:29.314130 kubelet[2392]: E0701 08:44:29.314056 2392 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.127:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.127:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 1 08:44:29.314986 kubelet[2392]: I0701 08:44:29.314956 2392 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 1 08:44:29.325508 kubelet[2392]: I0701 08:44:29.325469 2392 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 1 08:44:29.331316 kubelet[2392]: I0701 08:44:29.331266 2392 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 1 08:44:29.331553 kubelet[2392]: I0701 08:44:29.331508 2392 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 1 08:44:29.331734 kubelet[2392]: I0701 08:44:29.331536 2392 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 1 08:44:29.331734 kubelet[2392]: I0701 08:44:29.331733 2392 topology_manager.go:138] "Creating topology manager with none policy" Jul 1 08:44:29.331905 
kubelet[2392]: I0701 08:44:29.331743 2392 container_manager_linux.go:303] "Creating device plugin manager" Jul 1 08:44:29.332781 kubelet[2392]: I0701 08:44:29.332745 2392 state_mem.go:36] "Initialized new in-memory state store" Jul 1 08:44:29.335523 kubelet[2392]: I0701 08:44:29.335487 2392 kubelet.go:480] "Attempting to sync node with API server" Jul 1 08:44:29.335523 kubelet[2392]: I0701 08:44:29.335509 2392 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 1 08:44:29.337177 kubelet[2392]: I0701 08:44:29.337138 2392 kubelet.go:386] "Adding apiserver pod source" Jul 1 08:44:29.338476 kubelet[2392]: I0701 08:44:29.338431 2392 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 1 08:44:29.341455 kubelet[2392]: E0701 08:44:29.341367 2392 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.127:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 1 08:44:29.341684 kubelet[2392]: E0701 08:44:29.341632 2392 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.127:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.127:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 1 08:44:29.344156 kubelet[2392]: I0701 08:44:29.344109 2392 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 1 08:44:29.344635 kubelet[2392]: I0701 08:44:29.344615 2392 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 1 08:44:29.345442 kubelet[2392]: W0701 08:44:29.345415 2392 
probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 1 08:44:29.348347 kubelet[2392]: I0701 08:44:29.348316 2392 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 1 08:44:29.348421 kubelet[2392]: I0701 08:44:29.348370 2392 server.go:1289] "Started kubelet" Jul 1 08:44:29.348585 kubelet[2392]: I0701 08:44:29.348532 2392 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 1 08:44:29.348814 kubelet[2392]: I0701 08:44:29.348773 2392 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 1 08:44:29.349451 kubelet[2392]: I0701 08:44:29.349248 2392 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 1 08:44:29.349710 kubelet[2392]: I0701 08:44:29.349683 2392 server.go:317] "Adding debug handlers to kubelet server" Jul 1 08:44:29.349946 kubelet[2392]: I0701 08:44:29.349911 2392 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 1 08:44:29.350281 kubelet[2392]: I0701 08:44:29.350239 2392 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 1 08:44:29.353398 kubelet[2392]: E0701 08:44:29.353352 2392 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 1 08:44:29.353398 kubelet[2392]: I0701 08:44:29.353399 2392 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 1 08:44:29.353678 kubelet[2392]: I0701 08:44:29.353613 2392 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 1 08:44:29.355483 kubelet[2392]: I0701 08:44:29.355456 2392 reconciler.go:26] "Reconciler: start to sync state" Jul 1 08:44:29.355878 kubelet[2392]: E0701 08:44:29.352912 2392 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.0.0.127:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.127:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184e142884863efa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-01 08:44:29.34833945 +0000 UTC m=+0.378738066,LastTimestamp:2025-07-01 08:44:29.34833945 +0000 UTC m=+0.378738066,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 1 08:44:29.356179 kubelet[2392]: E0701 08:44:29.356139 2392 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.127:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 1 08:44:29.356642 kubelet[2392]: I0701 08:44:29.356585 2392 factory.go:223] Registration of the systemd container factory successfully Jul 1 08:44:29.356642 kubelet[2392]: E0701 08:44:29.356624 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.127:6443: connect: connection refused" interval="200ms" Jul 1 08:44:29.356831 kubelet[2392]: E0701 08:44:29.356743 2392 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 1 08:44:29.356932 kubelet[2392]: I0701 08:44:29.356910 2392 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 1 08:44:29.359685 kubelet[2392]: I0701 08:44:29.359656 2392 factory.go:223] Registration of the containerd container factory successfully Jul 1 08:44:29.374817 kubelet[2392]: I0701 08:44:29.374777 2392 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 1 08:44:29.374817 kubelet[2392]: I0701 08:44:29.374794 2392 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 1 08:44:29.374817 kubelet[2392]: I0701 08:44:29.374810 2392 state_mem.go:36] "Initialized new in-memory state store" Jul 1 08:44:29.376346 kubelet[2392]: I0701 08:44:29.376300 2392 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 1 08:44:29.378155 kubelet[2392]: I0701 08:44:29.378131 2392 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 1 08:44:29.378155 kubelet[2392]: I0701 08:44:29.378153 2392 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 1 08:44:29.378263 kubelet[2392]: I0701 08:44:29.378184 2392 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 1 08:44:29.378263 kubelet[2392]: I0701 08:44:29.378192 2392 kubelet.go:2436] "Starting kubelet main sync loop" Jul 1 08:44:29.378263 kubelet[2392]: E0701 08:44:29.378239 2392 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 1 08:44:29.378987 kubelet[2392]: E0701 08:44:29.378902 2392 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.127:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 1 08:44:29.454397 kubelet[2392]: E0701 08:44:29.454350 2392 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 1 08:44:29.479293 kubelet[2392]: E0701 08:44:29.479237 2392 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 1 08:44:29.554691 kubelet[2392]: E0701 08:44:29.554528 2392 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 1 08:44:29.557303 kubelet[2392]: E0701 08:44:29.557250 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.127:6443: connect: connection refused" interval="400ms" Jul 1 08:44:29.655628 kubelet[2392]: E0701 08:44:29.655568 2392 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 1 08:44:29.680051 kubelet[2392]: E0701 08:44:29.679991 2392 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 1 08:44:29.756570 kubelet[2392]: E0701 08:44:29.756499 2392 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 1 08:44:29.857051 kubelet[2392]: E0701 08:44:29.856885 2392 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 1 08:44:29.957859 kubelet[2392]: E0701 08:44:29.957788 2392 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 1 08:44:29.958270 kubelet[2392]: E0701 08:44:29.958225 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.127:6443: connect: connection refused" interval="800ms" Jul 1 08:44:30.058974 kubelet[2392]: E0701 08:44:30.058900 2392 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 1 08:44:30.080423 kubelet[2392]: E0701 08:44:30.080346 2392 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 1 08:44:30.160095 kubelet[2392]: E0701 08:44:30.160006 2392 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 1 08:44:30.260935 kubelet[2392]: E0701 08:44:30.260873 2392 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 1 08:44:30.361845 kubelet[2392]: E0701 08:44:30.361780 2392 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 1 08:44:30.462625 kubelet[2392]: E0701 08:44:30.462503 2392 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 1 08:44:30.529384 kubelet[2392]: E0701 08:44:30.529302 2392 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.127:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 1 08:44:30.536359 kubelet[2392]: I0701 08:44:30.536315 2392 policy_none.go:49] "None policy: Start" Jul 1 08:44:30.536359 kubelet[2392]: I0701 08:44:30.536344 2392 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 1 08:44:30.536359 kubelet[2392]: I0701 08:44:30.536360 2392 state_mem.go:35] "Initializing new in-memory state store" Jul 1 08:44:30.545756 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 1 08:44:30.556643 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 1 08:44:30.560038 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 1 08:44:30.563260 kubelet[2392]: E0701 08:44:30.563213 2392 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 1 08:44:30.580025 kubelet[2392]: E0701 08:44:30.579950 2392 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 1 08:44:30.580464 kubelet[2392]: I0701 08:44:30.580268 2392 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 1 08:44:30.580464 kubelet[2392]: I0701 08:44:30.580289 2392 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 1 08:44:30.580558 kubelet[2392]: I0701 08:44:30.580529 2392 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 1 08:44:30.581673 kubelet[2392]: E0701 08:44:30.581649 2392 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 1 08:44:30.581763 kubelet[2392]: E0701 08:44:30.581689 2392 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 1 08:44:30.682265 kubelet[2392]: I0701 08:44:30.682212 2392 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 1 08:44:30.682667 kubelet[2392]: E0701 08:44:30.682586 2392 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.127:6443/api/v1/nodes\": dial tcp 10.0.0.127:6443: connect: connection refused" node="localhost" Jul 1 08:44:30.715416 kubelet[2392]: E0701 08:44:30.715273 2392 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.127:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.127:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 1 08:44:30.760005 kubelet[2392]: E0701 08:44:30.759792 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.127:6443: connect: connection refused" interval="1.6s" Jul 1 08:44:30.822026 kubelet[2392]: E0701 08:44:30.821949 2392 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.127:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 1 08:44:30.876397 kubelet[2392]: E0701 08:44:30.876287 2392 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 
10.0.0.127:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 1 08:44:30.884473 kubelet[2392]: I0701 08:44:30.884435 2392 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 1 08:44:30.884955 kubelet[2392]: E0701 08:44:30.884890 2392 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.127:6443/api/v1/nodes\": dial tcp 10.0.0.127:6443: connect: connection refused" node="localhost" Jul 1 08:44:30.895227 systemd[1]: Created slice kubepods-burstable-pod01ebc87f99935e0ed7e244bc9bbf38f7.slice - libcontainer container kubepods-burstable-pod01ebc87f99935e0ed7e244bc9bbf38f7.slice. Jul 1 08:44:30.915721 kubelet[2392]: E0701 08:44:30.915676 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 1 08:44:30.919062 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. Jul 1 08:44:30.932999 kubelet[2392]: E0701 08:44:30.932948 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 1 08:44:30.936051 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. 
Jul 1 08:44:30.937998 kubelet[2392]: E0701 08:44:30.937960 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 1 08:44:30.965762 kubelet[2392]: I0701 08:44:30.965588 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/01ebc87f99935e0ed7e244bc9bbf38f7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"01ebc87f99935e0ed7e244bc9bbf38f7\") " pod="kube-system/kube-apiserver-localhost" Jul 1 08:44:30.965762 kubelet[2392]: I0701 08:44:30.965630 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/01ebc87f99935e0ed7e244bc9bbf38f7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"01ebc87f99935e0ed7e244bc9bbf38f7\") " pod="kube-system/kube-apiserver-localhost" Jul 1 08:44:30.965762 kubelet[2392]: I0701 08:44:30.965648 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 1 08:44:30.965762 kubelet[2392]: I0701 08:44:30.965662 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 1 08:44:30.965762 kubelet[2392]: I0701 08:44:30.965677 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 1 08:44:30.966034 kubelet[2392]: I0701 08:44:30.965692 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/01ebc87f99935e0ed7e244bc9bbf38f7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"01ebc87f99935e0ed7e244bc9bbf38f7\") " pod="kube-system/kube-apiserver-localhost" Jul 1 08:44:30.966034 kubelet[2392]: I0701 08:44:30.965708 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 1 08:44:30.966470 kubelet[2392]: I0701 08:44:30.966398 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 1 08:44:30.966470 kubelet[2392]: I0701 08:44:30.966462 2392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 1 08:44:31.216961 kubelet[2392]: E0701 08:44:31.216799 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:31.217712 containerd[1589]: time="2025-07-01T08:44:31.217661271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:01ebc87f99935e0ed7e244bc9bbf38f7,Namespace:kube-system,Attempt:0,}" Jul 1 08:44:31.234069 kubelet[2392]: E0701 08:44:31.234008 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:31.234805 containerd[1589]: time="2025-07-01T08:44:31.234636326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 1 08:44:31.239014 kubelet[2392]: E0701 08:44:31.238979 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:31.239621 containerd[1589]: time="2025-07-01T08:44:31.239580228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 1 08:44:31.250243 containerd[1589]: time="2025-07-01T08:44:31.250178940Z" level=info msg="connecting to shim ed1e009f9c484048ec4f050557780bf9a9f155caba37d1b14f05c6fbb741e202" address="unix:///run/containerd/s/be38fd3b5254bd00be5b61b9f387f5336cb2f95bf3e8b9e1ba6633fce39fcd53" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:44:31.279455 containerd[1589]: time="2025-07-01T08:44:31.279400776Z" level=info msg="connecting to shim f6e776ae9002a67138e9c63d3f27b02e13740940895fedd7772fefb1dc0d179b" address="unix:///run/containerd/s/c5d8e06a392e3de62adff41d49feb5eefd325ca9476e31c5012c989458180074" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:44:31.287069 kubelet[2392]: I0701 08:44:31.287004 2392 kubelet_node_status.go:75] "Attempting to register node" 
node="localhost" Jul 1 08:44:31.287497 kubelet[2392]: E0701 08:44:31.287468 2392 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.127:6443/api/v1/nodes\": dial tcp 10.0.0.127:6443: connect: connection refused" node="localhost" Jul 1 08:44:31.293391 containerd[1589]: time="2025-07-01T08:44:31.293343972Z" level=info msg="connecting to shim d7a60fc7983213cf2ed0a1162b0ee5386b33b91384bf555f3a06eab1042a5e43" address="unix:///run/containerd/s/5db871e4431e2d8a86c2d7f180ecbc2c1f7ed35fe235f363abf9e772866ba8f7" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:44:31.351484 kubelet[2392]: E0701 08:44:31.351395 2392 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.127:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.127:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 1 08:44:31.360434 systemd[1]: Started cri-containerd-ed1e009f9c484048ec4f050557780bf9a9f155caba37d1b14f05c6fbb741e202.scope - libcontainer container ed1e009f9c484048ec4f050557780bf9a9f155caba37d1b14f05c6fbb741e202. Jul 1 08:44:31.377981 systemd[1]: Started cri-containerd-f6e776ae9002a67138e9c63d3f27b02e13740940895fedd7772fefb1dc0d179b.scope - libcontainer container f6e776ae9002a67138e9c63d3f27b02e13740940895fedd7772fefb1dc0d179b. Jul 1 08:44:31.406346 systemd[1]: Started cri-containerd-d7a60fc7983213cf2ed0a1162b0ee5386b33b91384bf555f3a06eab1042a5e43.scope - libcontainer container d7a60fc7983213cf2ed0a1162b0ee5386b33b91384bf555f3a06eab1042a5e43. 
Jul 1 08:44:31.622099 containerd[1589]: time="2025-07-01T08:44:31.622047461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6e776ae9002a67138e9c63d3f27b02e13740940895fedd7772fefb1dc0d179b\"" Jul 1 08:44:31.623354 kubelet[2392]: E0701 08:44:31.623325 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:32.047575 containerd[1589]: time="2025-07-01T08:44:32.047417577Z" level=info msg="CreateContainer within sandbox \"f6e776ae9002a67138e9c63d3f27b02e13740940895fedd7772fefb1dc0d179b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 1 08:44:32.047770 containerd[1589]: time="2025-07-01T08:44:32.047723590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:01ebc87f99935e0ed7e244bc9bbf38f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed1e009f9c484048ec4f050557780bf9a9f155caba37d1b14f05c6fbb741e202\"" Jul 1 08:44:32.048949 kubelet[2392]: E0701 08:44:32.048912 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:32.089195 kubelet[2392]: I0701 08:44:32.089145 2392 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 1 08:44:32.089543 kubelet[2392]: E0701 08:44:32.089518 2392 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.127:6443/api/v1/nodes\": dial tcp 10.0.0.127:6443: connect: connection refused" node="localhost" Jul 1 08:44:32.238911 containerd[1589]: time="2025-07-01T08:44:32.238765165Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7a60fc7983213cf2ed0a1162b0ee5386b33b91384bf555f3a06eab1042a5e43\"" Jul 1 08:44:32.239970 kubelet[2392]: E0701 08:44:32.239931 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:32.309154 containerd[1589]: time="2025-07-01T08:44:32.308715974Z" level=info msg="CreateContainer within sandbox \"ed1e009f9c484048ec4f050557780bf9a9f155caba37d1b14f05c6fbb741e202\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 1 08:44:32.320066 containerd[1589]: time="2025-07-01T08:44:32.319876085Z" level=info msg="CreateContainer within sandbox \"d7a60fc7983213cf2ed0a1162b0ee5386b33b91384bf555f3a06eab1042a5e43\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 1 08:44:32.338673 containerd[1589]: time="2025-07-01T08:44:32.338610236Z" level=info msg="Container 16388060aa89484f88b0a7f8a45969cf87b4cd602acf1f8dba63a5389a32b422: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:44:32.341808 containerd[1589]: time="2025-07-01T08:44:32.341764232Z" level=info msg="Container b40b1e8b1e6270cf51f8ef2f0c7781963d91fa2645c9527e5c934d29c08396aa: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:44:32.345404 containerd[1589]: time="2025-07-01T08:44:32.345353967Z" level=info msg="Container be1ddbff933acbf3f673c2a8e59d1c37f85ea1ffc2155ce5e9721452be6e5d3e: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:44:32.354103 containerd[1589]: time="2025-07-01T08:44:32.354043563Z" level=info msg="CreateContainer within sandbox \"f6e776ae9002a67138e9c63d3f27b02e13740940895fedd7772fefb1dc0d179b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b40b1e8b1e6270cf51f8ef2f0c7781963d91fa2645c9527e5c934d29c08396aa\"" Jul 1 08:44:32.355043 containerd[1589]: 
time="2025-07-01T08:44:32.354988111Z" level=info msg="StartContainer for \"b40b1e8b1e6270cf51f8ef2f0c7781963d91fa2645c9527e5c934d29c08396aa\"" Jul 1 08:44:32.355955 containerd[1589]: time="2025-07-01T08:44:32.355910207Z" level=info msg="CreateContainer within sandbox \"ed1e009f9c484048ec4f050557780bf9a9f155caba37d1b14f05c6fbb741e202\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"16388060aa89484f88b0a7f8a45969cf87b4cd602acf1f8dba63a5389a32b422\"" Jul 1 08:44:32.356453 containerd[1589]: time="2025-07-01T08:44:32.356374851Z" level=info msg="StartContainer for \"16388060aa89484f88b0a7f8a45969cf87b4cd602acf1f8dba63a5389a32b422\"" Jul 1 08:44:32.356660 containerd[1589]: time="2025-07-01T08:44:32.356635057Z" level=info msg="connecting to shim b40b1e8b1e6270cf51f8ef2f0c7781963d91fa2645c9527e5c934d29c08396aa" address="unix:///run/containerd/s/c5d8e06a392e3de62adff41d49feb5eefd325ca9476e31c5012c989458180074" protocol=ttrpc version=3 Jul 1 08:44:32.357498 containerd[1589]: time="2025-07-01T08:44:32.357463985Z" level=info msg="connecting to shim 16388060aa89484f88b0a7f8a45969cf87b4cd602acf1f8dba63a5389a32b422" address="unix:///run/containerd/s/be38fd3b5254bd00be5b61b9f387f5336cb2f95bf3e8b9e1ba6633fce39fcd53" protocol=ttrpc version=3 Jul 1 08:44:32.359529 containerd[1589]: time="2025-07-01T08:44:32.359498879Z" level=info msg="CreateContainer within sandbox \"d7a60fc7983213cf2ed0a1162b0ee5386b33b91384bf555f3a06eab1042a5e43\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"be1ddbff933acbf3f673c2a8e59d1c37f85ea1ffc2155ce5e9721452be6e5d3e\"" Jul 1 08:44:32.361299 containerd[1589]: time="2025-07-01T08:44:32.359950310Z" level=info msg="StartContainer for \"be1ddbff933acbf3f673c2a8e59d1c37f85ea1ffc2155ce5e9721452be6e5d3e\"" Jul 1 08:44:32.361424 kubelet[2392]: E0701 08:44:32.361388 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.127:6443: connect: connection refused" interval="3.2s" Jul 1 08:44:32.361677 containerd[1589]: time="2025-07-01T08:44:32.361650907Z" level=info msg="connecting to shim be1ddbff933acbf3f673c2a8e59d1c37f85ea1ffc2155ce5e9721452be6e5d3e" address="unix:///run/containerd/s/5db871e4431e2d8a86c2d7f180ecbc2c1f7ed35fe235f363abf9e772866ba8f7" protocol=ttrpc version=3 Jul 1 08:44:32.380386 systemd[1]: Started cri-containerd-16388060aa89484f88b0a7f8a45969cf87b4cd602acf1f8dba63a5389a32b422.scope - libcontainer container 16388060aa89484f88b0a7f8a45969cf87b4cd602acf1f8dba63a5389a32b422. Jul 1 08:44:32.392541 systemd[1]: Started cri-containerd-b40b1e8b1e6270cf51f8ef2f0c7781963d91fa2645c9527e5c934d29c08396aa.scope - libcontainer container b40b1e8b1e6270cf51f8ef2f0c7781963d91fa2645c9527e5c934d29c08396aa. Jul 1 08:44:32.394738 systemd[1]: Started cri-containerd-be1ddbff933acbf3f673c2a8e59d1c37f85ea1ffc2155ce5e9721452be6e5d3e.scope - libcontainer container be1ddbff933acbf3f673c2a8e59d1c37f85ea1ffc2155ce5e9721452be6e5d3e. 
Jul 1 08:44:32.464187 containerd[1589]: time="2025-07-01T08:44:32.463301376Z" level=info msg="StartContainer for \"16388060aa89484f88b0a7f8a45969cf87b4cd602acf1f8dba63a5389a32b422\" returns successfully" Jul 1 08:44:32.479272 containerd[1589]: time="2025-07-01T08:44:32.479207553Z" level=info msg="StartContainer for \"b40b1e8b1e6270cf51f8ef2f0c7781963d91fa2645c9527e5c934d29c08396aa\" returns successfully" Jul 1 08:44:32.486860 containerd[1589]: time="2025-07-01T08:44:32.486810759Z" level=info msg="StartContainer for \"be1ddbff933acbf3f673c2a8e59d1c37f85ea1ffc2155ce5e9721452be6e5d3e\" returns successfully" Jul 1 08:44:33.401624 kubelet[2392]: E0701 08:44:33.401517 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 1 08:44:33.402282 kubelet[2392]: E0701 08:44:33.401675 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:33.406143 kubelet[2392]: E0701 08:44:33.406112 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 1 08:44:33.406290 kubelet[2392]: E0701 08:44:33.406263 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:33.430710 kubelet[2392]: E0701 08:44:33.406435 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 1 08:44:33.430710 kubelet[2392]: E0701 08:44:33.406505 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:33.693701 kubelet[2392]: 
I0701 08:44:33.693565 2392 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 1 08:44:34.412401 kubelet[2392]: E0701 08:44:34.412363 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 1 08:44:34.413048 kubelet[2392]: E0701 08:44:34.412985 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:34.414038 kubelet[2392]: E0701 08:44:34.413699 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 1 08:44:34.414038 kubelet[2392]: E0701 08:44:34.413797 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:34.414627 kubelet[2392]: E0701 08:44:34.414612 2392 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 1 08:44:34.414790 kubelet[2392]: E0701 08:44:34.414777 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:34.711561 kubelet[2392]: E0701 08:44:34.711316 2392 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.184e142884863efa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-01 08:44:29.34833945 +0000 UTC 
m=+0.378738066,LastTimestamp:2025-07-01 08:44:29.34833945 +0000 UTC m=+0.378738066,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 1 08:44:34.770719 kubelet[2392]: E0701 08:44:34.770539 2392 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.184e142885062a42 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-01 08:44:29.356722754 +0000 UTC m=+0.387121370,LastTimestamp:2025-07-01 08:44:29.356722754 +0000 UTC m=+0.387121370,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 1 08:44:34.776572 kubelet[2392]: I0701 08:44:34.776523 2392 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 1 08:44:34.857058 kubelet[2392]: I0701 08:44:34.856986 2392 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 1 08:44:34.865732 kubelet[2392]: E0701 08:44:34.865661 2392 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 1 08:44:34.865732 kubelet[2392]: I0701 08:44:34.865717 2392 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 1 08:44:34.867830 kubelet[2392]: E0701 08:44:34.867799 2392 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 1 08:44:34.868089 kubelet[2392]: I0701 08:44:34.867905 2392 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 1 08:44:34.869926 kubelet[2392]: E0701 08:44:34.869883 2392 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 1 08:44:35.346464 kubelet[2392]: I0701 08:44:35.346382 2392 apiserver.go:52] "Watching apiserver" Jul 1 08:44:35.356319 kubelet[2392]: I0701 08:44:35.356237 2392 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 1 08:44:35.410083 kubelet[2392]: I0701 08:44:35.410027 2392 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 1 08:44:35.410267 kubelet[2392]: I0701 08:44:35.410188 2392 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 1 08:44:35.412574 kubelet[2392]: E0701 08:44:35.412533 2392 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 1 08:44:35.412994 kubelet[2392]: E0701 08:44:35.412664 2392 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 1 08:44:35.412994 kubelet[2392]: E0701 08:44:35.412727 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:35.412994 kubelet[2392]: E0701 08:44:35.412864 2392 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:36.181441 update_engine[1567]: I20250701 08:44:36.181337 1567 update_attempter.cc:509] Updating boot flags... Jul 1 08:44:38.404105 systemd[1]: Reload requested from client PID 2695 ('systemctl') (unit session-7.scope)... Jul 1 08:44:38.404120 systemd[1]: Reloading... Jul 1 08:44:38.502240 zram_generator::config[2738]: No configuration found. Jul 1 08:44:38.628530 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 1 08:44:38.780275 systemd[1]: Reloading finished in 375 ms. Jul 1 08:44:38.812744 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 08:44:38.834642 systemd[1]: kubelet.service: Deactivated successfully. Jul 1 08:44:38.834967 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:44:38.835031 systemd[1]: kubelet.service: Consumed 1.086s CPU time, 134.7M memory peak. Jul 1 08:44:38.837009 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 08:44:39.047619 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:44:39.059705 (kubelet)[2783]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 1 08:44:39.106770 kubelet[2783]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 1 08:44:39.106770 kubelet[2783]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jul 1 08:44:39.106770 kubelet[2783]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 1 08:44:39.107194 kubelet[2783]: I0701 08:44:39.106821 2783 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 1 08:44:39.114941 kubelet[2783]: I0701 08:44:39.114893 2783 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 1 08:44:39.114941 kubelet[2783]: I0701 08:44:39.114919 2783 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 1 08:44:39.115158 kubelet[2783]: I0701 08:44:39.115139 2783 server.go:956] "Client rotation is on, will bootstrap in background" Jul 1 08:44:39.116343 kubelet[2783]: I0701 08:44:39.116325 2783 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 1 08:44:39.118567 kubelet[2783]: I0701 08:44:39.118526 2783 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 1 08:44:39.122686 kubelet[2783]: I0701 08:44:39.122666 2783 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 1 08:44:39.128569 kubelet[2783]: I0701 08:44:39.128535 2783 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 1 08:44:39.128777 kubelet[2783]: I0701 08:44:39.128744 2783 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 1 08:44:39.128952 kubelet[2783]: I0701 08:44:39.128782 2783 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 1 08:44:39.129034 kubelet[2783]: I0701 08:44:39.128962 2783 topology_manager.go:138] "Creating topology manager with none policy" Jul 1 08:44:39.129034 
kubelet[2783]: I0701 08:44:39.128970 2783 container_manager_linux.go:303] "Creating device plugin manager" Jul 1 08:44:39.129034 kubelet[2783]: I0701 08:44:39.129014 2783 state_mem.go:36] "Initialized new in-memory state store" Jul 1 08:44:39.129209 kubelet[2783]: I0701 08:44:39.129197 2783 kubelet.go:480] "Attempting to sync node with API server" Jul 1 08:44:39.129244 kubelet[2783]: I0701 08:44:39.129214 2783 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 1 08:44:39.129244 kubelet[2783]: I0701 08:44:39.129233 2783 kubelet.go:386] "Adding apiserver pod source" Jul 1 08:44:39.129244 kubelet[2783]: I0701 08:44:39.129247 2783 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 1 08:44:39.133183 kubelet[2783]: I0701 08:44:39.131521 2783 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 1 08:44:39.133183 kubelet[2783]: I0701 08:44:39.131967 2783 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 1 08:44:39.138297 kubelet[2783]: I0701 08:44:39.138264 2783 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 1 08:44:39.138356 kubelet[2783]: I0701 08:44:39.138321 2783 server.go:1289] "Started kubelet" Jul 1 08:44:39.138979 kubelet[2783]: I0701 08:44:39.138929 2783 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 1 08:44:39.139386 kubelet[2783]: I0701 08:44:39.139363 2783 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 1 08:44:39.139532 kubelet[2783]: I0701 08:44:39.139508 2783 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 1 08:44:39.141143 kubelet[2783]: I0701 08:44:39.141125 2783 server.go:317] "Adding debug handlers to kubelet server" Jul 1 08:44:39.141811 kubelet[2783]: I0701 
08:44:39.141777 2783 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 1 08:44:39.142646 kubelet[2783]: I0701 08:44:39.142628 2783 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 1 08:44:39.143317 kubelet[2783]: I0701 08:44:39.143298 2783 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 1 08:44:39.143425 kubelet[2783]: I0701 08:44:39.143410 2783 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 1 08:44:39.143647 kubelet[2783]: I0701 08:44:39.143632 2783 reconciler.go:26] "Reconciler: start to sync state" Jul 1 08:44:39.145396 kubelet[2783]: I0701 08:44:39.145364 2783 factory.go:223] Registration of the systemd container factory successfully Jul 1 08:44:39.145603 kubelet[2783]: I0701 08:44:39.145440 2783 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 1 08:44:39.146191 kubelet[2783]: E0701 08:44:39.146133 2783 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 1 08:44:39.147697 kubelet[2783]: I0701 08:44:39.147673 2783 factory.go:223] Registration of the containerd container factory successfully Jul 1 08:44:39.152102 kubelet[2783]: I0701 08:44:39.152019 2783 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 1 08:44:39.160535 kubelet[2783]: I0701 08:44:39.160484 2783 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jul 1 08:44:39.161002 kubelet[2783]: I0701 08:44:39.160847 2783 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 1 08:44:39.161799 kubelet[2783]: I0701 08:44:39.161674 2783 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 1 08:44:39.161799 kubelet[2783]: I0701 08:44:39.161687 2783 kubelet.go:2436] "Starting kubelet main sync loop" Jul 1 08:44:39.161799 kubelet[2783]: E0701 08:44:39.161770 2783 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 1 08:44:39.187540 kubelet[2783]: I0701 08:44:39.187505 2783 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 1 08:44:39.187540 kubelet[2783]: I0701 08:44:39.187524 2783 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 1 08:44:39.187540 kubelet[2783]: I0701 08:44:39.187549 2783 state_mem.go:36] "Initialized new in-memory state store" Jul 1 08:44:39.187726 kubelet[2783]: I0701 08:44:39.187685 2783 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 1 08:44:39.187726 kubelet[2783]: I0701 08:44:39.187694 2783 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 1 08:44:39.187726 kubelet[2783]: I0701 08:44:39.187710 2783 policy_none.go:49] "None policy: Start" Jul 1 08:44:39.187726 kubelet[2783]: I0701 08:44:39.187718 2783 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 1 08:44:39.187726 kubelet[2783]: I0701 08:44:39.187728 2783 state_mem.go:35] "Initializing new in-memory state store" Jul 1 08:44:39.187868 kubelet[2783]: I0701 08:44:39.187850 2783 state_mem.go:75] "Updated machine memory state" Jul 1 08:44:39.191821 kubelet[2783]: E0701 08:44:39.191791 2783 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 1 08:44:39.192130 kubelet[2783]: I0701 08:44:39.191956 
2783 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 1 08:44:39.192130 kubelet[2783]: I0701 08:44:39.191972 2783 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 1 08:44:39.192394 kubelet[2783]: I0701 08:44:39.192368 2783 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 1 08:44:39.194982 kubelet[2783]: E0701 08:44:39.193343 2783 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 1 08:44:39.262848 kubelet[2783]: I0701 08:44:39.262811 2783 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 1 08:44:39.263141 kubelet[2783]: I0701 08:44:39.262900 2783 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 1 08:44:39.263331 kubelet[2783]: I0701 08:44:39.262952 2783 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 1 08:44:39.297422 kubelet[2783]: I0701 08:44:39.297388 2783 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 1 08:44:39.445294 kubelet[2783]: I0701 08:44:39.445238 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 1 08:44:39.445294 kubelet[2783]: I0701 08:44:39.445283 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 1 08:44:39.445482 kubelet[2783]: I0701 08:44:39.445310 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 1 08:44:39.445482 kubelet[2783]: I0701 08:44:39.445338 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 1 08:44:39.445482 kubelet[2783]: I0701 08:44:39.445357 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/01ebc87f99935e0ed7e244bc9bbf38f7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"01ebc87f99935e0ed7e244bc9bbf38f7\") " pod="kube-system/kube-apiserver-localhost" Jul 1 08:44:39.445482 kubelet[2783]: I0701 08:44:39.445375 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/01ebc87f99935e0ed7e244bc9bbf38f7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"01ebc87f99935e0ed7e244bc9bbf38f7\") " pod="kube-system/kube-apiserver-localhost" Jul 1 08:44:39.445482 kubelet[2783]: I0701 08:44:39.445397 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/01ebc87f99935e0ed7e244bc9bbf38f7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"01ebc87f99935e0ed7e244bc9bbf38f7\") " 
pod="kube-system/kube-apiserver-localhost" Jul 1 08:44:39.445599 kubelet[2783]: I0701 08:44:39.445421 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 1 08:44:39.445599 kubelet[2783]: I0701 08:44:39.445444 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 1 08:44:39.501196 kubelet[2783]: I0701 08:44:39.500962 2783 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 1 08:44:39.502861 kubelet[2783]: I0701 08:44:39.502805 2783 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 1 08:44:39.793291 kubelet[2783]: E0701 08:44:39.792978 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:39.793291 kubelet[2783]: E0701 08:44:39.793085 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:39.793291 kubelet[2783]: E0701 08:44:39.793135 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:40.130697 kubelet[2783]: I0701 08:44:40.130641 2783 apiserver.go:52] "Watching apiserver" Jul 1 08:44:40.143550 kubelet[2783]: 
I0701 08:44:40.143510 2783 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 1 08:44:40.174645 kubelet[2783]: E0701 08:44:40.174112 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:40.175352 kubelet[2783]: E0701 08:44:40.175297 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:40.175964 kubelet[2783]: I0701 08:44:40.175943 2783 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 1 08:44:40.235816 kubelet[2783]: E0701 08:44:40.235737 2783 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 1 08:44:40.236017 kubelet[2783]: E0701 08:44:40.235985 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:40.249945 kubelet[2783]: I0701 08:44:40.249822 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.249806475 podStartE2EDuration="1.249806475s" podCreationTimestamp="2025-07-01 08:44:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:44:40.249718799 +0000 UTC m=+1.184972056" watchObservedRunningTime="2025-07-01 08:44:40.249806475 +0000 UTC m=+1.185059732" Jul 1 08:44:40.264630 kubelet[2783]: I0701 08:44:40.264537 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.264516611 
podStartE2EDuration="1.264516611s" podCreationTimestamp="2025-07-01 08:44:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:44:40.264361749 +0000 UTC m=+1.199615016" watchObservedRunningTime="2025-07-01 08:44:40.264516611 +0000 UTC m=+1.199769858" Jul 1 08:44:40.283872 kubelet[2783]: I0701 08:44:40.283771 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.283749858 podStartE2EDuration="1.283749858s" podCreationTimestamp="2025-07-01 08:44:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:44:40.274972778 +0000 UTC m=+1.210226045" watchObservedRunningTime="2025-07-01 08:44:40.283749858 +0000 UTC m=+1.219003115" Jul 1 08:44:41.175373 kubelet[2783]: E0701 08:44:41.175335 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:41.175373 kubelet[2783]: E0701 08:44:41.175335 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:43.423259 kubelet[2783]: I0701 08:44:43.423218 2783 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 1 08:44:43.423793 kubelet[2783]: I0701 08:44:43.423774 2783 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 1 08:44:43.423823 containerd[1589]: time="2025-07-01T08:44:43.423573408Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 1 08:44:44.479868 systemd[1]: Created slice kubepods-besteffort-pod8d5f923b_37b5_4450_9bd4_625f6cfeea0e.slice - libcontainer container kubepods-besteffort-pod8d5f923b_37b5_4450_9bd4_625f6cfeea0e.slice. Jul 1 08:44:44.577219 kubelet[2783]: I0701 08:44:44.577011 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d5f923b-37b5-4450-9bd4-625f6cfeea0e-lib-modules\") pod \"kube-proxy-csv89\" (UID: \"8d5f923b-37b5-4450-9bd4-625f6cfeea0e\") " pod="kube-system/kube-proxy-csv89" Jul 1 08:44:44.577219 kubelet[2783]: I0701 08:44:44.577074 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8d5f923b-37b5-4450-9bd4-625f6cfeea0e-kube-proxy\") pod \"kube-proxy-csv89\" (UID: \"8d5f923b-37b5-4450-9bd4-625f6cfeea0e\") " pod="kube-system/kube-proxy-csv89" Jul 1 08:44:44.577219 kubelet[2783]: I0701 08:44:44.577092 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d5f923b-37b5-4450-9bd4-625f6cfeea0e-xtables-lock\") pod \"kube-proxy-csv89\" (UID: \"8d5f923b-37b5-4450-9bd4-625f6cfeea0e\") " pod="kube-system/kube-proxy-csv89" Jul 1 08:44:44.577904 kubelet[2783]: I0701 08:44:44.577118 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf9zq\" (UniqueName: \"kubernetes.io/projected/8d5f923b-37b5-4450-9bd4-625f6cfeea0e-kube-api-access-tf9zq\") pod \"kube-proxy-csv89\" (UID: \"8d5f923b-37b5-4450-9bd4-625f6cfeea0e\") " pod="kube-system/kube-proxy-csv89" Jul 1 08:44:44.595609 systemd[1]: Created slice kubepods-besteffort-pod121cccaf_4ba8_4462_8bbb_5fe6efd83373.slice - libcontainer container kubepods-besteffort-pod121cccaf_4ba8_4462_8bbb_5fe6efd83373.slice. 
Jul 1 08:44:44.678540 kubelet[2783]: I0701 08:44:44.678480 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/121cccaf-4ba8-4462-8bbb-5fe6efd83373-var-lib-calico\") pod \"tigera-operator-747864d56d-2g4ds\" (UID: \"121cccaf-4ba8-4462-8bbb-5fe6efd83373\") " pod="tigera-operator/tigera-operator-747864d56d-2g4ds" Jul 1 08:44:44.678713 kubelet[2783]: I0701 08:44:44.678587 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6fbf\" (UniqueName: \"kubernetes.io/projected/121cccaf-4ba8-4462-8bbb-5fe6efd83373-kube-api-access-w6fbf\") pod \"tigera-operator-747864d56d-2g4ds\" (UID: \"121cccaf-4ba8-4462-8bbb-5fe6efd83373\") " pod="tigera-operator/tigera-operator-747864d56d-2g4ds" Jul 1 08:44:44.792747 kubelet[2783]: E0701 08:44:44.792593 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:44.795919 containerd[1589]: time="2025-07-01T08:44:44.795246847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-csv89,Uid:8d5f923b-37b5-4450-9bd4-625f6cfeea0e,Namespace:kube-system,Attempt:0,}" Jul 1 08:44:44.846947 containerd[1589]: time="2025-07-01T08:44:44.846894799Z" level=info msg="connecting to shim 2a2c9a349fa7c6c0a694d6f7a0684bffaac3e851f0e14466d22a651f4e8aac85" address="unix:///run/containerd/s/f6e796cf72bba6a26f68c87633ed4658f1600c3f9412e8378c6646fa6b1de7a8" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:44:44.861655 kubelet[2783]: E0701 08:44:44.861550 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:44.882454 systemd[1]: Started 
cri-containerd-2a2c9a349fa7c6c0a694d6f7a0684bffaac3e851f0e14466d22a651f4e8aac85.scope - libcontainer container 2a2c9a349fa7c6c0a694d6f7a0684bffaac3e851f0e14466d22a651f4e8aac85. Jul 1 08:44:44.900014 containerd[1589]: time="2025-07-01T08:44:44.899820705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-2g4ds,Uid:121cccaf-4ba8-4462-8bbb-5fe6efd83373,Namespace:tigera-operator,Attempt:0,}" Jul 1 08:44:44.915644 containerd[1589]: time="2025-07-01T08:44:44.915577720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-csv89,Uid:8d5f923b-37b5-4450-9bd4-625f6cfeea0e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a2c9a349fa7c6c0a694d6f7a0684bffaac3e851f0e14466d22a651f4e8aac85\"" Jul 1 08:44:44.916697 kubelet[2783]: E0701 08:44:44.916622 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:44.926193 containerd[1589]: time="2025-07-01T08:44:44.924639111Z" level=info msg="CreateContainer within sandbox \"2a2c9a349fa7c6c0a694d6f7a0684bffaac3e851f0e14466d22a651f4e8aac85\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 1 08:44:44.933644 containerd[1589]: time="2025-07-01T08:44:44.933579294Z" level=info msg="connecting to shim f16f68277dba513fa44b7d7db5f8d90305f4d1627b0ca9565165f4daa1e1449b" address="unix:///run/containerd/s/2e970d9ed3b418d3f8fb1f517582f51c89beb859871e783fb4837fb3c31fdb68" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:44:44.940189 containerd[1589]: time="2025-07-01T08:44:44.939940957Z" level=info msg="Container 4e48e66d2f2bdaf9d9ba1500a82e0491f551ef2aa3ad5f8c29e477fd08ec4a79: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:44:44.951337 containerd[1589]: time="2025-07-01T08:44:44.951292023Z" level=info msg="CreateContainer within sandbox \"2a2c9a349fa7c6c0a694d6f7a0684bffaac3e851f0e14466d22a651f4e8aac85\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4e48e66d2f2bdaf9d9ba1500a82e0491f551ef2aa3ad5f8c29e477fd08ec4a79\"" Jul 1 08:44:44.952422 containerd[1589]: time="2025-07-01T08:44:44.952389695Z" level=info msg="StartContainer for \"4e48e66d2f2bdaf9d9ba1500a82e0491f551ef2aa3ad5f8c29e477fd08ec4a79\"" Jul 1 08:44:44.954512 containerd[1589]: time="2025-07-01T08:44:44.954474452Z" level=info msg="connecting to shim 4e48e66d2f2bdaf9d9ba1500a82e0491f551ef2aa3ad5f8c29e477fd08ec4a79" address="unix:///run/containerd/s/f6e796cf72bba6a26f68c87633ed4658f1600c3f9412e8378c6646fa6b1de7a8" protocol=ttrpc version=3 Jul 1 08:44:44.965422 systemd[1]: Started cri-containerd-f16f68277dba513fa44b7d7db5f8d90305f4d1627b0ca9565165f4daa1e1449b.scope - libcontainer container f16f68277dba513fa44b7d7db5f8d90305f4d1627b0ca9565165f4daa1e1449b. Jul 1 08:44:44.973309 systemd[1]: Started cri-containerd-4e48e66d2f2bdaf9d9ba1500a82e0491f551ef2aa3ad5f8c29e477fd08ec4a79.scope - libcontainer container 4e48e66d2f2bdaf9d9ba1500a82e0491f551ef2aa3ad5f8c29e477fd08ec4a79. 
Jul 1 08:44:45.020814 containerd[1589]: time="2025-07-01T08:44:45.020607021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-2g4ds,Uid:121cccaf-4ba8-4462-8bbb-5fe6efd83373,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f16f68277dba513fa44b7d7db5f8d90305f4d1627b0ca9565165f4daa1e1449b\"" Jul 1 08:44:45.023428 containerd[1589]: time="2025-07-01T08:44:45.023390154Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 1 08:44:45.030889 containerd[1589]: time="2025-07-01T08:44:45.030834906Z" level=info msg="StartContainer for \"4e48e66d2f2bdaf9d9ba1500a82e0491f551ef2aa3ad5f8c29e477fd08ec4a79\" returns successfully" Jul 1 08:44:45.184826 kubelet[2783]: E0701 08:44:45.184553 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:45.185068 kubelet[2783]: E0701 08:44:45.184966 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:45.212683 kubelet[2783]: I0701 08:44:45.212601 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-csv89" podStartSLOduration=1.212582525 podStartE2EDuration="1.212582525s" podCreationTimestamp="2025-07-01 08:44:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:44:45.202617266 +0000 UTC m=+6.137870523" watchObservedRunningTime="2025-07-01 08:44:45.212582525 +0000 UTC m=+6.147835782" Jul 1 08:44:46.485234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount137509961.mount: Deactivated successfully. 
Jul 1 08:44:47.228678 containerd[1589]: time="2025-07-01T08:44:47.228604295Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:44:47.229580 containerd[1589]: time="2025-07-01T08:44:47.229516165Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 1 08:44:47.230889 containerd[1589]: time="2025-07-01T08:44:47.230822759Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:44:47.236670 containerd[1589]: time="2025-07-01T08:44:47.236584376Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:44:47.237239 containerd[1589]: time="2025-07-01T08:44:47.237155543Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.213727548s" Jul 1 08:44:47.237239 containerd[1589]: time="2025-07-01T08:44:47.237222309Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 1 08:44:47.243940 containerd[1589]: time="2025-07-01T08:44:47.243827567Z" level=info msg="CreateContainer within sandbox \"f16f68277dba513fa44b7d7db5f8d90305f4d1627b0ca9565165f4daa1e1449b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 1 08:44:47.260999 containerd[1589]: time="2025-07-01T08:44:47.259281543Z" level=info msg="Container 
7b245da9c61eb643ef28d896c39405d3b5a21f982c7d174b6d458c745cc5d363: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:44:47.273090 containerd[1589]: time="2025-07-01T08:44:47.273004846Z" level=info msg="CreateContainer within sandbox \"f16f68277dba513fa44b7d7db5f8d90305f4d1627b0ca9565165f4daa1e1449b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7b245da9c61eb643ef28d896c39405d3b5a21f982c7d174b6d458c745cc5d363\"" Jul 1 08:44:47.273688 containerd[1589]: time="2025-07-01T08:44:47.273611099Z" level=info msg="StartContainer for \"7b245da9c61eb643ef28d896c39405d3b5a21f982c7d174b6d458c745cc5d363\"" Jul 1 08:44:47.274661 containerd[1589]: time="2025-07-01T08:44:47.274633758Z" level=info msg="connecting to shim 7b245da9c61eb643ef28d896c39405d3b5a21f982c7d174b6d458c745cc5d363" address="unix:///run/containerd/s/2e970d9ed3b418d3f8fb1f517582f51c89beb859871e783fb4837fb3c31fdb68" protocol=ttrpc version=3 Jul 1 08:44:47.322668 systemd[1]: Started cri-containerd-7b245da9c61eb643ef28d896c39405d3b5a21f982c7d174b6d458c745cc5d363.scope - libcontainer container 7b245da9c61eb643ef28d896c39405d3b5a21f982c7d174b6d458c745cc5d363. 
Jul 1 08:44:47.360942 containerd[1589]: time="2025-07-01T08:44:47.360895161Z" level=info msg="StartContainer for \"7b245da9c61eb643ef28d896c39405d3b5a21f982c7d174b6d458c745cc5d363\" returns successfully" Jul 1 08:44:48.404694 kubelet[2783]: I0701 08:44:48.404548 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-2g4ds" podStartSLOduration=2.188847751 podStartE2EDuration="4.404529045s" podCreationTimestamp="2025-07-01 08:44:44 +0000 UTC" firstStartedPulling="2025-07-01 08:44:45.022467312 +0000 UTC m=+5.957720569" lastFinishedPulling="2025-07-01 08:44:47.238148606 +0000 UTC m=+8.173401863" observedRunningTime="2025-07-01 08:44:48.404360266 +0000 UTC m=+9.339613524" watchObservedRunningTime="2025-07-01 08:44:48.404529045 +0000 UTC m=+9.339782332" Jul 1 08:44:48.821053 kubelet[2783]: E0701 08:44:48.820878 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:49.196719 kubelet[2783]: E0701 08:44:49.196649 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:49.434250 kubelet[2783]: E0701 08:44:49.434202 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:50.198855 kubelet[2783]: E0701 08:44:50.198810 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:44:51.203481 kubelet[2783]: E0701 08:44:51.203424 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jul 1 08:44:51.262866 systemd[1]: cri-containerd-7b245da9c61eb643ef28d896c39405d3b5a21f982c7d174b6d458c745cc5d363.scope: Deactivated successfully. Jul 1 08:44:51.264238 containerd[1589]: time="2025-07-01T08:44:51.264078563Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b245da9c61eb643ef28d896c39405d3b5a21f982c7d174b6d458c745cc5d363\" id:\"7b245da9c61eb643ef28d896c39405d3b5a21f982c7d174b6d458c745cc5d363\" pid:3129 exit_status:1 exited_at:{seconds:1751359491 nanos:263448227}" Jul 1 08:44:51.265034 containerd[1589]: time="2025-07-01T08:44:51.264732706Z" level=info msg="received exit event container_id:\"7b245da9c61eb643ef28d896c39405d3b5a21f982c7d174b6d458c745cc5d363\" id:\"7b245da9c61eb643ef28d896c39405d3b5a21f982c7d174b6d458c745cc5d363\" pid:3129 exit_status:1 exited_at:{seconds:1751359491 nanos:263448227}" Jul 1 08:44:51.293247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b245da9c61eb643ef28d896c39405d3b5a21f982c7d174b6d458c745cc5d363-rootfs.mount: Deactivated successfully. 
Jul 1 08:44:53.208450 kubelet[2783]: I0701 08:44:53.208414 2783 scope.go:117] "RemoveContainer" containerID="7b245da9c61eb643ef28d896c39405d3b5a21f982c7d174b6d458c745cc5d363" Jul 1 08:44:53.210095 containerd[1589]: time="2025-07-01T08:44:53.210005901Z" level=info msg="CreateContainer within sandbox \"f16f68277dba513fa44b7d7db5f8d90305f4d1627b0ca9565165f4daa1e1449b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jul 1 08:44:53.222321 containerd[1589]: time="2025-07-01T08:44:53.222266602Z" level=info msg="Container 6b227fe4f42e8f75d80c0ff4a8e1b5b6f1aeb42b3773706d229ba2cd68f3c8ba: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:44:53.231376 containerd[1589]: time="2025-07-01T08:44:53.231320105Z" level=info msg="CreateContainer within sandbox \"f16f68277dba513fa44b7d7db5f8d90305f4d1627b0ca9565165f4daa1e1449b\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"6b227fe4f42e8f75d80c0ff4a8e1b5b6f1aeb42b3773706d229ba2cd68f3c8ba\"" Jul 1 08:44:53.232050 containerd[1589]: time="2025-07-01T08:44:53.231979316Z" level=info msg="StartContainer for \"6b227fe4f42e8f75d80c0ff4a8e1b5b6f1aeb42b3773706d229ba2cd68f3c8ba\"" Jul 1 08:44:53.232853 containerd[1589]: time="2025-07-01T08:44:53.232809799Z" level=info msg="connecting to shim 6b227fe4f42e8f75d80c0ff4a8e1b5b6f1aeb42b3773706d229ba2cd68f3c8ba" address="unix:///run/containerd/s/2e970d9ed3b418d3f8fb1f517582f51c89beb859871e783fb4837fb3c31fdb68" protocol=ttrpc version=3 Jul 1 08:44:53.260416 systemd[1]: Started cri-containerd-6b227fe4f42e8f75d80c0ff4a8e1b5b6f1aeb42b3773706d229ba2cd68f3c8ba.scope - libcontainer container 6b227fe4f42e8f75d80c0ff4a8e1b5b6f1aeb42b3773706d229ba2cd68f3c8ba. 
Jul 1 08:44:53.291285 containerd[1589]: time="2025-07-01T08:44:53.291247007Z" level=info msg="StartContainer for \"6b227fe4f42e8f75d80c0ff4a8e1b5b6f1aeb42b3773706d229ba2cd68f3c8ba\" returns successfully" Jul 1 08:44:55.162508 sudo[1803]: pam_unix(sudo:session): session closed for user root Jul 1 08:44:55.164334 sshd[1802]: Connection closed by 10.0.0.1 port 60856 Jul 1 08:44:55.165204 sshd-session[1799]: pam_unix(sshd:session): session closed for user core Jul 1 08:44:55.169618 systemd[1]: sshd@6-10.0.0.127:22-10.0.0.1:60856.service: Deactivated successfully. Jul 1 08:44:55.171925 systemd[1]: session-7.scope: Deactivated successfully. Jul 1 08:44:55.172260 systemd[1]: session-7.scope: Consumed 6.022s CPU time, 226.6M memory peak. Jul 1 08:44:55.174484 systemd-logind[1566]: Session 7 logged out. Waiting for processes to exit. Jul 1 08:44:55.176277 systemd-logind[1566]: Removed session 7. Jul 1 08:45:00.250394 systemd[1]: Created slice kubepods-besteffort-pod35e1eb54_87c4_45b5_94b4_a318a6eb11a0.slice - libcontainer container kubepods-besteffort-pod35e1eb54_87c4_45b5_94b4_a318a6eb11a0.slice. 
Jul 1 08:45:00.275116 kubelet[2783]: I0701 08:45:00.275035 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35e1eb54-87c4-45b5-94b4-a318a6eb11a0-tigera-ca-bundle\") pod \"calico-typha-558f7cdcdd-nsgcz\" (UID: \"35e1eb54-87c4-45b5-94b4-a318a6eb11a0\") " pod="calico-system/calico-typha-558f7cdcdd-nsgcz" Jul 1 08:45:00.275116 kubelet[2783]: I0701 08:45:00.275085 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/35e1eb54-87c4-45b5-94b4-a318a6eb11a0-typha-certs\") pod \"calico-typha-558f7cdcdd-nsgcz\" (UID: \"35e1eb54-87c4-45b5-94b4-a318a6eb11a0\") " pod="calico-system/calico-typha-558f7cdcdd-nsgcz" Jul 1 08:45:00.275116 kubelet[2783]: I0701 08:45:00.275105 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk4qr\" (UniqueName: \"kubernetes.io/projected/35e1eb54-87c4-45b5-94b4-a318a6eb11a0-kube-api-access-zk4qr\") pod \"calico-typha-558f7cdcdd-nsgcz\" (UID: \"35e1eb54-87c4-45b5-94b4-a318a6eb11a0\") " pod="calico-system/calico-typha-558f7cdcdd-nsgcz" Jul 1 08:45:00.555125 systemd[1]: Created slice kubepods-besteffort-pod1207ffe5_b7dd_48f1_b2ab_0a64e6d6b87f.slice - libcontainer container kubepods-besteffort-pod1207ffe5_b7dd_48f1_b2ab_0a64e6d6b87f.slice. 
Jul 1 08:45:00.557068 kubelet[2783]: E0701 08:45:00.555814 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:45:00.557920 containerd[1589]: time="2025-07-01T08:45:00.557775948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-558f7cdcdd-nsgcz,Uid:35e1eb54-87c4-45b5-94b4-a318a6eb11a0,Namespace:calico-system,Attempt:0,}" Jul 1 08:45:00.577722 kubelet[2783]: I0701 08:45:00.577641 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1207ffe5-b7dd-48f1-b2ab-0a64e6d6b87f-var-lib-calico\") pod \"calico-node-kdmdh\" (UID: \"1207ffe5-b7dd-48f1-b2ab-0a64e6d6b87f\") " pod="calico-system/calico-node-kdmdh" Jul 1 08:45:00.577722 kubelet[2783]: I0701 08:45:00.577721 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1207ffe5-b7dd-48f1-b2ab-0a64e6d6b87f-tigera-ca-bundle\") pod \"calico-node-kdmdh\" (UID: \"1207ffe5-b7dd-48f1-b2ab-0a64e6d6b87f\") " pod="calico-system/calico-node-kdmdh" Jul 1 08:45:00.577898 kubelet[2783]: I0701 08:45:00.577741 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1207ffe5-b7dd-48f1-b2ab-0a64e6d6b87f-var-run-calico\") pod \"calico-node-kdmdh\" (UID: \"1207ffe5-b7dd-48f1-b2ab-0a64e6d6b87f\") " pod="calico-system/calico-node-kdmdh" Jul 1 08:45:00.577898 kubelet[2783]: I0701 08:45:00.577764 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1207ffe5-b7dd-48f1-b2ab-0a64e6d6b87f-cni-bin-dir\") pod \"calico-node-kdmdh\" (UID: \"1207ffe5-b7dd-48f1-b2ab-0a64e6d6b87f\") " 
pod="calico-system/calico-node-kdmdh" Jul 1 08:45:00.577898 kubelet[2783]: I0701 08:45:00.577781 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1207ffe5-b7dd-48f1-b2ab-0a64e6d6b87f-policysync\") pod \"calico-node-kdmdh\" (UID: \"1207ffe5-b7dd-48f1-b2ab-0a64e6d6b87f\") " pod="calico-system/calico-node-kdmdh" Jul 1 08:45:00.577898 kubelet[2783]: I0701 08:45:00.577800 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1207ffe5-b7dd-48f1-b2ab-0a64e6d6b87f-cni-net-dir\") pod \"calico-node-kdmdh\" (UID: \"1207ffe5-b7dd-48f1-b2ab-0a64e6d6b87f\") " pod="calico-system/calico-node-kdmdh" Jul 1 08:45:00.577898 kubelet[2783]: I0701 08:45:00.577820 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1207ffe5-b7dd-48f1-b2ab-0a64e6d6b87f-node-certs\") pod \"calico-node-kdmdh\" (UID: \"1207ffe5-b7dd-48f1-b2ab-0a64e6d6b87f\") " pod="calico-system/calico-node-kdmdh" Jul 1 08:45:00.578073 kubelet[2783]: I0701 08:45:00.577838 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1207ffe5-b7dd-48f1-b2ab-0a64e6d6b87f-xtables-lock\") pod \"calico-node-kdmdh\" (UID: \"1207ffe5-b7dd-48f1-b2ab-0a64e6d6b87f\") " pod="calico-system/calico-node-kdmdh" Jul 1 08:45:00.578073 kubelet[2783]: I0701 08:45:00.577862 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fngr\" (UniqueName: \"kubernetes.io/projected/1207ffe5-b7dd-48f1-b2ab-0a64e6d6b87f-kube-api-access-5fngr\") pod \"calico-node-kdmdh\" (UID: \"1207ffe5-b7dd-48f1-b2ab-0a64e6d6b87f\") " pod="calico-system/calico-node-kdmdh" Jul 1 08:45:00.578073 kubelet[2783]: I0701 
08:45:00.577889 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1207ffe5-b7dd-48f1-b2ab-0a64e6d6b87f-cni-log-dir\") pod \"calico-node-kdmdh\" (UID: \"1207ffe5-b7dd-48f1-b2ab-0a64e6d6b87f\") " pod="calico-system/calico-node-kdmdh" Jul 1 08:45:00.578073 kubelet[2783]: I0701 08:45:00.577905 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1207ffe5-b7dd-48f1-b2ab-0a64e6d6b87f-flexvol-driver-host\") pod \"calico-node-kdmdh\" (UID: \"1207ffe5-b7dd-48f1-b2ab-0a64e6d6b87f\") " pod="calico-system/calico-node-kdmdh" Jul 1 08:45:00.578073 kubelet[2783]: I0701 08:45:00.577924 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1207ffe5-b7dd-48f1-b2ab-0a64e6d6b87f-lib-modules\") pod \"calico-node-kdmdh\" (UID: \"1207ffe5-b7dd-48f1-b2ab-0a64e6d6b87f\") " pod="calico-system/calico-node-kdmdh" Jul 1 08:45:00.602217 containerd[1589]: time="2025-07-01T08:45:00.602131366Z" level=info msg="connecting to shim 9a5458fb6465d74f32333d5174789afad253668e9d542f5bb8aa1715dc541291" address="unix:///run/containerd/s/8fa82fb3909d8da6eb8d216aa3de1da3ba759219a6f8954dceae70add2bf001d" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:45:00.632363 systemd[1]: Started cri-containerd-9a5458fb6465d74f32333d5174789afad253668e9d542f5bb8aa1715dc541291.scope - libcontainer container 9a5458fb6465d74f32333d5174789afad253668e9d542f5bb8aa1715dc541291. 
Jul 1 08:45:00.682358 kubelet[2783]: E0701 08:45:00.681280 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.682358 kubelet[2783]: W0701 08:45:00.682257 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.683992 kubelet[2783]: E0701 08:45:00.683881 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:00.684139 kubelet[2783]: E0701 08:45:00.684112 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.684240 kubelet[2783]: W0701 08:45:00.684225 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.684406 kubelet[2783]: E0701 08:45:00.684321 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:00.684778 kubelet[2783]: E0701 08:45:00.684754 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.684927 kubelet[2783]: W0701 08:45:00.684864 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.684927 kubelet[2783]: E0701 08:45:00.684882 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:00.685331 kubelet[2783]: E0701 08:45:00.685315 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.685691 kubelet[2783]: W0701 08:45:00.685439 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.688229 kubelet[2783]: E0701 08:45:00.688202 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:00.688735 kubelet[2783]: E0701 08:45:00.688646 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.688735 kubelet[2783]: W0701 08:45:00.688693 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.688832 kubelet[2783]: E0701 08:45:00.688739 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:00.689045 kubelet[2783]: E0701 08:45:00.688971 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.689045 kubelet[2783]: W0701 08:45:00.688992 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.689045 kubelet[2783]: E0701 08:45:00.689003 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:00.689422 kubelet[2783]: E0701 08:45:00.689377 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.689422 kubelet[2783]: W0701 08:45:00.689394 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.689422 kubelet[2783]: E0701 08:45:00.689405 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:00.689982 kubelet[2783]: E0701 08:45:00.689653 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.689982 kubelet[2783]: W0701 08:45:00.689668 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.689982 kubelet[2783]: E0701 08:45:00.689716 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:00.691748 kubelet[2783]: E0701 08:45:00.691644 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.691923 kubelet[2783]: W0701 08:45:00.691880 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.691973 kubelet[2783]: E0701 08:45:00.691921 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:00.693075 kubelet[2783]: E0701 08:45:00.692559 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.693075 kubelet[2783]: W0701 08:45:00.692573 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.693075 kubelet[2783]: E0701 08:45:00.692588 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:00.696101 kubelet[2783]: E0701 08:45:00.696082 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.696101 kubelet[2783]: W0701 08:45:00.696095 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.696224 kubelet[2783]: E0701 08:45:00.696107 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:00.696259 containerd[1589]: time="2025-07-01T08:45:00.696107940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-558f7cdcdd-nsgcz,Uid:35e1eb54-87c4-45b5-94b4-a318a6eb11a0,Namespace:calico-system,Attempt:0,} returns sandbox id \"9a5458fb6465d74f32333d5174789afad253668e9d542f5bb8aa1715dc541291\"" Jul 1 08:45:00.696832 kubelet[2783]: E0701 08:45:00.696696 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:45:00.698607 containerd[1589]: time="2025-07-01T08:45:00.698540994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 1 08:45:00.843362 kubelet[2783]: E0701 08:45:00.843212 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z5dkh" podUID="cea5ec18-e730-41e6-b2b5-7746f9389260" Jul 1 08:45:00.860409 containerd[1589]: time="2025-07-01T08:45:00.860358974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kdmdh,Uid:1207ffe5-b7dd-48f1-b2ab-0a64e6d6b87f,Namespace:calico-system,Attempt:0,}" Jul 1 08:45:00.867234 kubelet[2783]: E0701 08:45:00.867015 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.867234 kubelet[2783]: W0701 08:45:00.867039 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.867234 kubelet[2783]: E0701 08:45:00.867069 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, 
skipping. Error: unexpected end of JSON input" Jul 1 08:45:00.867498 kubelet[2783]: E0701 08:45:00.867438 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.867498 kubelet[2783]: W0701 08:45:00.867447 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.867498 kubelet[2783]: E0701 08:45:00.867459 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:00.867722 kubelet[2783]: E0701 08:45:00.867687 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.867722 kubelet[2783]: W0701 08:45:00.867698 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.867722 kubelet[2783]: E0701 08:45:00.867707 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:00.868000 kubelet[2783]: E0701 08:45:00.867981 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.868000 kubelet[2783]: W0701 08:45:00.867992 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.868000 kubelet[2783]: E0701 08:45:00.868001 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:00.868222 kubelet[2783]: E0701 08:45:00.868204 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.868222 kubelet[2783]: W0701 08:45:00.868214 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.868222 kubelet[2783]: E0701 08:45:00.868221 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:00.868440 kubelet[2783]: E0701 08:45:00.868391 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.868440 kubelet[2783]: W0701 08:45:00.868399 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.868440 kubelet[2783]: E0701 08:45:00.868410 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:00.868612 kubelet[2783]: E0701 08:45:00.868593 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.868612 kubelet[2783]: W0701 08:45:00.868604 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.868612 kubelet[2783]: E0701 08:45:00.868614 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:00.868851 kubelet[2783]: E0701 08:45:00.868832 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.868851 kubelet[2783]: W0701 08:45:00.868842 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.868851 kubelet[2783]: E0701 08:45:00.868851 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:00.869059 kubelet[2783]: E0701 08:45:00.869030 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.869059 kubelet[2783]: W0701 08:45:00.869039 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.869059 kubelet[2783]: E0701 08:45:00.869047 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:00.869258 kubelet[2783]: E0701 08:45:00.869241 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.869258 kubelet[2783]: W0701 08:45:00.869251 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.869258 kubelet[2783]: E0701 08:45:00.869258 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:00.869431 kubelet[2783]: E0701 08:45:00.869415 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.869431 kubelet[2783]: W0701 08:45:00.869424 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.869431 kubelet[2783]: E0701 08:45:00.869432 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:00.869723 kubelet[2783]: E0701 08:45:00.869666 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.869723 kubelet[2783]: W0701 08:45:00.869710 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.869813 kubelet[2783]: E0701 08:45:00.869745 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:00.870032 kubelet[2783]: E0701 08:45:00.870015 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.870032 kubelet[2783]: W0701 08:45:00.870031 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.870093 kubelet[2783]: E0701 08:45:00.870042 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:00.870266 kubelet[2783]: E0701 08:45:00.870249 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.870266 kubelet[2783]: W0701 08:45:00.870262 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.870336 kubelet[2783]: E0701 08:45:00.870271 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:00.870469 kubelet[2783]: E0701 08:45:00.870453 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.870469 kubelet[2783]: W0701 08:45:00.870465 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.870514 kubelet[2783]: E0701 08:45:00.870475 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:00.870730 kubelet[2783]: E0701 08:45:00.870710 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.870730 kubelet[2783]: W0701 08:45:00.870725 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.870815 kubelet[2783]: E0701 08:45:00.870738 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:00.870957 kubelet[2783]: E0701 08:45:00.870941 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.870957 kubelet[2783]: W0701 08:45:00.870954 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.871002 kubelet[2783]: E0701 08:45:00.870964 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:00.871203 kubelet[2783]: E0701 08:45:00.871151 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.871203 kubelet[2783]: W0701 08:45:00.871179 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.871203 kubelet[2783]: E0701 08:45:00.871189 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:00.871392 kubelet[2783]: E0701 08:45:00.871374 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.871392 kubelet[2783]: W0701 08:45:00.871386 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.871445 kubelet[2783]: E0701 08:45:00.871399 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:00.871578 kubelet[2783]: E0701 08:45:00.871562 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.871578 kubelet[2783]: W0701 08:45:00.871576 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.871625 kubelet[2783]: E0701 08:45:00.871586 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:00.880184 kubelet[2783]: E0701 08:45:00.880115 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.880184 kubelet[2783]: W0701 08:45:00.880144 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.880184 kubelet[2783]: E0701 08:45:00.880193 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:00.880424 kubelet[2783]: I0701 08:45:00.880225 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cea5ec18-e730-41e6-b2b5-7746f9389260-registration-dir\") pod \"csi-node-driver-z5dkh\" (UID: \"cea5ec18-e730-41e6-b2b5-7746f9389260\") " pod="calico-system/csi-node-driver-z5dkh" Jul 1 08:45:00.880424 kubelet[2783]: E0701 08:45:00.880417 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.880485 kubelet[2783]: W0701 08:45:00.880429 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.880485 kubelet[2783]: E0701 08:45:00.880441 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:00.880485 kubelet[2783]: I0701 08:45:00.880463 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cea5ec18-e730-41e6-b2b5-7746f9389260-socket-dir\") pod \"csi-node-driver-z5dkh\" (UID: \"cea5ec18-e730-41e6-b2b5-7746f9389260\") " pod="calico-system/csi-node-driver-z5dkh" Jul 1 08:45:00.880907 kubelet[2783]: E0701 08:45:00.880869 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.880907 kubelet[2783]: W0701 08:45:00.880904 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.881002 kubelet[2783]: E0701 08:45:00.880931 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:00.881160 kubelet[2783]: E0701 08:45:00.881142 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.881160 kubelet[2783]: W0701 08:45:00.881154 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.881160 kubelet[2783]: E0701 08:45:00.881175 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:00.881431 kubelet[2783]: E0701 08:45:00.881414 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.881431 kubelet[2783]: W0701 08:45:00.881424 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.881431 kubelet[2783]: E0701 08:45:00.881432 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:00.881549 kubelet[2783]: I0701 08:45:00.881471 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cea5ec18-e730-41e6-b2b5-7746f9389260-varrun\") pod \"csi-node-driver-z5dkh\" (UID: \"cea5ec18-e730-41e6-b2b5-7746f9389260\") " pod="calico-system/csi-node-driver-z5dkh" Jul 1 08:45:00.881750 kubelet[2783]: E0701 08:45:00.881732 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.881750 kubelet[2783]: W0701 08:45:00.881745 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.881872 kubelet[2783]: E0701 08:45:00.881755 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:00.881985 kubelet[2783]: E0701 08:45:00.881967 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.881985 kubelet[2783]: W0701 08:45:00.881978 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.881985 kubelet[2783]: E0701 08:45:00.881986 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:00.882203 kubelet[2783]: E0701 08:45:00.882185 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:00.882203 kubelet[2783]: W0701 08:45:00.882196 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:00.882203 kubelet[2783]: E0701 08:45:00.882204 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 1 08:45:00.882400 kubelet[2783]: E0701 08:45:00.882381 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.882400 kubelet[2783]: W0701 08:45:00.882392 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.882400 kubelet[2783]: E0701 08:45:00.882400 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.882607 kubelet[2783]: E0701 08:45:00.882563 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.882607 kubelet[2783]: W0701 08:45:00.882575 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.882607 kubelet[2783]: E0701 08:45:00.882583 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.882607 kubelet[2783]: I0701 08:45:00.882605 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cea5ec18-e730-41e6-b2b5-7746f9389260-kubelet-dir\") pod \"csi-node-driver-z5dkh\" (UID: \"cea5ec18-e730-41e6-b2b5-7746f9389260\") " pod="calico-system/csi-node-driver-z5dkh"
Jul 1 08:45:00.882876 kubelet[2783]: E0701 08:45:00.882843 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.882876 kubelet[2783]: W0701 08:45:00.882857 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.882876 kubelet[2783]: E0701 08:45:00.882866 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.883142 kubelet[2783]: I0701 08:45:00.882887 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6264\" (UniqueName: \"kubernetes.io/projected/cea5ec18-e730-41e6-b2b5-7746f9389260-kube-api-access-s6264\") pod \"csi-node-driver-z5dkh\" (UID: \"cea5ec18-e730-41e6-b2b5-7746f9389260\") " pod="calico-system/csi-node-driver-z5dkh"
Jul 1 08:45:00.883228 kubelet[2783]: E0701 08:45:00.883207 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.883258 kubelet[2783]: W0701 08:45:00.883225 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.883258 kubelet[2783]: E0701 08:45:00.883242 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.883435 kubelet[2783]: E0701 08:45:00.883421 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.883435 kubelet[2783]: W0701 08:45:00.883430 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.883492 kubelet[2783]: E0701 08:45:00.883439 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.883649 kubelet[2783]: E0701 08:45:00.883618 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.883649 kubelet[2783]: W0701 08:45:00.883631 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.883649 kubelet[2783]: E0701 08:45:00.883641 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.883908 kubelet[2783]: E0701 08:45:00.883849 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.883908 kubelet[2783]: W0701 08:45:00.883860 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.883908 kubelet[2783]: E0701 08:45:00.883872 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.885906 containerd[1589]: time="2025-07-01T08:45:00.885854159Z" level=info msg="connecting to shim c906ff5418823c2ba4dc8a957db10165d5b020235982786d5ae4db40add1c7b6" address="unix:///run/containerd/s/700c49e6c54393dd19755fd8072d0514a2a0dd942b90198c8676a509fdc1b3cd" namespace=k8s.io protocol=ttrpc version=3
Jul 1 08:45:00.915599 systemd[1]: Started cri-containerd-c906ff5418823c2ba4dc8a957db10165d5b020235982786d5ae4db40add1c7b6.scope - libcontainer container c906ff5418823c2ba4dc8a957db10165d5b020235982786d5ae4db40add1c7b6.
Jul 1 08:45:00.975153 containerd[1589]: time="2025-07-01T08:45:00.975081526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kdmdh,Uid:1207ffe5-b7dd-48f1-b2ab-0a64e6d6b87f,Namespace:calico-system,Attempt:0,} returns sandbox id \"c906ff5418823c2ba4dc8a957db10165d5b020235982786d5ae4db40add1c7b6\""
Jul 1 08:45:00.983652 kubelet[2783]: E0701 08:45:00.983620 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.983652 kubelet[2783]: W0701 08:45:00.983646 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.983821 kubelet[2783]: E0701 08:45:00.983668 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.983996 kubelet[2783]: E0701 08:45:00.983963 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.983996 kubelet[2783]: W0701 08:45:00.983981 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.983996 kubelet[2783]: E0701 08:45:00.983994 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.984310 kubelet[2783]: E0701 08:45:00.984287 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.984310 kubelet[2783]: W0701 08:45:00.984306 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.984372 kubelet[2783]: E0701 08:45:00.984318 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.984520 kubelet[2783]: E0701 08:45:00.984502 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.984520 kubelet[2783]: W0701 08:45:00.984515 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.984569 kubelet[2783]: E0701 08:45:00.984526 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.984773 kubelet[2783]: E0701 08:45:00.984756 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.984773 kubelet[2783]: W0701 08:45:00.984768 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.984838 kubelet[2783]: E0701 08:45:00.984778 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.985026 kubelet[2783]: E0701 08:45:00.985009 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.985026 kubelet[2783]: W0701 08:45:00.985022 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.985076 kubelet[2783]: E0701 08:45:00.985033 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.985264 kubelet[2783]: E0701 08:45:00.985247 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.985264 kubelet[2783]: W0701 08:45:00.985260 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.985336 kubelet[2783]: E0701 08:45:00.985270 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.985649 kubelet[2783]: E0701 08:45:00.985621 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.985693 kubelet[2783]: W0701 08:45:00.985647 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.985693 kubelet[2783]: E0701 08:45:00.985683 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.985891 kubelet[2783]: E0701 08:45:00.985875 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.985891 kubelet[2783]: W0701 08:45:00.985885 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.985891 kubelet[2783]: E0701 08:45:00.985893 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.986145 kubelet[2783]: E0701 08:45:00.986129 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.986145 kubelet[2783]: W0701 08:45:00.986139 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.986145 kubelet[2783]: E0701 08:45:00.986147 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.986435 kubelet[2783]: E0701 08:45:00.986418 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.986435 kubelet[2783]: W0701 08:45:00.986429 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.986491 kubelet[2783]: E0701 08:45:00.986438 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.986652 kubelet[2783]: E0701 08:45:00.986634 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.986652 kubelet[2783]: W0701 08:45:00.986647 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.986714 kubelet[2783]: E0701 08:45:00.986658 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.986908 kubelet[2783]: E0701 08:45:00.986888 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.986908 kubelet[2783]: W0701 08:45:00.986900 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.986908 kubelet[2783]: E0701 08:45:00.986908 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.987113 kubelet[2783]: E0701 08:45:00.987098 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.987113 kubelet[2783]: W0701 08:45:00.987108 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.987158 kubelet[2783]: E0701 08:45:00.987116 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.987328 kubelet[2783]: E0701 08:45:00.987311 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.987328 kubelet[2783]: W0701 08:45:00.987323 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.987386 kubelet[2783]: E0701 08:45:00.987333 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.987524 kubelet[2783]: E0701 08:45:00.987509 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.987524 kubelet[2783]: W0701 08:45:00.987519 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.987577 kubelet[2783]: E0701 08:45:00.987526 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.987727 kubelet[2783]: E0701 08:45:00.987712 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.987727 kubelet[2783]: W0701 08:45:00.987723 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.987791 kubelet[2783]: E0701 08:45:00.987730 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.987981 kubelet[2783]: E0701 08:45:00.987959 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.987981 kubelet[2783]: W0701 08:45:00.987974 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.988072 kubelet[2783]: E0701 08:45:00.987986 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.988272 kubelet[2783]: E0701 08:45:00.988256 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.988272 kubelet[2783]: W0701 08:45:00.988267 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.988334 kubelet[2783]: E0701 08:45:00.988277 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.988497 kubelet[2783]: E0701 08:45:00.988478 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.988497 kubelet[2783]: W0701 08:45:00.988492 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.988553 kubelet[2783]: E0701 08:45:00.988503 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.988739 kubelet[2783]: E0701 08:45:00.988720 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.988739 kubelet[2783]: W0701 08:45:00.988733 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.988808 kubelet[2783]: E0701 08:45:00.988745 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.988997 kubelet[2783]: E0701 08:45:00.988978 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.988997 kubelet[2783]: W0701 08:45:00.988993 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.989039 kubelet[2783]: E0701 08:45:00.989004 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.989277 kubelet[2783]: E0701 08:45:00.989259 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.989277 kubelet[2783]: W0701 08:45:00.989272 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.989333 kubelet[2783]: E0701 08:45:00.989282 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.989587 kubelet[2783]: E0701 08:45:00.989566 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.989587 kubelet[2783]: W0701 08:45:00.989581 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.989653 kubelet[2783]: E0701 08:45:00.989594 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:00.990373 kubelet[2783]: E0701 08:45:00.990313 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:00.990373 kubelet[2783]: W0701 08:45:00.990327 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:00.990373 kubelet[2783]: E0701 08:45:00.990340 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:01.000790 kubelet[2783]: E0701 08:45:01.000733 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:01.000790 kubelet[2783]: W0701 08:45:01.000754 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:01.000790 kubelet[2783]: E0701 08:45:01.000784 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:02.162321 kubelet[2783]: E0701 08:45:02.162237 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z5dkh" podUID="cea5ec18-e730-41e6-b2b5-7746f9389260"
Jul 1 08:45:02.341868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4006336161.mount: Deactivated successfully.
Jul 1 08:45:04.162099 kubelet[2783]: E0701 08:45:04.162034 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z5dkh" podUID="cea5ec18-e730-41e6-b2b5-7746f9389260"
Jul 1 08:45:04.333354 containerd[1589]: time="2025-07-01T08:45:04.333265162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:45:04.334413 containerd[1589]: time="2025-07-01T08:45:04.334188577Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Jul 1 08:45:04.335664 containerd[1589]: time="2025-07-01T08:45:04.335631257Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:45:04.337750 containerd[1589]: time="2025-07-01T08:45:04.337697509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:45:04.338211 containerd[1589]: time="2025-07-01T08:45:04.338149979Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 3.639213392s"
Jul 1 08:45:04.338211 containerd[1589]: time="2025-07-01T08:45:04.338200243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Jul 1 08:45:04.339322 containerd[1589]: time="2025-07-01T08:45:04.339270985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 1 08:45:04.351966 containerd[1589]: time="2025-07-01T08:45:04.351927092Z" level=info msg="CreateContainer within sandbox \"9a5458fb6465d74f32333d5174789afad253668e9d542f5bb8aa1715dc541291\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 1 08:45:04.360303 containerd[1589]: time="2025-07-01T08:45:04.360257091Z" level=info msg="Container ff315cbe20ed2596578e17e81755eb7a3724a8c75b2edbe7ebfa185ff2c4681f: CDI devices from CRI Config.CDIDevices: []"
Jul 1 08:45:04.369477 containerd[1589]: time="2025-07-01T08:45:04.369429443Z" level=info msg="CreateContainer within sandbox \"9a5458fb6465d74f32333d5174789afad253668e9d542f5bb8aa1715dc541291\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ff315cbe20ed2596578e17e81755eb7a3724a8c75b2edbe7ebfa185ff2c4681f\""
Jul 1 08:45:04.370139 containerd[1589]: time="2025-07-01T08:45:04.370094854Z" level=info msg="StartContainer for \"ff315cbe20ed2596578e17e81755eb7a3724a8c75b2edbe7ebfa185ff2c4681f\""
Jul 1 08:45:04.371396 containerd[1589]: time="2025-07-01T08:45:04.371318181Z" level=info msg="connecting to shim ff315cbe20ed2596578e17e81755eb7a3724a8c75b2edbe7ebfa185ff2c4681f" address="unix:///run/containerd/s/8fa82fb3909d8da6eb8d216aa3de1da3ba759219a6f8954dceae70add2bf001d" protocol=ttrpc version=3
Jul 1 08:45:04.396348 systemd[1]: Started cri-containerd-ff315cbe20ed2596578e17e81755eb7a3724a8c75b2edbe7ebfa185ff2c4681f.scope - libcontainer container ff315cbe20ed2596578e17e81755eb7a3724a8c75b2edbe7ebfa185ff2c4681f.
Jul 1 08:45:04.452390 containerd[1589]: time="2025-07-01T08:45:04.452194343Z" level=info msg="StartContainer for \"ff315cbe20ed2596578e17e81755eb7a3724a8c75b2edbe7ebfa185ff2c4681f\" returns successfully"
Jul 1 08:45:05.238980 kubelet[2783]: E0701 08:45:05.238931 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 1 08:45:05.249935 kubelet[2783]: I0701 08:45:05.249861 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-558f7cdcdd-nsgcz" podStartSLOduration=1.6089894519999999 podStartE2EDuration="5.249841854s" podCreationTimestamp="2025-07-01 08:45:00 +0000 UTC" firstStartedPulling="2025-07-01 08:45:00.698212075 +0000 UTC m=+21.633465332" lastFinishedPulling="2025-07-01 08:45:04.339064477 +0000 UTC m=+25.274317734" observedRunningTime="2025-07-01 08:45:05.24932853 +0000 UTC m=+26.184581787" watchObservedRunningTime="2025-07-01 08:45:05.249841854 +0000 UTC m=+26.185095121"
Jul 1 08:45:05.300708 kubelet[2783]: E0701 08:45:05.300655 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:05.300708 kubelet[2783]: W0701 08:45:05.300684 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:05.300708 kubelet[2783]: E0701 08:45:05.300709 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:05.300975 kubelet[2783]: E0701 08:45:05.300955 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:05.300975 kubelet[2783]: W0701 08:45:05.300968 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:05.301026 kubelet[2783]: E0701 08:45:05.300976 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:05.301246 kubelet[2783]: E0701 08:45:05.301212 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:05.301246 kubelet[2783]: W0701 08:45:05.301230 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:05.301246 kubelet[2783]: E0701 08:45:05.301241 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:05.301524 kubelet[2783]: E0701 08:45:05.301508 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:05.301524 kubelet[2783]: W0701 08:45:05.301520 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:05.301586 kubelet[2783]: E0701 08:45:05.301531 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:05.301749 kubelet[2783]: E0701 08:45:05.301733 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:05.301749 kubelet[2783]: W0701 08:45:05.301745 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:05.301814 kubelet[2783]: E0701 08:45:05.301755 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:05.301957 kubelet[2783]: E0701 08:45:05.301930 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:05.301957 kubelet[2783]: W0701 08:45:05.301942 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:05.301957 kubelet[2783]: E0701 08:45:05.301951 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:05.302147 kubelet[2783]: E0701 08:45:05.302131 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:05.302147 kubelet[2783]: W0701 08:45:05.302143 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:05.302232 kubelet[2783]: E0701 08:45:05.302153 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:05.302376 kubelet[2783]: E0701 08:45:05.302358 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:05.302376 kubelet[2783]: W0701 08:45:05.302371 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:05.302443 kubelet[2783]: E0701 08:45:05.302380 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:05.302619 kubelet[2783]: E0701 08:45:05.302585 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:05.302619 kubelet[2783]: W0701 08:45:05.302599 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:05.302619 kubelet[2783]: E0701 08:45:05.302608 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:05.302824 kubelet[2783]: E0701 08:45:05.302807 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:05.302824 kubelet[2783]: W0701 08:45:05.302821 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:05.302873 kubelet[2783]: E0701 08:45:05.302831 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:05.303044 kubelet[2783]: E0701 08:45:05.303015 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:05.303044 kubelet[2783]: W0701 08:45:05.303026 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:05.303044 kubelet[2783]: E0701 08:45:05.303035 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:05.303275 kubelet[2783]: E0701 08:45:05.303246 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:05.303275 kubelet[2783]: W0701 08:45:05.303259 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:05.303275 kubelet[2783]: E0701 08:45:05.303268 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 1 08:45:05.303467 kubelet[2783]: E0701 08:45:05.303451 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 1 08:45:05.303467 kubelet[2783]: W0701 08:45:05.303463 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 1 08:45:05.303515 kubelet[2783]: E0701 08:45:05.303473 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 1 08:45:05.303689 kubelet[2783]: E0701 08:45:05.303674 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:05.303689 kubelet[2783]: W0701 08:45:05.303685 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:05.303739 kubelet[2783]: E0701 08:45:05.303695 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:05.303886 kubelet[2783]: E0701 08:45:05.303870 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:05.303886 kubelet[2783]: W0701 08:45:05.303881 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:05.303936 kubelet[2783]: E0701 08:45:05.303891 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:05.319504 kubelet[2783]: E0701 08:45:05.319468 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:05.319504 kubelet[2783]: W0701 08:45:05.319487 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:05.319504 kubelet[2783]: E0701 08:45:05.319504 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:05.319839 kubelet[2783]: E0701 08:45:05.319795 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:05.319839 kubelet[2783]: W0701 08:45:05.319822 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:05.319971 kubelet[2783]: E0701 08:45:05.319848 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:05.320186 kubelet[2783]: E0701 08:45:05.320154 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:05.320186 kubelet[2783]: W0701 08:45:05.320181 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:05.320265 kubelet[2783]: E0701 08:45:05.320193 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:05.320530 kubelet[2783]: E0701 08:45:05.320497 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:05.320530 kubelet[2783]: W0701 08:45:05.320521 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:05.320587 kubelet[2783]: E0701 08:45:05.320541 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:05.320780 kubelet[2783]: E0701 08:45:05.320759 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:05.320780 kubelet[2783]: W0701 08:45:05.320774 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:05.320859 kubelet[2783]: E0701 08:45:05.320784 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:05.320999 kubelet[2783]: E0701 08:45:05.320983 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:05.320999 kubelet[2783]: W0701 08:45:05.320995 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:05.321052 kubelet[2783]: E0701 08:45:05.321005 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:05.321263 kubelet[2783]: E0701 08:45:05.321244 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:05.321263 kubelet[2783]: W0701 08:45:05.321256 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:05.321337 kubelet[2783]: E0701 08:45:05.321267 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:05.321483 kubelet[2783]: E0701 08:45:05.321466 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:05.321483 kubelet[2783]: W0701 08:45:05.321479 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:05.321547 kubelet[2783]: E0701 08:45:05.321489 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:05.321715 kubelet[2783]: E0701 08:45:05.321698 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:05.321715 kubelet[2783]: W0701 08:45:05.321711 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:05.321783 kubelet[2783]: E0701 08:45:05.321721 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:05.321939 kubelet[2783]: E0701 08:45:05.321922 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:05.321939 kubelet[2783]: W0701 08:45:05.321936 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:05.321990 kubelet[2783]: E0701 08:45:05.321947 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:05.322191 kubelet[2783]: E0701 08:45:05.322158 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:05.322191 kubelet[2783]: W0701 08:45:05.322187 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:05.322250 kubelet[2783]: E0701 08:45:05.322200 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:05.322429 kubelet[2783]: E0701 08:45:05.322412 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:05.322429 kubelet[2783]: W0701 08:45:05.322425 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:05.322481 kubelet[2783]: E0701 08:45:05.322435 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:05.322744 kubelet[2783]: E0701 08:45:05.322716 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:05.322744 kubelet[2783]: W0701 08:45:05.322733 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:05.322793 kubelet[2783]: E0701 08:45:05.322744 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:05.322936 kubelet[2783]: E0701 08:45:05.322922 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:05.322936 kubelet[2783]: W0701 08:45:05.322931 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:05.322988 kubelet[2783]: E0701 08:45:05.322938 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:05.323135 kubelet[2783]: E0701 08:45:05.323115 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:05.323135 kubelet[2783]: W0701 08:45:05.323125 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:05.323201 kubelet[2783]: E0701 08:45:05.323132 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:05.323348 kubelet[2783]: E0701 08:45:05.323333 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:05.323348 kubelet[2783]: W0701 08:45:05.323343 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:05.323394 kubelet[2783]: E0701 08:45:05.323350 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:05.323584 kubelet[2783]: E0701 08:45:05.323557 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:05.323584 kubelet[2783]: W0701 08:45:05.323566 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:05.323584 kubelet[2783]: E0701 08:45:05.323585 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:05.323902 kubelet[2783]: E0701 08:45:05.323878 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:05.323902 kubelet[2783]: W0701 08:45:05.323889 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:05.323902 kubelet[2783]: E0701 08:45:05.323897 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:06.163081 kubelet[2783]: E0701 08:45:06.162726 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z5dkh" podUID="cea5ec18-e730-41e6-b2b5-7746f9389260" Jul 1 08:45:06.239870 kubelet[2783]: I0701 08:45:06.239822 2783 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 1 08:45:06.240398 kubelet[2783]: E0701 08:45:06.240218 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:45:06.335454 kubelet[2783]: E0701 08:45:06.335359 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:06.335796 kubelet[2783]: W0701 08:45:06.335429 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:06.335796 kubelet[2783]: E0701 08:45:06.335771 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:06.336858 kubelet[2783]: E0701 08:45:06.336835 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:06.336919 kubelet[2783]: W0701 08:45:06.336861 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:06.336919 kubelet[2783]: E0701 08:45:06.336875 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:06.337328 kubelet[2783]: E0701 08:45:06.337287 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:06.337485 kubelet[2783]: W0701 08:45:06.337442 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:06.337485 kubelet[2783]: E0701 08:45:06.337485 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:06.338743 kubelet[2783]: E0701 08:45:06.338514 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:06.338743 kubelet[2783]: W0701 08:45:06.338585 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:06.338743 kubelet[2783]: E0701 08:45:06.338646 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:06.340205 kubelet[2783]: E0701 08:45:06.339300 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:06.340205 kubelet[2783]: W0701 08:45:06.339323 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:06.340205 kubelet[2783]: E0701 08:45:06.339351 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:06.340205 kubelet[2783]: E0701 08:45:06.339929 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:06.340205 kubelet[2783]: W0701 08:45:06.339953 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:06.340205 kubelet[2783]: E0701 08:45:06.339974 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:06.342452 kubelet[2783]: E0701 08:45:06.340450 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:06.342452 kubelet[2783]: W0701 08:45:06.340474 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:06.342452 kubelet[2783]: E0701 08:45:06.340503 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:06.342452 kubelet[2783]: E0701 08:45:06.341098 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:06.342452 kubelet[2783]: W0701 08:45:06.341114 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:06.342452 kubelet[2783]: E0701 08:45:06.341182 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:06.342452 kubelet[2783]: E0701 08:45:06.341838 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:06.342452 kubelet[2783]: W0701 08:45:06.341862 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:06.342452 kubelet[2783]: E0701 08:45:06.341935 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:06.344402 kubelet[2783]: E0701 08:45:06.342577 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:06.344402 kubelet[2783]: W0701 08:45:06.342625 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:06.344402 kubelet[2783]: E0701 08:45:06.342636 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:06.344402 kubelet[2783]: E0701 08:45:06.342871 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:06.344402 kubelet[2783]: W0701 08:45:06.342880 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:06.344402 kubelet[2783]: E0701 08:45:06.342888 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:06.344402 kubelet[2783]: E0701 08:45:06.343267 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:06.344402 kubelet[2783]: W0701 08:45:06.343277 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:06.344402 kubelet[2783]: E0701 08:45:06.343285 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:06.344402 kubelet[2783]: E0701 08:45:06.343947 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:06.345203 kubelet[2783]: W0701 08:45:06.343969 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:06.345203 kubelet[2783]: E0701 08:45:06.343993 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:06.345203 kubelet[2783]: E0701 08:45:06.344413 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:06.345203 kubelet[2783]: W0701 08:45:06.344458 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:06.345203 kubelet[2783]: E0701 08:45:06.344479 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:06.345203 kubelet[2783]: E0701 08:45:06.344828 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:06.345203 kubelet[2783]: W0701 08:45:06.344837 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:06.345203 kubelet[2783]: E0701 08:45:06.344846 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:06.345868 kubelet[2783]: E0701 08:45:06.345851 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:06.345985 kubelet[2783]: W0701 08:45:06.345965 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:06.345985 kubelet[2783]: E0701 08:45:06.345983 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:06.346755 kubelet[2783]: E0701 08:45:06.346738 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:06.346755 kubelet[2783]: W0701 08:45:06.346751 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:06.346818 kubelet[2783]: E0701 08:45:06.346761 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:06.346953 kubelet[2783]: E0701 08:45:06.346941 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:06.346953 kubelet[2783]: W0701 08:45:06.346947 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:06.347001 kubelet[2783]: E0701 08:45:06.346955 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:06.347405 kubelet[2783]: E0701 08:45:06.347392 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:06.347405 kubelet[2783]: W0701 08:45:06.347402 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:06.347471 kubelet[2783]: E0701 08:45:06.347412 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:06.347601 kubelet[2783]: E0701 08:45:06.347588 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:06.347601 kubelet[2783]: W0701 08:45:06.347598 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:06.347650 kubelet[2783]: E0701 08:45:06.347607 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:45:06.347783 kubelet[2783]: E0701 08:45:06.347772 2783 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:45:06.347783 kubelet[2783]: W0701 08:45:06.347780 2783 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:45:06.347850 kubelet[2783]: E0701 08:45:06.347787 2783 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:45:06.540903 containerd[1589]: time="2025-07-01T08:45:06.540727277Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:45:06.560194 containerd[1589]: time="2025-07-01T08:45:06.560116805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 1 08:45:06.562112 containerd[1589]: time="2025-07-01T08:45:06.562070905Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:45:06.565280 containerd[1589]: time="2025-07-01T08:45:06.565244015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:45:06.565746 containerd[1589]: time="2025-07-01T08:45:06.565687137Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 2.226371729s" Jul 1 08:45:06.565746 containerd[1589]: time="2025-07-01T08:45:06.565739966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 1 08:45:06.572958 containerd[1589]: time="2025-07-01T08:45:06.572897930Z" level=info msg="CreateContainer within sandbox \"c906ff5418823c2ba4dc8a957db10165d5b020235982786d5ae4db40add1c7b6\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 1 08:45:06.583646 containerd[1589]: time="2025-07-01T08:45:06.583597507Z" level=info msg="Container 2ce4f210d0e0f64341d6b3380b103824ee15068625ed14832105f83f4904b41f: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:45:06.592268 containerd[1589]: time="2025-07-01T08:45:06.592215694Z" level=info msg="CreateContainer within sandbox \"c906ff5418823c2ba4dc8a957db10165d5b020235982786d5ae4db40add1c7b6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2ce4f210d0e0f64341d6b3380b103824ee15068625ed14832105f83f4904b41f\"" Jul 1 08:45:06.592757 containerd[1589]: time="2025-07-01T08:45:06.592721795Z" level=info msg="StartContainer for \"2ce4f210d0e0f64341d6b3380b103824ee15068625ed14832105f83f4904b41f\"" Jul 1 08:45:06.594581 containerd[1589]: time="2025-07-01T08:45:06.594539409Z" level=info msg="connecting to shim 2ce4f210d0e0f64341d6b3380b103824ee15068625ed14832105f83f4904b41f" address="unix:///run/containerd/s/700c49e6c54393dd19755fd8072d0514a2a0dd942b90198c8676a509fdc1b3cd" protocol=ttrpc version=3 Jul 1 08:45:06.623386 systemd[1]: Started cri-containerd-2ce4f210d0e0f64341d6b3380b103824ee15068625ed14832105f83f4904b41f.scope - libcontainer container 2ce4f210d0e0f64341d6b3380b103824ee15068625ed14832105f83f4904b41f. Jul 1 08:45:06.670785 containerd[1589]: time="2025-07-01T08:45:06.670719764Z" level=info msg="StartContainer for \"2ce4f210d0e0f64341d6b3380b103824ee15068625ed14832105f83f4904b41f\" returns successfully" Jul 1 08:45:06.680634 systemd[1]: cri-containerd-2ce4f210d0e0f64341d6b3380b103824ee15068625ed14832105f83f4904b41f.scope: Deactivated successfully. 
Jul 1 08:45:06.682434 containerd[1589]: time="2025-07-01T08:45:06.682379644Z" level=info msg="received exit event container_id:\"2ce4f210d0e0f64341d6b3380b103824ee15068625ed14832105f83f4904b41f\" id:\"2ce4f210d0e0f64341d6b3380b103824ee15068625ed14832105f83f4904b41f\" pid:3577 exited_at:{seconds:1751359506 nanos:681942442}" Jul 1 08:45:06.682558 containerd[1589]: time="2025-07-01T08:45:06.682412726Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2ce4f210d0e0f64341d6b3380b103824ee15068625ed14832105f83f4904b41f\" id:\"2ce4f210d0e0f64341d6b3380b103824ee15068625ed14832105f83f4904b41f\" pid:3577 exited_at:{seconds:1751359506 nanos:681942442}" Jul 1 08:45:06.709351 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ce4f210d0e0f64341d6b3380b103824ee15068625ed14832105f83f4904b41f-rootfs.mount: Deactivated successfully. Jul 1 08:45:07.246207 containerd[1589]: time="2025-07-01T08:45:07.246131634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 1 08:45:08.163003 kubelet[2783]: E0701 08:45:08.162923 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z5dkh" podUID="cea5ec18-e730-41e6-b2b5-7746f9389260" Jul 1 08:45:10.162826 kubelet[2783]: E0701 08:45:10.162767 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z5dkh" podUID="cea5ec18-e730-41e6-b2b5-7746f9389260" Jul 1 08:45:11.528777 containerd[1589]: time="2025-07-01T08:45:11.528705046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 
08:45:11.530023 containerd[1589]: time="2025-07-01T08:45:11.529951326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 1 08:45:11.531784 containerd[1589]: time="2025-07-01T08:45:11.531712482Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:45:11.536815 containerd[1589]: time="2025-07-01T08:45:11.536679587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:45:11.537696 containerd[1589]: time="2025-07-01T08:45:11.537644989Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 4.291434798s" Jul 1 08:45:11.537763 containerd[1589]: time="2025-07-01T08:45:11.537698620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 1 08:45:11.544572 containerd[1589]: time="2025-07-01T08:45:11.544530706Z" level=info msg="CreateContainer within sandbox \"c906ff5418823c2ba4dc8a957db10165d5b020235982786d5ae4db40add1c7b6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 1 08:45:11.557585 containerd[1589]: time="2025-07-01T08:45:11.557507808Z" level=info msg="Container 3bb179f18e2d9f1275ab4568bbe85061a37add5d48648c7a9e0b3fdeda93d0c5: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:45:11.570922 containerd[1589]: time="2025-07-01T08:45:11.570834127Z" level=info msg="CreateContainer within sandbox 
\"c906ff5418823c2ba4dc8a957db10165d5b020235982786d5ae4db40add1c7b6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3bb179f18e2d9f1275ab4568bbe85061a37add5d48648c7a9e0b3fdeda93d0c5\"" Jul 1 08:45:11.571568 containerd[1589]: time="2025-07-01T08:45:11.571526165Z" level=info msg="StartContainer for \"3bb179f18e2d9f1275ab4568bbe85061a37add5d48648c7a9e0b3fdeda93d0c5\"" Jul 1 08:45:11.573302 containerd[1589]: time="2025-07-01T08:45:11.573276141Z" level=info msg="connecting to shim 3bb179f18e2d9f1275ab4568bbe85061a37add5d48648c7a9e0b3fdeda93d0c5" address="unix:///run/containerd/s/700c49e6c54393dd19755fd8072d0514a2a0dd942b90198c8676a509fdc1b3cd" protocol=ttrpc version=3 Jul 1 08:45:11.604568 systemd[1]: Started cri-containerd-3bb179f18e2d9f1275ab4568bbe85061a37add5d48648c7a9e0b3fdeda93d0c5.scope - libcontainer container 3bb179f18e2d9f1275ab4568bbe85061a37add5d48648c7a9e0b3fdeda93d0c5. Jul 1 08:45:11.659989 containerd[1589]: time="2025-07-01T08:45:11.659940129Z" level=info msg="StartContainer for \"3bb179f18e2d9f1275ab4568bbe85061a37add5d48648c7a9e0b3fdeda93d0c5\" returns successfully" Jul 1 08:45:12.162664 kubelet[2783]: E0701 08:45:12.162584 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z5dkh" podUID="cea5ec18-e730-41e6-b2b5-7746f9389260" Jul 1 08:45:13.031266 containerd[1589]: time="2025-07-01T08:45:13.031194762Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 1 08:45:13.033915 systemd[1]: cri-containerd-3bb179f18e2d9f1275ab4568bbe85061a37add5d48648c7a9e0b3fdeda93d0c5.scope: Deactivated successfully. 
Jul 1 08:45:13.034466 systemd[1]: cri-containerd-3bb179f18e2d9f1275ab4568bbe85061a37add5d48648c7a9e0b3fdeda93d0c5.scope: Consumed 648ms CPU time, 179.4M memory peak, 3.5M read from disk, 171.2M written to disk. Jul 1 08:45:13.035224 containerd[1589]: time="2025-07-01T08:45:13.035083872Z" level=info msg="received exit event container_id:\"3bb179f18e2d9f1275ab4568bbe85061a37add5d48648c7a9e0b3fdeda93d0c5\" id:\"3bb179f18e2d9f1275ab4568bbe85061a37add5d48648c7a9e0b3fdeda93d0c5\" pid:3637 exited_at:{seconds:1751359513 nanos:34781596}" Jul 1 08:45:13.035345 containerd[1589]: time="2025-07-01T08:45:13.035262909Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3bb179f18e2d9f1275ab4568bbe85061a37add5d48648c7a9e0b3fdeda93d0c5\" id:\"3bb179f18e2d9f1275ab4568bbe85061a37add5d48648c7a9e0b3fdeda93d0c5\" pid:3637 exited_at:{seconds:1751359513 nanos:34781596}" Jul 1 08:45:13.058469 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bb179f18e2d9f1275ab4568bbe85061a37add5d48648c7a9e0b3fdeda93d0c5-rootfs.mount: Deactivated successfully. Jul 1 08:45:13.130808 kubelet[2783]: I0701 08:45:13.130770 2783 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 1 08:45:13.457344 systemd[1]: Created slice kubepods-besteffort-podf6b14933_5394_43aa_8cf9_27f4bd274718.slice - libcontainer container kubepods-besteffort-podf6b14933_5394_43aa_8cf9_27f4bd274718.slice. Jul 1 08:45:13.471941 systemd[1]: Created slice kubepods-besteffort-pod39668e56_eacf_4877_9ea0_0f50aa91c90a.slice - libcontainer container kubepods-besteffort-pod39668e56_eacf_4877_9ea0_0f50aa91c90a.slice. Jul 1 08:45:13.481545 systemd[1]: Created slice kubepods-besteffort-podf2dbd54a_1f36_4785_8095_5a4c24a539ed.slice - libcontainer container kubepods-besteffort-podf2dbd54a_1f36_4785_8095_5a4c24a539ed.slice. 
Jul 1 08:45:13.489197 systemd[1]: Created slice kubepods-besteffort-pod879479c0_ad20_4f01_ad04_7c7296882080.slice - libcontainer container kubepods-besteffort-pod879479c0_ad20_4f01_ad04_7c7296882080.slice. Jul 1 08:45:13.496587 systemd[1]: Created slice kubepods-burstable-pod1e68482e_ac4f_44ab_b782_a089c10516f3.slice - libcontainer container kubepods-burstable-pod1e68482e_ac4f_44ab_b782_a089c10516f3.slice. Jul 1 08:45:13.499900 kubelet[2783]: I0701 08:45:13.499297 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/879479c0-ad20-4f01-ad04-7c7296882080-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-jm5hg\" (UID: \"879479c0-ad20-4f01-ad04-7c7296882080\") " pod="calico-system/goldmane-768f4c5c69-jm5hg" Jul 1 08:45:13.501286 kubelet[2783]: I0701 08:45:13.500436 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpwmb\" (UniqueName: \"kubernetes.io/projected/879479c0-ad20-4f01-ad04-7c7296882080-kube-api-access-mpwmb\") pod \"goldmane-768f4c5c69-jm5hg\" (UID: \"879479c0-ad20-4f01-ad04-7c7296882080\") " pod="calico-system/goldmane-768f4c5c69-jm5hg" Jul 1 08:45:13.501286 kubelet[2783]: I0701 08:45:13.500481 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m69pz\" (UniqueName: \"kubernetes.io/projected/c1984c03-0869-4331-a84e-10305a971a43-kube-api-access-m69pz\") pod \"coredns-674b8bbfcf-9vwlh\" (UID: \"c1984c03-0869-4331-a84e-10305a971a43\") " pod="kube-system/coredns-674b8bbfcf-9vwlh" Jul 1 08:45:13.501286 kubelet[2783]: I0701 08:45:13.500517 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djpfw\" (UniqueName: \"kubernetes.io/projected/f6b14933-5394-43aa-8cf9-27f4bd274718-kube-api-access-djpfw\") pod \"calico-apiserver-5d4b78f4d8-gwfrj\" (UID: 
\"f6b14933-5394-43aa-8cf9-27f4bd274718\") " pod="calico-apiserver/calico-apiserver-5d4b78f4d8-gwfrj" Jul 1 08:45:13.501286 kubelet[2783]: I0701 08:45:13.500543 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfd49\" (UniqueName: \"kubernetes.io/projected/39668e56-eacf-4877-9ea0-0f50aa91c90a-kube-api-access-rfd49\") pod \"calico-apiserver-5d4b78f4d8-8h9b6\" (UID: \"39668e56-eacf-4877-9ea0-0f50aa91c90a\") " pod="calico-apiserver/calico-apiserver-5d4b78f4d8-8h9b6" Jul 1 08:45:13.501286 kubelet[2783]: I0701 08:45:13.500569 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f6b14933-5394-43aa-8cf9-27f4bd274718-calico-apiserver-certs\") pod \"calico-apiserver-5d4b78f4d8-gwfrj\" (UID: \"f6b14933-5394-43aa-8cf9-27f4bd274718\") " pod="calico-apiserver/calico-apiserver-5d4b78f4d8-gwfrj" Jul 1 08:45:13.501506 kubelet[2783]: I0701 08:45:13.500593 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2dbd54a-1f36-4785-8095-5a4c24a539ed-whisker-ca-bundle\") pod \"whisker-5744ddf9d7-8pdnz\" (UID: \"f2dbd54a-1f36-4785-8095-5a4c24a539ed\") " pod="calico-system/whisker-5744ddf9d7-8pdnz" Jul 1 08:45:13.501506 kubelet[2783]: I0701 08:45:13.500618 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd0b8d7d-cab8-4324-84d3-f9b60106f80e-tigera-ca-bundle\") pod \"calico-kube-controllers-78898dc79-xzxvq\" (UID: \"bd0b8d7d-cab8-4324-84d3-f9b60106f80e\") " pod="calico-system/calico-kube-controllers-78898dc79-xzxvq" Jul 1 08:45:13.501506 kubelet[2783]: I0701 08:45:13.500638 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlrg8\" 
(UniqueName: \"kubernetes.io/projected/bd0b8d7d-cab8-4324-84d3-f9b60106f80e-kube-api-access-hlrg8\") pod \"calico-kube-controllers-78898dc79-xzxvq\" (UID: \"bd0b8d7d-cab8-4324-84d3-f9b60106f80e\") " pod="calico-system/calico-kube-controllers-78898dc79-xzxvq" Jul 1 08:45:13.501506 kubelet[2783]: I0701 08:45:13.500668 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/879479c0-ad20-4f01-ad04-7c7296882080-config\") pod \"goldmane-768f4c5c69-jm5hg\" (UID: \"879479c0-ad20-4f01-ad04-7c7296882080\") " pod="calico-system/goldmane-768f4c5c69-jm5hg" Jul 1 08:45:13.501506 kubelet[2783]: I0701 08:45:13.500691 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/879479c0-ad20-4f01-ad04-7c7296882080-goldmane-key-pair\") pod \"goldmane-768f4c5c69-jm5hg\" (UID: \"879479c0-ad20-4f01-ad04-7c7296882080\") " pod="calico-system/goldmane-768f4c5c69-jm5hg" Jul 1 08:45:13.501704 kubelet[2783]: I0701 08:45:13.500714 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f2dbd54a-1f36-4785-8095-5a4c24a539ed-whisker-backend-key-pair\") pod \"whisker-5744ddf9d7-8pdnz\" (UID: \"f2dbd54a-1f36-4785-8095-5a4c24a539ed\") " pod="calico-system/whisker-5744ddf9d7-8pdnz" Jul 1 08:45:13.501704 kubelet[2783]: I0701 08:45:13.500735 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65988\" (UniqueName: \"kubernetes.io/projected/f2dbd54a-1f36-4785-8095-5a4c24a539ed-kube-api-access-65988\") pod \"whisker-5744ddf9d7-8pdnz\" (UID: \"f2dbd54a-1f36-4785-8095-5a4c24a539ed\") " pod="calico-system/whisker-5744ddf9d7-8pdnz" Jul 1 08:45:13.501704 kubelet[2783]: I0701 08:45:13.500762 2783 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e68482e-ac4f-44ab-b782-a089c10516f3-config-volume\") pod \"coredns-674b8bbfcf-r4llv\" (UID: \"1e68482e-ac4f-44ab-b782-a089c10516f3\") " pod="kube-system/coredns-674b8bbfcf-r4llv" Jul 1 08:45:13.501704 kubelet[2783]: I0701 08:45:13.500792 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4s6c\" (UniqueName: \"kubernetes.io/projected/1e68482e-ac4f-44ab-b782-a089c10516f3-kube-api-access-x4s6c\") pod \"coredns-674b8bbfcf-r4llv\" (UID: \"1e68482e-ac4f-44ab-b782-a089c10516f3\") " pod="kube-system/coredns-674b8bbfcf-r4llv" Jul 1 08:45:13.501704 kubelet[2783]: I0701 08:45:13.500812 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/39668e56-eacf-4877-9ea0-0f50aa91c90a-calico-apiserver-certs\") pod \"calico-apiserver-5d4b78f4d8-8h9b6\" (UID: \"39668e56-eacf-4877-9ea0-0f50aa91c90a\") " pod="calico-apiserver/calico-apiserver-5d4b78f4d8-8h9b6" Jul 1 08:45:13.501907 kubelet[2783]: I0701 08:45:13.500838 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1984c03-0869-4331-a84e-10305a971a43-config-volume\") pod \"coredns-674b8bbfcf-9vwlh\" (UID: \"c1984c03-0869-4331-a84e-10305a971a43\") " pod="kube-system/coredns-674b8bbfcf-9vwlh" Jul 1 08:45:13.506187 systemd[1]: Created slice kubepods-besteffort-podbd0b8d7d_cab8_4324_84d3_f9b60106f80e.slice - libcontainer container kubepods-besteffort-podbd0b8d7d_cab8_4324_84d3_f9b60106f80e.slice. Jul 1 08:45:13.514917 systemd[1]: Created slice kubepods-burstable-podc1984c03_0869_4331_a84e_10305a971a43.slice - libcontainer container kubepods-burstable-podc1984c03_0869_4331_a84e_10305a971a43.slice. 
Jul 1 08:45:13.766805 containerd[1589]: time="2025-07-01T08:45:13.766646130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d4b78f4d8-gwfrj,Uid:f6b14933-5394-43aa-8cf9-27f4bd274718,Namespace:calico-apiserver,Attempt:0,}" Jul 1 08:45:13.779865 containerd[1589]: time="2025-07-01T08:45:13.779801112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d4b78f4d8-8h9b6,Uid:39668e56-eacf-4877-9ea0-0f50aa91c90a,Namespace:calico-apiserver,Attempt:0,}" Jul 1 08:45:13.786964 containerd[1589]: time="2025-07-01T08:45:13.786906099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5744ddf9d7-8pdnz,Uid:f2dbd54a-1f36-4785-8095-5a4c24a539ed,Namespace:calico-system,Attempt:0,}" Jul 1 08:45:13.793140 containerd[1589]: time="2025-07-01T08:45:13.793089506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-jm5hg,Uid:879479c0-ad20-4f01-ad04-7c7296882080,Namespace:calico-system,Attempt:0,}" Jul 1 08:45:13.803812 kubelet[2783]: E0701 08:45:13.803571 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:45:13.804308 containerd[1589]: time="2025-07-01T08:45:13.804278388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r4llv,Uid:1e68482e-ac4f-44ab-b782-a089c10516f3,Namespace:kube-system,Attempt:0,}" Jul 1 08:45:13.813426 containerd[1589]: time="2025-07-01T08:45:13.813239058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78898dc79-xzxvq,Uid:bd0b8d7d-cab8-4324-84d3-f9b60106f80e,Namespace:calico-system,Attempt:0,}" Jul 1 08:45:13.819015 kubelet[2783]: E0701 08:45:13.818489 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:45:13.822725 containerd[1589]: 
time="2025-07-01T08:45:13.822681231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9vwlh,Uid:c1984c03-0869-4331-a84e-10305a971a43,Namespace:kube-system,Attempt:0,}" Jul 1 08:45:13.897536 containerd[1589]: time="2025-07-01T08:45:13.897473782Z" level=error msg="Failed to destroy network for sandbox \"a3791565f63d98490e26658c1119bfacce298de694dee54697761b4ee6f84445\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:45:13.926076 containerd[1589]: time="2025-07-01T08:45:13.916238403Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5744ddf9d7-8pdnz,Uid:f2dbd54a-1f36-4785-8095-5a4c24a539ed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3791565f63d98490e26658c1119bfacce298de694dee54697761b4ee6f84445\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:45:13.926443 containerd[1589]: time="2025-07-01T08:45:13.923636962Z" level=error msg="Failed to destroy network for sandbox \"f06935f4b486b01e70066bdfbc8ca63b6dd02b846d0c564d2066836e0098de7b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:45:13.927560 containerd[1589]: time="2025-07-01T08:45:13.927523537Z" level=error msg="Failed to destroy network for sandbox \"151d8a38134e033b0b9797a252cf44f4a21c5d8be8a62688d03fd84c9dad0819\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:45:13.928662 containerd[1589]: 
time="2025-07-01T08:45:13.928628170Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d4b78f4d8-8h9b6,Uid:39668e56-eacf-4877-9ea0-0f50aa91c90a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f06935f4b486b01e70066bdfbc8ca63b6dd02b846d0c564d2066836e0098de7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:45:13.929863 containerd[1589]: time="2025-07-01T08:45:13.929828673Z" level=error msg="Failed to destroy network for sandbox \"4cf9628d559f07bcf19ea43887ad2969645df81eb04a374d4ca5bf047408b563\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:45:13.932084 containerd[1589]: time="2025-07-01T08:45:13.931876307Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-jm5hg,Uid:879479c0-ad20-4f01-ad04-7c7296882080,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"151d8a38134e033b0b9797a252cf44f4a21c5d8be8a62688d03fd84c9dad0819\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:45:13.935296 containerd[1589]: time="2025-07-01T08:45:13.935269116Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r4llv,Uid:1e68482e-ac4f-44ab-b782-a089c10516f3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cf9628d559f07bcf19ea43887ad2969645df81eb04a374d4ca5bf047408b563\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:45:13.936473 containerd[1589]: time="2025-07-01T08:45:13.936416710Z" level=error msg="Failed to destroy network for sandbox \"02585a85e9ab243931a7063a49d836047d6c02ecd70a670c3d478841c2abf88e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:45:13.938242 containerd[1589]: time="2025-07-01T08:45:13.938133653Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d4b78f4d8-gwfrj,Uid:f6b14933-5394-43aa-8cf9-27f4bd274718,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"02585a85e9ab243931a7063a49d836047d6c02ecd70a670c3d478841c2abf88e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:45:13.939131 containerd[1589]: time="2025-07-01T08:45:13.939093814Z" level=error msg="Failed to destroy network for sandbox \"cd40202e534f298a81b0032d9890d94ca91acc6046d016e443fd721e4374bf2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:45:13.939528 kubelet[2783]: E0701 08:45:13.939479 2783 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f06935f4b486b01e70066bdfbc8ca63b6dd02b846d0c564d2066836e0098de7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:45:13.939603 kubelet[2783]: E0701 08:45:13.939561 2783 log.go:32] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02585a85e9ab243931a7063a49d836047d6c02ecd70a670c3d478841c2abf88e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:45:13.939603 kubelet[2783]: E0701 08:45:13.939567 2783 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f06935f4b486b01e70066bdfbc8ca63b6dd02b846d0c564d2066836e0098de7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d4b78f4d8-8h9b6" Jul 1 08:45:13.939603 kubelet[2783]: E0701 08:45:13.939586 2783 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02585a85e9ab243931a7063a49d836047d6c02ecd70a670c3d478841c2abf88e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d4b78f4d8-gwfrj" Jul 1 08:45:13.939772 kubelet[2783]: E0701 08:45:13.939603 2783 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02585a85e9ab243931a7063a49d836047d6c02ecd70a670c3d478841c2abf88e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d4b78f4d8-gwfrj" Jul 1 08:45:13.939772 kubelet[2783]: E0701 08:45:13.939597 2783 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"f06935f4b486b01e70066bdfbc8ca63b6dd02b846d0c564d2066836e0098de7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d4b78f4d8-8h9b6" Jul 1 08:45:13.939772 kubelet[2783]: E0701 08:45:13.939477 2783 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"151d8a38134e033b0b9797a252cf44f4a21c5d8be8a62688d03fd84c9dad0819\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:45:13.939772 kubelet[2783]: E0701 08:45:13.939649 2783 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"151d8a38134e033b0b9797a252cf44f4a21c5d8be8a62688d03fd84c9dad0819\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-jm5hg" Jul 1 08:45:13.939907 kubelet[2783]: E0701 08:45:13.939660 2783 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"151d8a38134e033b0b9797a252cf44f4a21c5d8be8a62688d03fd84c9dad0819\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-jm5hg" Jul 1 08:45:13.939907 kubelet[2783]: E0701 08:45:13.939654 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-5d4b78f4d8-gwfrj_calico-apiserver(f6b14933-5394-43aa-8cf9-27f4bd274718)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d4b78f4d8-gwfrj_calico-apiserver(f6b14933-5394-43aa-8cf9-27f4bd274718)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02585a85e9ab243931a7063a49d836047d6c02ecd70a670c3d478841c2abf88e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d4b78f4d8-gwfrj" podUID="f6b14933-5394-43aa-8cf9-27f4bd274718" Jul 1 08:45:13.939977 kubelet[2783]: E0701 08:45:13.939688 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d4b78f4d8-8h9b6_calico-apiserver(39668e56-eacf-4877-9ea0-0f50aa91c90a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d4b78f4d8-8h9b6_calico-apiserver(39668e56-eacf-4877-9ea0-0f50aa91c90a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f06935f4b486b01e70066bdfbc8ca63b6dd02b846d0c564d2066836e0098de7b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d4b78f4d8-8h9b6" podUID="39668e56-eacf-4877-9ea0-0f50aa91c90a" Jul 1 08:45:13.939977 kubelet[2783]: E0701 08:45:13.939685 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-jm5hg_calico-system(879479c0-ad20-4f01-ad04-7c7296882080)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-jm5hg_calico-system(879479c0-ad20-4f01-ad04-7c7296882080)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"151d8a38134e033b0b9797a252cf44f4a21c5d8be8a62688d03fd84c9dad0819\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-jm5hg" podUID="879479c0-ad20-4f01-ad04-7c7296882080" Jul 1 08:45:13.939977 kubelet[2783]: E0701 08:45:13.939722 2783 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3791565f63d98490e26658c1119bfacce298de694dee54697761b4ee6f84445\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:45:13.940079 kubelet[2783]: E0701 08:45:13.939756 2783 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3791565f63d98490e26658c1119bfacce298de694dee54697761b4ee6f84445\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5744ddf9d7-8pdnz" Jul 1 08:45:13.940079 kubelet[2783]: E0701 08:45:13.939770 2783 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3791565f63d98490e26658c1119bfacce298de694dee54697761b4ee6f84445\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5744ddf9d7-8pdnz" Jul 1 08:45:13.940079 kubelet[2783]: E0701 08:45:13.939801 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"whisker-5744ddf9d7-8pdnz_calico-system(f2dbd54a-1f36-4785-8095-5a4c24a539ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5744ddf9d7-8pdnz_calico-system(f2dbd54a-1f36-4785-8095-5a4c24a539ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3791565f63d98490e26658c1119bfacce298de694dee54697761b4ee6f84445\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5744ddf9d7-8pdnz" podUID="f2dbd54a-1f36-4785-8095-5a4c24a539ed" Jul 1 08:45:13.940184 kubelet[2783]: E0701 08:45:13.939966 2783 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cf9628d559f07bcf19ea43887ad2969645df81eb04a374d4ca5bf047408b563\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:45:13.940184 kubelet[2783]: E0701 08:45:13.940002 2783 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cf9628d559f07bcf19ea43887ad2969645df81eb04a374d4ca5bf047408b563\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-r4llv" Jul 1 08:45:13.940184 kubelet[2783]: E0701 08:45:13.940016 2783 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cf9628d559f07bcf19ea43887ad2969645df81eb04a374d4ca5bf047408b563\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-r4llv" Jul 1 08:45:13.940263 kubelet[2783]: E0701 08:45:13.940067 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-r4llv_kube-system(1e68482e-ac4f-44ab-b782-a089c10516f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-r4llv_kube-system(1e68482e-ac4f-44ab-b782-a089c10516f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4cf9628d559f07bcf19ea43887ad2969645df81eb04a374d4ca5bf047408b563\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-r4llv" podUID="1e68482e-ac4f-44ab-b782-a089c10516f3" Jul 1 08:45:13.940702 containerd[1589]: time="2025-07-01T08:45:13.940655066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78898dc79-xzxvq,Uid:bd0b8d7d-cab8-4324-84d3-f9b60106f80e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd40202e534f298a81b0032d9890d94ca91acc6046d016e443fd721e4374bf2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:45:13.940840 kubelet[2783]: E0701 08:45:13.940813 2783 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd40202e534f298a81b0032d9890d94ca91acc6046d016e443fd721e4374bf2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:45:13.940876 kubelet[2783]: E0701 08:45:13.940844 2783 kuberuntime_sandbox.go:70] "Failed to 
create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd40202e534f298a81b0032d9890d94ca91acc6046d016e443fd721e4374bf2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78898dc79-xzxvq" Jul 1 08:45:13.940876 kubelet[2783]: E0701 08:45:13.940858 2783 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd40202e534f298a81b0032d9890d94ca91acc6046d016e443fd721e4374bf2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78898dc79-xzxvq" Jul 1 08:45:13.940934 kubelet[2783]: E0701 08:45:13.940883 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78898dc79-xzxvq_calico-system(bd0b8d7d-cab8-4324-84d3-f9b60106f80e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78898dc79-xzxvq_calico-system(bd0b8d7d-cab8-4324-84d3-f9b60106f80e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd40202e534f298a81b0032d9890d94ca91acc6046d016e443fd721e4374bf2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78898dc79-xzxvq" podUID="bd0b8d7d-cab8-4324-84d3-f9b60106f80e" Jul 1 08:45:13.948206 containerd[1589]: time="2025-07-01T08:45:13.948122252Z" level=error msg="Failed to destroy network for sandbox \"09eae8372a113d0e3441c29870be4185f28ec8b87a4c1c595000e6a1e83bd8d5\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:45:13.949555 containerd[1589]: time="2025-07-01T08:45:13.949512451Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9vwlh,Uid:c1984c03-0869-4331-a84e-10305a971a43,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"09eae8372a113d0e3441c29870be4185f28ec8b87a4c1c595000e6a1e83bd8d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:45:13.949738 kubelet[2783]: E0701 08:45:13.949703 2783 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09eae8372a113d0e3441c29870be4185f28ec8b87a4c1c595000e6a1e83bd8d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:45:13.949813 kubelet[2783]: E0701 08:45:13.949751 2783 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09eae8372a113d0e3441c29870be4185f28ec8b87a4c1c595000e6a1e83bd8d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-9vwlh" Jul 1 08:45:13.949813 kubelet[2783]: E0701 08:45:13.949772 2783 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09eae8372a113d0e3441c29870be4185f28ec8b87a4c1c595000e6a1e83bd8d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-9vwlh" Jul 1 08:45:13.949885 kubelet[2783]: E0701 08:45:13.949817 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-9vwlh_kube-system(c1984c03-0869-4331-a84e-10305a971a43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-9vwlh_kube-system(c1984c03-0869-4331-a84e-10305a971a43)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09eae8372a113d0e3441c29870be4185f28ec8b87a4c1c595000e6a1e83bd8d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-9vwlh" podUID="c1984c03-0869-4331-a84e-10305a971a43" Jul 1 08:45:14.059746 systemd[1]: run-netns-cni\x2d5ae2cdbc\x2da3eb\x2de965\x2d0675\x2d460f6330fa64.mount: Deactivated successfully. Jul 1 08:45:14.168608 systemd[1]: Created slice kubepods-besteffort-podcea5ec18_e730_41e6_b2b5_7746f9389260.slice - libcontainer container kubepods-besteffort-podcea5ec18_e730_41e6_b2b5_7746f9389260.slice. 
Jul 1 08:45:14.171146 containerd[1589]: time="2025-07-01T08:45:14.171112787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z5dkh,Uid:cea5ec18-e730-41e6-b2b5-7746f9389260,Namespace:calico-system,Attempt:0,}" Jul 1 08:45:14.221687 containerd[1589]: time="2025-07-01T08:45:14.221619083Z" level=error msg="Failed to destroy network for sandbox \"239685739a3324d774dc3632a77aa583803faf4076ee20e9676b84eb47f15d9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:45:14.223313 containerd[1589]: time="2025-07-01T08:45:14.223279509Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z5dkh,Uid:cea5ec18-e730-41e6-b2b5-7746f9389260,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"239685739a3324d774dc3632a77aa583803faf4076ee20e9676b84eb47f15d9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:45:14.223566 kubelet[2783]: E0701 08:45:14.223506 2783 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"239685739a3324d774dc3632a77aa583803faf4076ee20e9676b84eb47f15d9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:45:14.223645 kubelet[2783]: E0701 08:45:14.223591 2783 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"239685739a3324d774dc3632a77aa583803faf4076ee20e9676b84eb47f15d9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z5dkh" Jul 1 08:45:14.223675 kubelet[2783]: E0701 08:45:14.223653 2783 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"239685739a3324d774dc3632a77aa583803faf4076ee20e9676b84eb47f15d9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z5dkh" Jul 1 08:45:14.223789 kubelet[2783]: E0701 08:45:14.223742 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z5dkh_calico-system(cea5ec18-e730-41e6-b2b5-7746f9389260)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z5dkh_calico-system(cea5ec18-e730-41e6-b2b5-7746f9389260)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"239685739a3324d774dc3632a77aa583803faf4076ee20e9676b84eb47f15d9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z5dkh" podUID="cea5ec18-e730-41e6-b2b5-7746f9389260" Jul 1 08:45:14.224884 systemd[1]: run-netns-cni\x2d3b1f215b\x2d8f41\x2d89cc\x2d4802\x2dae1312991ba6.mount: Deactivated successfully. 
Jul 1 08:45:14.266657 containerd[1589]: time="2025-07-01T08:45:14.266616772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 1 08:45:15.470518 kubelet[2783]: I0701 08:45:15.470425 2783 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 1 08:45:15.470996 kubelet[2783]: E0701 08:45:15.470827 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:45:16.269400 kubelet[2783]: E0701 08:45:16.269361 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:45:21.884201 systemd[1]: Started sshd@7-10.0.0.127:22-10.0.0.1:47158.service - OpenSSH per-connection server daemon (10.0.0.1:47158). Jul 1 08:45:21.954119 sshd[3946]: Accepted publickey for core from 10.0.0.1 port 47158 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:45:21.956938 sshd-session[3946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:45:21.962529 systemd-logind[1566]: New session 8 of user core. Jul 1 08:45:21.970307 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 1 08:45:22.106322 sshd[3949]: Connection closed by 10.0.0.1 port 47158 Jul 1 08:45:22.106612 sshd-session[3946]: pam_unix(sshd:session): session closed for user core Jul 1 08:45:22.110863 systemd[1]: sshd@7-10.0.0.127:22-10.0.0.1:47158.service: Deactivated successfully. Jul 1 08:45:22.113440 systemd[1]: session-8.scope: Deactivated successfully. Jul 1 08:45:22.115013 systemd-logind[1566]: Session 8 logged out. Waiting for processes to exit. Jul 1 08:45:22.116435 systemd-logind[1566]: Removed session 8. Jul 1 08:45:24.032642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3556066485.mount: Deactivated successfully. 
Jul 1 08:45:24.621752 containerd[1589]: time="2025-07-01T08:45:24.621690001Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:45:24.623157 containerd[1589]: time="2025-07-01T08:45:24.623137907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 1 08:45:24.624797 containerd[1589]: time="2025-07-01T08:45:24.624764399Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:45:24.626987 containerd[1589]: time="2025-07-01T08:45:24.626928649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:45:24.627489 containerd[1589]: time="2025-07-01T08:45:24.627440829Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 10.360785706s" Jul 1 08:45:24.627489 containerd[1589]: time="2025-07-01T08:45:24.627486956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 1 08:45:24.658299 containerd[1589]: time="2025-07-01T08:45:24.658251471Z" level=info msg="CreateContainer within sandbox \"c906ff5418823c2ba4dc8a957db10165d5b020235982786d5ae4db40add1c7b6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 1 08:45:24.670584 containerd[1589]: time="2025-07-01T08:45:24.670476284Z" level=info msg="Container 
ffc63f112ba90bfe68dab9f47cf7f6ac365000e24ed2f30fb5c845481099fea0: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:45:24.684160 containerd[1589]: time="2025-07-01T08:45:24.684105543Z" level=info msg="CreateContainer within sandbox \"c906ff5418823c2ba4dc8a957db10165d5b020235982786d5ae4db40add1c7b6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ffc63f112ba90bfe68dab9f47cf7f6ac365000e24ed2f30fb5c845481099fea0\"" Jul 1 08:45:24.684784 containerd[1589]: time="2025-07-01T08:45:24.684740514Z" level=info msg="StartContainer for \"ffc63f112ba90bfe68dab9f47cf7f6ac365000e24ed2f30fb5c845481099fea0\"" Jul 1 08:45:24.686231 containerd[1589]: time="2025-07-01T08:45:24.686203509Z" level=info msg="connecting to shim ffc63f112ba90bfe68dab9f47cf7f6ac365000e24ed2f30fb5c845481099fea0" address="unix:///run/containerd/s/700c49e6c54393dd19755fd8072d0514a2a0dd942b90198c8676a509fdc1b3cd" protocol=ttrpc version=3 Jul 1 08:45:24.717460 systemd[1]: Started cri-containerd-ffc63f112ba90bfe68dab9f47cf7f6ac365000e24ed2f30fb5c845481099fea0.scope - libcontainer container ffc63f112ba90bfe68dab9f47cf7f6ac365000e24ed2f30fb5c845481099fea0. Jul 1 08:45:25.167417 containerd[1589]: time="2025-07-01T08:45:25.167354053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d4b78f4d8-8h9b6,Uid:39668e56-eacf-4877-9ea0-0f50aa91c90a,Namespace:calico-apiserver,Attempt:0,}" Jul 1 08:45:25.167821 containerd[1589]: time="2025-07-01T08:45:25.167776585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78898dc79-xzxvq,Uid:bd0b8d7d-cab8-4324-84d3-f9b60106f80e,Namespace:calico-system,Attempt:0,}" Jul 1 08:45:25.168525 containerd[1589]: time="2025-07-01T08:45:25.168415514Z" level=info msg="StartContainer for \"ffc63f112ba90bfe68dab9f47cf7f6ac365000e24ed2f30fb5c845481099fea0\" returns successfully" Jul 1 08:45:25.199404 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Jul 1 08:45:25.200718 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 1 08:45:25.263139 containerd[1589]: time="2025-07-01T08:45:25.263074941Z" level=error msg="Failed to destroy network for sandbox \"a4eb994d2e745af66589361a8e7563ef667e016f5174b207c93f824abd117e1f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:45:25.265699 containerd[1589]: time="2025-07-01T08:45:25.265649400Z" level=error msg="Failed to destroy network for sandbox \"3b3dae291ee35e7251be3c1f12fed4727049a85c7704be9ded0cbb48acb21200\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:45:25.266226 containerd[1589]: time="2025-07-01T08:45:25.265783262Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78898dc79-xzxvq,Uid:bd0b8d7d-cab8-4324-84d3-f9b60106f80e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4eb994d2e745af66589361a8e7563ef667e016f5174b207c93f824abd117e1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:45:25.266441 kubelet[2783]: E0701 08:45:25.266301 2783 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4eb994d2e745af66589361a8e7563ef667e016f5174b207c93f824abd117e1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:45:25.266441 kubelet[2783]: E0701 08:45:25.266390 2783 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4eb994d2e745af66589361a8e7563ef667e016f5174b207c93f824abd117e1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78898dc79-xzxvq" Jul 1 08:45:25.266441 kubelet[2783]: E0701 08:45:25.266428 2783 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4eb994d2e745af66589361a8e7563ef667e016f5174b207c93f824abd117e1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78898dc79-xzxvq" Jul 1 08:45:25.267308 kubelet[2783]: E0701 08:45:25.266488 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78898dc79-xzxvq_calico-system(bd0b8d7d-cab8-4324-84d3-f9b60106f80e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78898dc79-xzxvq_calico-system(bd0b8d7d-cab8-4324-84d3-f9b60106f80e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4eb994d2e745af66589361a8e7563ef667e016f5174b207c93f824abd117e1f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78898dc79-xzxvq" podUID="bd0b8d7d-cab8-4324-84d3-f9b60106f80e" Jul 1 08:45:25.267285 systemd[1]: run-netns-cni\x2d90d4ef45\x2df01e\x2dab49\x2d7089\x2d496862b604ce.mount: Deactivated successfully. 
Jul 1 08:45:25.269886 containerd[1589]: time="2025-07-01T08:45:25.269630378Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d4b78f4d8-8h9b6,Uid:39668e56-eacf-4877-9ea0-0f50aa91c90a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b3dae291ee35e7251be3c1f12fed4727049a85c7704be9ded0cbb48acb21200\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:45:25.270410 kubelet[2783]: E0701 08:45:25.270321 2783 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b3dae291ee35e7251be3c1f12fed4727049a85c7704be9ded0cbb48acb21200\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:45:25.270410 kubelet[2783]: E0701 08:45:25.270374 2783 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b3dae291ee35e7251be3c1f12fed4727049a85c7704be9ded0cbb48acb21200\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d4b78f4d8-8h9b6" Jul 1 08:45:25.270410 kubelet[2783]: E0701 08:45:25.270394 2783 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b3dae291ee35e7251be3c1f12fed4727049a85c7704be9ded0cbb48acb21200\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-5d4b78f4d8-8h9b6" Jul 1 08:45:25.270683 kubelet[2783]: E0701 08:45:25.270431 2783 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d4b78f4d8-8h9b6_calico-apiserver(39668e56-eacf-4877-9ea0-0f50aa91c90a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d4b78f4d8-8h9b6_calico-apiserver(39668e56-eacf-4877-9ea0-0f50aa91c90a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b3dae291ee35e7251be3c1f12fed4727049a85c7704be9ded0cbb48acb21200\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d4b78f4d8-8h9b6" podUID="39668e56-eacf-4877-9ea0-0f50aa91c90a" Jul 1 08:45:25.271110 systemd[1]: run-netns-cni\x2d556dd9ed\x2dc9f4\x2d289a\x2ddf42\x2daf449d81baf5.mount: Deactivated successfully. 
Jul 1 08:45:25.384402 kubelet[2783]: I0701 08:45:25.383822 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65988\" (UniqueName: \"kubernetes.io/projected/f2dbd54a-1f36-4785-8095-5a4c24a539ed-kube-api-access-65988\") pod \"f2dbd54a-1f36-4785-8095-5a4c24a539ed\" (UID: \"f2dbd54a-1f36-4785-8095-5a4c24a539ed\") " Jul 1 08:45:25.384886 kubelet[2783]: I0701 08:45:25.384674 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f2dbd54a-1f36-4785-8095-5a4c24a539ed-whisker-backend-key-pair\") pod \"f2dbd54a-1f36-4785-8095-5a4c24a539ed\" (UID: \"f2dbd54a-1f36-4785-8095-5a4c24a539ed\") " Jul 1 08:45:25.386184 kubelet[2783]: I0701 08:45:25.385438 2783 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2dbd54a-1f36-4785-8095-5a4c24a539ed-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f2dbd54a-1f36-4785-8095-5a4c24a539ed" (UID: "f2dbd54a-1f36-4785-8095-5a4c24a539ed"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 1 08:45:25.386404 kubelet[2783]: I0701 08:45:25.386389 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2dbd54a-1f36-4785-8095-5a4c24a539ed-whisker-ca-bundle\") pod \"f2dbd54a-1f36-4785-8095-5a4c24a539ed\" (UID: \"f2dbd54a-1f36-4785-8095-5a4c24a539ed\") " Jul 1 08:45:25.387179 kubelet[2783]: I0701 08:45:25.386756 2783 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2dbd54a-1f36-4785-8095-5a4c24a539ed-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 1 08:45:25.391717 kubelet[2783]: I0701 08:45:25.391399 2783 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2dbd54a-1f36-4785-8095-5a4c24a539ed-kube-api-access-65988" (OuterVolumeSpecName: "kube-api-access-65988") pod "f2dbd54a-1f36-4785-8095-5a4c24a539ed" (UID: "f2dbd54a-1f36-4785-8095-5a4c24a539ed"). InnerVolumeSpecName "kube-api-access-65988". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 1 08:45:25.391966 kubelet[2783]: I0701 08:45:25.391943 2783 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2dbd54a-1f36-4785-8095-5a4c24a539ed-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f2dbd54a-1f36-4785-8095-5a4c24a539ed" (UID: "f2dbd54a-1f36-4785-8095-5a4c24a539ed"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 1 08:45:25.487629 kubelet[2783]: I0701 08:45:25.487499 2783 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f2dbd54a-1f36-4785-8095-5a4c24a539ed-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 1 08:45:25.487854 kubelet[2783]: I0701 08:45:25.487832 2783 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-65988\" (UniqueName: \"kubernetes.io/projected/f2dbd54a-1f36-4785-8095-5a4c24a539ed-kube-api-access-65988\") on node \"localhost\" DevicePath \"\"" Jul 1 08:45:25.510711 containerd[1589]: time="2025-07-01T08:45:25.510663915Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ffc63f112ba90bfe68dab9f47cf7f6ac365000e24ed2f30fb5c845481099fea0\" id:\"f9f8ffd30879810cb3e6c23e5c3dfffaa2e93d69c4593fb7d6b02376d06834cb\" pid:4095 exit_status:1 exited_at:{seconds:1751359525 nanos:510092293}" Jul 1 08:45:26.032704 systemd[1]: var-lib-kubelet-pods-f2dbd54a\x2d1f36\x2d4785\x2d8095\x2d5a4c24a539ed-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d65988.mount: Deactivated successfully. Jul 1 08:45:26.032833 systemd[1]: var-lib-kubelet-pods-f2dbd54a\x2d1f36\x2d4785\x2d8095\x2d5a4c24a539ed-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 1 08:45:26.163589 containerd[1589]: time="2025-07-01T08:45:26.163536432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d4b78f4d8-gwfrj,Uid:f6b14933-5394-43aa-8cf9-27f4bd274718,Namespace:calico-apiserver,Attempt:0,}" Jul 1 08:45:26.310685 systemd[1]: Removed slice kubepods-besteffort-podf2dbd54a_1f36_4785_8095_5a4c24a539ed.slice - libcontainer container kubepods-besteffort-podf2dbd54a_1f36_4785_8095_5a4c24a539ed.slice. 
Jul 1 08:45:26.391453 containerd[1589]: time="2025-07-01T08:45:26.391395240Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ffc63f112ba90bfe68dab9f47cf7f6ac365000e24ed2f30fb5c845481099fea0\" id:\"227c11f4e2ffa00397a5a1a8a1b2efdbe72a0a987f24d10421bec3febe2bec9c\" pid:4152 exit_status:1 exited_at:{seconds:1751359526 nanos:390965274}" Jul 1 08:45:26.617929 kubelet[2783]: I0701 08:45:26.617845 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kdmdh" podStartSLOduration=2.966937233 podStartE2EDuration="26.617824256s" podCreationTimestamp="2025-07-01 08:45:00 +0000 UTC" firstStartedPulling="2025-07-01 08:45:00.977351443 +0000 UTC m=+21.912604700" lastFinishedPulling="2025-07-01 08:45:24.628238456 +0000 UTC m=+45.563491723" observedRunningTime="2025-07-01 08:45:25.323871863 +0000 UTC m=+46.259125130" watchObservedRunningTime="2025-07-01 08:45:26.617824256 +0000 UTC m=+47.553077513" Jul 1 08:45:26.683915 systemd-networkd[1482]: calib2f7469fb11: Link UP Jul 1 08:45:26.684199 systemd-networkd[1482]: calib2f7469fb11: Gained carrier Jul 1 08:45:26.707791 containerd[1589]: 2025-07-01 08:45:26.194 [INFO][4118] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 1 08:45:26.707791 containerd[1589]: 2025-07-01 08:45:26.218 [INFO][4118] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5d4b78f4d8--gwfrj-eth0 calico-apiserver-5d4b78f4d8- calico-apiserver f6b14933-5394-43aa-8cf9-27f4bd274718 889 0 2025-07-01 08:44:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d4b78f4d8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5d4b78f4d8-gwfrj eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] calib2f7469fb11 [] [] }} ContainerID="62c4eb2f5544c2f09de7ac05d27a73ef78030fb6ad6e206ceeb987af612ff569" Namespace="calico-apiserver" Pod="calico-apiserver-5d4b78f4d8-gwfrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d4b78f4d8--gwfrj-" Jul 1 08:45:26.707791 containerd[1589]: 2025-07-01 08:45:26.218 [INFO][4118] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="62c4eb2f5544c2f09de7ac05d27a73ef78030fb6ad6e206ceeb987af612ff569" Namespace="calico-apiserver" Pod="calico-apiserver-5d4b78f4d8-gwfrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d4b78f4d8--gwfrj-eth0" Jul 1 08:45:26.707791 containerd[1589]: 2025-07-01 08:45:26.345 [INFO][4133] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62c4eb2f5544c2f09de7ac05d27a73ef78030fb6ad6e206ceeb987af612ff569" HandleID="k8s-pod-network.62c4eb2f5544c2f09de7ac05d27a73ef78030fb6ad6e206ceeb987af612ff569" Workload="localhost-k8s-calico--apiserver--5d4b78f4d8--gwfrj-eth0" Jul 1 08:45:26.708089 containerd[1589]: 2025-07-01 08:45:26.346 [INFO][4133] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="62c4eb2f5544c2f09de7ac05d27a73ef78030fb6ad6e206ceeb987af612ff569" HandleID="k8s-pod-network.62c4eb2f5544c2f09de7ac05d27a73ef78030fb6ad6e206ceeb987af612ff569" Workload="localhost-k8s-calico--apiserver--5d4b78f4d8--gwfrj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039dcc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5d4b78f4d8-gwfrj", "timestamp":"2025-07-01 08:45:26.345050713 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:45:26.708089 containerd[1589]: 2025-07-01 08:45:26.346 [INFO][4133] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM 
lock. Jul 1 08:45:26.708089 containerd[1589]: 2025-07-01 08:45:26.346 [INFO][4133] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 1 08:45:26.708089 containerd[1589]: 2025-07-01 08:45:26.346 [INFO][4133] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 1 08:45:26.708089 containerd[1589]: 2025-07-01 08:45:26.434 [INFO][4133] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.62c4eb2f5544c2f09de7ac05d27a73ef78030fb6ad6e206ceeb987af612ff569" host="localhost" Jul 1 08:45:26.708089 containerd[1589]: 2025-07-01 08:45:26.518 [INFO][4133] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 1 08:45:26.708089 containerd[1589]: 2025-07-01 08:45:26.621 [INFO][4133] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 1 08:45:26.708089 containerd[1589]: 2025-07-01 08:45:26.623 [INFO][4133] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 1 08:45:26.708089 containerd[1589]: 2025-07-01 08:45:26.625 [INFO][4133] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 1 08:45:26.708089 containerd[1589]: 2025-07-01 08:45:26.625 [INFO][4133] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.62c4eb2f5544c2f09de7ac05d27a73ef78030fb6ad6e206ceeb987af612ff569" host="localhost" Jul 1 08:45:26.708481 containerd[1589]: 2025-07-01 08:45:26.658 [INFO][4133] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.62c4eb2f5544c2f09de7ac05d27a73ef78030fb6ad6e206ceeb987af612ff569 Jul 1 08:45:26.708481 containerd[1589]: 2025-07-01 08:45:26.663 [INFO][4133] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.62c4eb2f5544c2f09de7ac05d27a73ef78030fb6ad6e206ceeb987af612ff569" host="localhost" Jul 1 08:45:26.708481 containerd[1589]: 2025-07-01 08:45:26.670 [INFO][4133] ipam/ipam.go 
1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.62c4eb2f5544c2f09de7ac05d27a73ef78030fb6ad6e206ceeb987af612ff569" host="localhost" Jul 1 08:45:26.708481 containerd[1589]: 2025-07-01 08:45:26.670 [INFO][4133] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.62c4eb2f5544c2f09de7ac05d27a73ef78030fb6ad6e206ceeb987af612ff569" host="localhost" Jul 1 08:45:26.708481 containerd[1589]: 2025-07-01 08:45:26.670 [INFO][4133] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:45:26.708481 containerd[1589]: 2025-07-01 08:45:26.670 [INFO][4133] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="62c4eb2f5544c2f09de7ac05d27a73ef78030fb6ad6e206ceeb987af612ff569" HandleID="k8s-pod-network.62c4eb2f5544c2f09de7ac05d27a73ef78030fb6ad6e206ceeb987af612ff569" Workload="localhost-k8s-calico--apiserver--5d4b78f4d8--gwfrj-eth0" Jul 1 08:45:26.708652 containerd[1589]: 2025-07-01 08:45:26.674 [INFO][4118] cni-plugin/k8s.go 418: Populated endpoint ContainerID="62c4eb2f5544c2f09de7ac05d27a73ef78030fb6ad6e206ceeb987af612ff569" Namespace="calico-apiserver" Pod="calico-apiserver-5d4b78f4d8-gwfrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d4b78f4d8--gwfrj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d4b78f4d8--gwfrj-eth0", GenerateName:"calico-apiserver-5d4b78f4d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6b14933-5394-43aa-8cf9-27f4bd274718", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 44, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d4b78f4d8", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5d4b78f4d8-gwfrj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib2f7469fb11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:45:26.708740 containerd[1589]: 2025-07-01 08:45:26.674 [INFO][4118] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="62c4eb2f5544c2f09de7ac05d27a73ef78030fb6ad6e206ceeb987af612ff569" Namespace="calico-apiserver" Pod="calico-apiserver-5d4b78f4d8-gwfrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d4b78f4d8--gwfrj-eth0" Jul 1 08:45:26.708740 containerd[1589]: 2025-07-01 08:45:26.674 [INFO][4118] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib2f7469fb11 ContainerID="62c4eb2f5544c2f09de7ac05d27a73ef78030fb6ad6e206ceeb987af612ff569" Namespace="calico-apiserver" Pod="calico-apiserver-5d4b78f4d8-gwfrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d4b78f4d8--gwfrj-eth0" Jul 1 08:45:26.708740 containerd[1589]: 2025-07-01 08:45:26.685 [INFO][4118] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62c4eb2f5544c2f09de7ac05d27a73ef78030fb6ad6e206ceeb987af612ff569" Namespace="calico-apiserver" Pod="calico-apiserver-5d4b78f4d8-gwfrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d4b78f4d8--gwfrj-eth0" Jul 1 08:45:26.708912 containerd[1589]: 2025-07-01 
08:45:26.685 [INFO][4118] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="62c4eb2f5544c2f09de7ac05d27a73ef78030fb6ad6e206ceeb987af612ff569" Namespace="calico-apiserver" Pod="calico-apiserver-5d4b78f4d8-gwfrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d4b78f4d8--gwfrj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d4b78f4d8--gwfrj-eth0", GenerateName:"calico-apiserver-5d4b78f4d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6b14933-5394-43aa-8cf9-27f4bd274718", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 44, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d4b78f4d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"62c4eb2f5544c2f09de7ac05d27a73ef78030fb6ad6e206ceeb987af612ff569", Pod:"calico-apiserver-5d4b78f4d8-gwfrj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib2f7469fb11", MAC:"56:86:10:2d:96:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:45:26.708991 containerd[1589]: 2025-07-01 08:45:26.697 [INFO][4118] cni-plugin/k8s.go 
532: Wrote updated endpoint to datastore ContainerID="62c4eb2f5544c2f09de7ac05d27a73ef78030fb6ad6e206ceeb987af612ff569" Namespace="calico-apiserver" Pod="calico-apiserver-5d4b78f4d8-gwfrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d4b78f4d8--gwfrj-eth0" Jul 1 08:45:26.719528 systemd[1]: Created slice kubepods-besteffort-pod2845dca7_5c23_4b7c_961c_21d0f3682988.slice - libcontainer container kubepods-besteffort-pod2845dca7_5c23_4b7c_961c_21d0f3682988.slice. Jul 1 08:45:26.798779 kubelet[2783]: I0701 08:45:26.798655 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2845dca7-5c23-4b7c-961c-21d0f3682988-whisker-ca-bundle\") pod \"whisker-84ff7c8cdf-hgbsf\" (UID: \"2845dca7-5c23-4b7c-961c-21d0f3682988\") " pod="calico-system/whisker-84ff7c8cdf-hgbsf" Jul 1 08:45:26.798779 kubelet[2783]: I0701 08:45:26.798720 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btp4f\" (UniqueName: \"kubernetes.io/projected/2845dca7-5c23-4b7c-961c-21d0f3682988-kube-api-access-btp4f\") pod \"whisker-84ff7c8cdf-hgbsf\" (UID: \"2845dca7-5c23-4b7c-961c-21d0f3682988\") " pod="calico-system/whisker-84ff7c8cdf-hgbsf" Jul 1 08:45:26.798779 kubelet[2783]: I0701 08:45:26.798746 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2845dca7-5c23-4b7c-961c-21d0f3682988-whisker-backend-key-pair\") pod \"whisker-84ff7c8cdf-hgbsf\" (UID: \"2845dca7-5c23-4b7c-961c-21d0f3682988\") " pod="calico-system/whisker-84ff7c8cdf-hgbsf" Jul 1 08:45:26.908327 containerd[1589]: time="2025-07-01T08:45:26.905493703Z" level=info msg="connecting to shim 62c4eb2f5544c2f09de7ac05d27a73ef78030fb6ad6e206ceeb987af612ff569" address="unix:///run/containerd/s/fd04f07c19731b73b5ad025178575ace3f7da9e2005ebfda026a080371b62f4f" 
namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:45:26.970565 systemd[1]: Started cri-containerd-62c4eb2f5544c2f09de7ac05d27a73ef78030fb6ad6e206ceeb987af612ff569.scope - libcontainer container 62c4eb2f5544c2f09de7ac05d27a73ef78030fb6ad6e206ceeb987af612ff569. Jul 1 08:45:26.988335 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 1 08:45:27.028120 containerd[1589]: time="2025-07-01T08:45:27.028065332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84ff7c8cdf-hgbsf,Uid:2845dca7-5c23-4b7c-961c-21d0f3682988,Namespace:calico-system,Attempt:0,}" Jul 1 08:45:27.128969 systemd[1]: Started sshd@8-10.0.0.127:22-10.0.0.1:47170.service - OpenSSH per-connection server daemon (10.0.0.1:47170). Jul 1 08:45:27.163529 kubelet[2783]: E0701 08:45:27.163134 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:45:27.164543 containerd[1589]: time="2025-07-01T08:45:27.164466029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-jm5hg,Uid:879479c0-ad20-4f01-ad04-7c7296882080,Namespace:calico-system,Attempt:0,}" Jul 1 08:45:27.165205 containerd[1589]: time="2025-07-01T08:45:27.164961050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z5dkh,Uid:cea5ec18-e730-41e6-b2b5-7746f9389260,Namespace:calico-system,Attempt:0,}" Jul 1 08:45:27.165303 containerd[1589]: time="2025-07-01T08:45:27.165270904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r4llv,Uid:1e68482e-ac4f-44ab-b782-a089c10516f3,Namespace:kube-system,Attempt:0,}" Jul 1 08:45:27.166494 kubelet[2783]: I0701 08:45:27.166463 2783 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2dbd54a-1f36-4785-8095-5a4c24a539ed" path="/var/lib/kubelet/pods/f2dbd54a-1f36-4785-8095-5a4c24a539ed/volumes" Jul 1 
08:45:27.243147 containerd[1589]: time="2025-07-01T08:45:27.242911576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d4b78f4d8-gwfrj,Uid:f6b14933-5394-43aa-8cf9-27f4bd274718,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"62c4eb2f5544c2f09de7ac05d27a73ef78030fb6ad6e206ceeb987af612ff569\"" Jul 1 08:45:27.260962 containerd[1589]: time="2025-07-01T08:45:27.260861743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 1 08:45:27.343334 systemd-networkd[1482]: vxlan.calico: Link UP Jul 1 08:45:27.343799 systemd-networkd[1482]: vxlan.calico: Gained carrier Jul 1 08:45:27.391558 sshd[4349]: Accepted publickey for core from 10.0.0.1 port 47170 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:45:27.393086 sshd-session[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:45:27.405370 systemd-logind[1566]: New session 9 of user core. Jul 1 08:45:27.410332 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jul 1 08:45:27.480124 systemd-networkd[1482]: calic06c38dac43: Link UP Jul 1 08:45:27.481333 systemd-networkd[1482]: calic06c38dac43: Gained carrier Jul 1 08:45:27.498244 containerd[1589]: 2025-07-01 08:45:27.373 [INFO][4410] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--z5dkh-eth0 csi-node-driver- calico-system cea5ec18-e730-41e6-b2b5-7746f9389260 769 0 2025-07-01 08:45:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-z5dkh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic06c38dac43 [] [] }} ContainerID="366137c54142c1f493b57d4ae9ab14f18d38b299d787cf6d494c2e5c8ce6daad" Namespace="calico-system" Pod="csi-node-driver-z5dkh" WorkloadEndpoint="localhost-k8s-csi--node--driver--z5dkh-" Jul 1 08:45:27.498244 containerd[1589]: 2025-07-01 08:45:27.373 [INFO][4410] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="366137c54142c1f493b57d4ae9ab14f18d38b299d787cf6d494c2e5c8ce6daad" Namespace="calico-system" Pod="csi-node-driver-z5dkh" WorkloadEndpoint="localhost-k8s-csi--node--driver--z5dkh-eth0" Jul 1 08:45:27.498244 containerd[1589]: 2025-07-01 08:45:27.425 [INFO][4455] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="366137c54142c1f493b57d4ae9ab14f18d38b299d787cf6d494c2e5c8ce6daad" HandleID="k8s-pod-network.366137c54142c1f493b57d4ae9ab14f18d38b299d787cf6d494c2e5c8ce6daad" Workload="localhost-k8s-csi--node--driver--z5dkh-eth0" Jul 1 08:45:27.498566 containerd[1589]: 2025-07-01 08:45:27.425 [INFO][4455] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="366137c54142c1f493b57d4ae9ab14f18d38b299d787cf6d494c2e5c8ce6daad" HandleID="k8s-pod-network.366137c54142c1f493b57d4ae9ab14f18d38b299d787cf6d494c2e5c8ce6daad" Workload="localhost-k8s-csi--node--driver--z5dkh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001384f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-z5dkh", "timestamp":"2025-07-01 08:45:27.425749614 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:45:27.498566 containerd[1589]: 2025-07-01 08:45:27.425 [INFO][4455] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:45:27.498566 containerd[1589]: 2025-07-01 08:45:27.425 [INFO][4455] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 1 08:45:27.498566 containerd[1589]: 2025-07-01 08:45:27.426 [INFO][4455] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 1 08:45:27.498566 containerd[1589]: 2025-07-01 08:45:27.436 [INFO][4455] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.366137c54142c1f493b57d4ae9ab14f18d38b299d787cf6d494c2e5c8ce6daad" host="localhost" Jul 1 08:45:27.498566 containerd[1589]: 2025-07-01 08:45:27.442 [INFO][4455] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 1 08:45:27.498566 containerd[1589]: 2025-07-01 08:45:27.447 [INFO][4455] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 1 08:45:27.498566 containerd[1589]: 2025-07-01 08:45:27.449 [INFO][4455] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 1 08:45:27.498566 containerd[1589]: 2025-07-01 08:45:27.451 [INFO][4455] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" 
Jul 1 08:45:27.498566 containerd[1589]: 2025-07-01 08:45:27.451 [INFO][4455] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.366137c54142c1f493b57d4ae9ab14f18d38b299d787cf6d494c2e5c8ce6daad" host="localhost" Jul 1 08:45:27.498797 containerd[1589]: 2025-07-01 08:45:27.453 [INFO][4455] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.366137c54142c1f493b57d4ae9ab14f18d38b299d787cf6d494c2e5c8ce6daad Jul 1 08:45:27.498797 containerd[1589]: 2025-07-01 08:45:27.457 [INFO][4455] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.366137c54142c1f493b57d4ae9ab14f18d38b299d787cf6d494c2e5c8ce6daad" host="localhost" Jul 1 08:45:27.498797 containerd[1589]: 2025-07-01 08:45:27.464 [INFO][4455] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.366137c54142c1f493b57d4ae9ab14f18d38b299d787cf6d494c2e5c8ce6daad" host="localhost" Jul 1 08:45:27.498797 containerd[1589]: 2025-07-01 08:45:27.464 [INFO][4455] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.366137c54142c1f493b57d4ae9ab14f18d38b299d787cf6d494c2e5c8ce6daad" host="localhost" Jul 1 08:45:27.498797 containerd[1589]: 2025-07-01 08:45:27.467 [INFO][4455] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 1 08:45:27.498797 containerd[1589]: 2025-07-01 08:45:27.467 [INFO][4455] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="366137c54142c1f493b57d4ae9ab14f18d38b299d787cf6d494c2e5c8ce6daad" HandleID="k8s-pod-network.366137c54142c1f493b57d4ae9ab14f18d38b299d787cf6d494c2e5c8ce6daad" Workload="localhost-k8s-csi--node--driver--z5dkh-eth0" Jul 1 08:45:27.498950 containerd[1589]: 2025-07-01 08:45:27.471 [INFO][4410] cni-plugin/k8s.go 418: Populated endpoint ContainerID="366137c54142c1f493b57d4ae9ab14f18d38b299d787cf6d494c2e5c8ce6daad" Namespace="calico-system" Pod="csi-node-driver-z5dkh" WorkloadEndpoint="localhost-k8s-csi--node--driver--z5dkh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--z5dkh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cea5ec18-e730-41e6-b2b5-7746f9389260", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 45, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-z5dkh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"calic06c38dac43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:45:27.499026 containerd[1589]: 2025-07-01 08:45:27.471 [INFO][4410] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="366137c54142c1f493b57d4ae9ab14f18d38b299d787cf6d494c2e5c8ce6daad" Namespace="calico-system" Pod="csi-node-driver-z5dkh" WorkloadEndpoint="localhost-k8s-csi--node--driver--z5dkh-eth0" Jul 1 08:45:27.499026 containerd[1589]: 2025-07-01 08:45:27.471 [INFO][4410] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic06c38dac43 ContainerID="366137c54142c1f493b57d4ae9ab14f18d38b299d787cf6d494c2e5c8ce6daad" Namespace="calico-system" Pod="csi-node-driver-z5dkh" WorkloadEndpoint="localhost-k8s-csi--node--driver--z5dkh-eth0" Jul 1 08:45:27.499026 containerd[1589]: 2025-07-01 08:45:27.482 [INFO][4410] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="366137c54142c1f493b57d4ae9ab14f18d38b299d787cf6d494c2e5c8ce6daad" Namespace="calico-system" Pod="csi-node-driver-z5dkh" WorkloadEndpoint="localhost-k8s-csi--node--driver--z5dkh-eth0" Jul 1 08:45:27.499098 containerd[1589]: 2025-07-01 08:45:27.483 [INFO][4410] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="366137c54142c1f493b57d4ae9ab14f18d38b299d787cf6d494c2e5c8ce6daad" Namespace="calico-system" Pod="csi-node-driver-z5dkh" WorkloadEndpoint="localhost-k8s-csi--node--driver--z5dkh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--z5dkh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cea5ec18-e730-41e6-b2b5-7746f9389260", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 45, 0, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"366137c54142c1f493b57d4ae9ab14f18d38b299d787cf6d494c2e5c8ce6daad", Pod:"csi-node-driver-z5dkh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic06c38dac43", MAC:"96:c1:a3:43:fd:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:45:27.499218 containerd[1589]: 2025-07-01 08:45:27.492 [INFO][4410] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="366137c54142c1f493b57d4ae9ab14f18d38b299d787cf6d494c2e5c8ce6daad" Namespace="calico-system" Pod="csi-node-driver-z5dkh" WorkloadEndpoint="localhost-k8s-csi--node--driver--z5dkh-eth0" Jul 1 08:45:27.544723 containerd[1589]: time="2025-07-01T08:45:27.544671121Z" level=info msg="connecting to shim 366137c54142c1f493b57d4ae9ab14f18d38b299d787cf6d494c2e5c8ce6daad" address="unix:///run/containerd/s/b7a0ad0227c65c2609fab44dfe333572b858066282cd3ba12facae6e9e480b8e" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:45:27.598364 systemd-networkd[1482]: calica7bfe0df47: Link UP Jul 1 08:45:27.599514 systemd-networkd[1482]: calica7bfe0df47: Gained carrier Jul 1 08:45:27.614616 sshd[4478]: 
Connection closed by 10.0.0.1 port 47170 Jul 1 08:45:27.616220 sshd-session[4349]: pam_unix(sshd:session): session closed for user core Jul 1 08:45:27.623234 systemd[1]: sshd@8-10.0.0.127:22-10.0.0.1:47170.service: Deactivated successfully. Jul 1 08:45:27.626518 systemd[1]: session-9.scope: Deactivated successfully. Jul 1 08:45:27.630399 containerd[1589]: 2025-07-01 08:45:27.352 [INFO][4401] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--r4llv-eth0 coredns-674b8bbfcf- kube-system 1e68482e-ac4f-44ab-b782-a089c10516f3 895 0 2025-07-01 08:44:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-r4llv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calica7bfe0df47 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4ee0a60f609d6dae42c8d30004ce00067ff4822aef30d18e9345e6e4cd934943" Namespace="kube-system" Pod="coredns-674b8bbfcf-r4llv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--r4llv-" Jul 1 08:45:27.630399 containerd[1589]: 2025-07-01 08:45:27.356 [INFO][4401] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4ee0a60f609d6dae42c8d30004ce00067ff4822aef30d18e9345e6e4cd934943" Namespace="kube-system" Pod="coredns-674b8bbfcf-r4llv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--r4llv-eth0" Jul 1 08:45:27.630399 containerd[1589]: 2025-07-01 08:45:27.426 [INFO][4445] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ee0a60f609d6dae42c8d30004ce00067ff4822aef30d18e9345e6e4cd934943" HandleID="k8s-pod-network.4ee0a60f609d6dae42c8d30004ce00067ff4822aef30d18e9345e6e4cd934943" Workload="localhost-k8s-coredns--674b8bbfcf--r4llv-eth0" Jul 1 08:45:27.630587 containerd[1589]: 2025-07-01 08:45:27.426 [INFO][4445] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4ee0a60f609d6dae42c8d30004ce00067ff4822aef30d18e9345e6e4cd934943" HandleID="k8s-pod-network.4ee0a60f609d6dae42c8d30004ce00067ff4822aef30d18e9345e6e4cd934943" Workload="localhost-k8s-coredns--674b8bbfcf--r4llv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f630), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-r4llv", "timestamp":"2025-07-01 08:45:27.424033884 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:45:27.630587 containerd[1589]: 2025-07-01 08:45:27.426 [INFO][4445] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:45:27.630587 containerd[1589]: 2025-07-01 08:45:27.464 [INFO][4445] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 1 08:45:27.630587 containerd[1589]: 2025-07-01 08:45:27.464 [INFO][4445] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 1 08:45:27.630587 containerd[1589]: 2025-07-01 08:45:27.541 [INFO][4445] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4ee0a60f609d6dae42c8d30004ce00067ff4822aef30d18e9345e6e4cd934943" host="localhost" Jul 1 08:45:27.630587 containerd[1589]: 2025-07-01 08:45:27.551 [INFO][4445] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 1 08:45:27.630587 containerd[1589]: 2025-07-01 08:45:27.561 [INFO][4445] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 1 08:45:27.630587 containerd[1589]: 2025-07-01 08:45:27.564 [INFO][4445] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 1 08:45:27.630587 containerd[1589]: 2025-07-01 08:45:27.567 [INFO][4445] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 1 08:45:27.630587 containerd[1589]: 2025-07-01 08:45:27.567 [INFO][4445] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4ee0a60f609d6dae42c8d30004ce00067ff4822aef30d18e9345e6e4cd934943" host="localhost" Jul 1 08:45:27.630798 containerd[1589]: 2025-07-01 08:45:27.570 [INFO][4445] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4ee0a60f609d6dae42c8d30004ce00067ff4822aef30d18e9345e6e4cd934943 Jul 1 08:45:27.630798 containerd[1589]: 2025-07-01 08:45:27.576 [INFO][4445] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4ee0a60f609d6dae42c8d30004ce00067ff4822aef30d18e9345e6e4cd934943" host="localhost" Jul 1 08:45:27.630798 containerd[1589]: 2025-07-01 08:45:27.582 [INFO][4445] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.4ee0a60f609d6dae42c8d30004ce00067ff4822aef30d18e9345e6e4cd934943" host="localhost" Jul 1 08:45:27.630798 containerd[1589]: 2025-07-01 08:45:27.582 [INFO][4445] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.4ee0a60f609d6dae42c8d30004ce00067ff4822aef30d18e9345e6e4cd934943" host="localhost" Jul 1 08:45:27.630798 containerd[1589]: 2025-07-01 08:45:27.582 [INFO][4445] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:45:27.630798 containerd[1589]: 2025-07-01 08:45:27.582 [INFO][4445] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="4ee0a60f609d6dae42c8d30004ce00067ff4822aef30d18e9345e6e4cd934943" HandleID="k8s-pod-network.4ee0a60f609d6dae42c8d30004ce00067ff4822aef30d18e9345e6e4cd934943" Workload="localhost-k8s-coredns--674b8bbfcf--r4llv-eth0" Jul 1 08:45:27.630915 containerd[1589]: 2025-07-01 08:45:27.590 [INFO][4401] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4ee0a60f609d6dae42c8d30004ce00067ff4822aef30d18e9345e6e4cd934943" Namespace="kube-system" Pod="coredns-674b8bbfcf-r4llv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--r4llv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--r4llv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1e68482e-ac4f-44ab-b782-a089c10516f3", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 44, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-r4llv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calica7bfe0df47", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:45:27.630841 systemd-logind[1566]: Session 9 logged out. Waiting for processes to exit. 
Jul 1 08:45:27.631047 containerd[1589]: 2025-07-01 08:45:27.590 [INFO][4401] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="4ee0a60f609d6dae42c8d30004ce00067ff4822aef30d18e9345e6e4cd934943" Namespace="kube-system" Pod="coredns-674b8bbfcf-r4llv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--r4llv-eth0" Jul 1 08:45:27.631047 containerd[1589]: 2025-07-01 08:45:27.590 [INFO][4401] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calica7bfe0df47 ContainerID="4ee0a60f609d6dae42c8d30004ce00067ff4822aef30d18e9345e6e4cd934943" Namespace="kube-system" Pod="coredns-674b8bbfcf-r4llv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--r4llv-eth0" Jul 1 08:45:27.631047 containerd[1589]: 2025-07-01 08:45:27.609 [INFO][4401] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ee0a60f609d6dae42c8d30004ce00067ff4822aef30d18e9345e6e4cd934943" Namespace="kube-system" Pod="coredns-674b8bbfcf-r4llv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--r4llv-eth0" Jul 1 08:45:27.631120 containerd[1589]: 2025-07-01 08:45:27.610 [INFO][4401] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4ee0a60f609d6dae42c8d30004ce00067ff4822aef30d18e9345e6e4cd934943" Namespace="kube-system" Pod="coredns-674b8bbfcf-r4llv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--r4llv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--r4llv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1e68482e-ac4f-44ab-b782-a089c10516f3", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 44, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4ee0a60f609d6dae42c8d30004ce00067ff4822aef30d18e9345e6e4cd934943", Pod:"coredns-674b8bbfcf-r4llv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calica7bfe0df47", MAC:"92:cb:bf:f6:d2:e7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:45:27.631120 containerd[1589]: 2025-07-01 08:45:27.626 [INFO][4401] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4ee0a60f609d6dae42c8d30004ce00067ff4822aef30d18e9345e6e4cd934943" Namespace="kube-system" Pod="coredns-674b8bbfcf-r4llv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--r4llv-eth0" Jul 1 08:45:27.643424 systemd[1]: Started cri-containerd-366137c54142c1f493b57d4ae9ab14f18d38b299d787cf6d494c2e5c8ce6daad.scope - libcontainer container 366137c54142c1f493b57d4ae9ab14f18d38b299d787cf6d494c2e5c8ce6daad. Jul 1 08:45:27.645327 systemd-logind[1566]: Removed session 9. 
Jul 1 08:45:27.659867 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 1 08:45:27.664030 containerd[1589]: time="2025-07-01T08:45:27.663946464Z" level=info msg="connecting to shim 4ee0a60f609d6dae42c8d30004ce00067ff4822aef30d18e9345e6e4cd934943" address="unix:///run/containerd/s/35390ddf275007b0d9c65bfca7b53f8ff9bed90846d3d79bb8b83f9697403d0a" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:45:27.688580 containerd[1589]: time="2025-07-01T08:45:27.688419612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z5dkh,Uid:cea5ec18-e730-41e6-b2b5-7746f9389260,Namespace:calico-system,Attempt:0,} returns sandbox id \"366137c54142c1f493b57d4ae9ab14f18d38b299d787cf6d494c2e5c8ce6daad\"" Jul 1 08:45:27.700544 systemd[1]: Started cri-containerd-4ee0a60f609d6dae42c8d30004ce00067ff4822aef30d18e9345e6e4cd934943.scope - libcontainer container 4ee0a60f609d6dae42c8d30004ce00067ff4822aef30d18e9345e6e4cd934943. 
Jul 1 08:45:27.705978 systemd-networkd[1482]: cali666d00610b9: Link UP Jul 1 08:45:27.710479 systemd-networkd[1482]: cali666d00610b9: Gained carrier Jul 1 08:45:27.720990 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 1 08:45:27.726770 containerd[1589]: 2025-07-01 08:45:27.369 [INFO][4383] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--jm5hg-eth0 goldmane-768f4c5c69- calico-system 879479c0-ad20-4f01-ad04-7c7296882080 898 0 2025-07-01 08:45:00 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-jm5hg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali666d00610b9 [] [] }} ContainerID="38ea02d85b4bc1cb652c8ca6a682bff29e3cd3a7d7fb5065e28ed1be0b3f9e79" Namespace="calico-system" Pod="goldmane-768f4c5c69-jm5hg" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--jm5hg-" Jul 1 08:45:27.726770 containerd[1589]: 2025-07-01 08:45:27.369 [INFO][4383] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="38ea02d85b4bc1cb652c8ca6a682bff29e3cd3a7d7fb5065e28ed1be0b3f9e79" Namespace="calico-system" Pod="goldmane-768f4c5c69-jm5hg" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--jm5hg-eth0" Jul 1 08:45:27.726770 containerd[1589]: 2025-07-01 08:45:27.433 [INFO][4453] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38ea02d85b4bc1cb652c8ca6a682bff29e3cd3a7d7fb5065e28ed1be0b3f9e79" HandleID="k8s-pod-network.38ea02d85b4bc1cb652c8ca6a682bff29e3cd3a7d7fb5065e28ed1be0b3f9e79" Workload="localhost-k8s-goldmane--768f4c5c69--jm5hg-eth0" Jul 1 08:45:27.726770 containerd[1589]: 2025-07-01 08:45:27.433 [INFO][4453] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="38ea02d85b4bc1cb652c8ca6a682bff29e3cd3a7d7fb5065e28ed1be0b3f9e79" HandleID="k8s-pod-network.38ea02d85b4bc1cb652c8ca6a682bff29e3cd3a7d7fb5065e28ed1be0b3f9e79" Workload="localhost-k8s-goldmane--768f4c5c69--jm5hg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b1bb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-jm5hg", "timestamp":"2025-07-01 08:45:27.43315085 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:45:27.726770 containerd[1589]: 2025-07-01 08:45:27.434 [INFO][4453] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:45:27.726770 containerd[1589]: 2025-07-01 08:45:27.582 [INFO][4453] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 1 08:45:27.726770 containerd[1589]: 2025-07-01 08:45:27.582 [INFO][4453] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 1 08:45:27.726770 containerd[1589]: 2025-07-01 08:45:27.638 [INFO][4453] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.38ea02d85b4bc1cb652c8ca6a682bff29e3cd3a7d7fb5065e28ed1be0b3f9e79" host="localhost" Jul 1 08:45:27.726770 containerd[1589]: 2025-07-01 08:45:27.653 [INFO][4453] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 1 08:45:27.726770 containerd[1589]: 2025-07-01 08:45:27.666 [INFO][4453] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 1 08:45:27.726770 containerd[1589]: 2025-07-01 08:45:27.668 [INFO][4453] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 1 08:45:27.726770 containerd[1589]: 2025-07-01 08:45:27.673 [INFO][4453] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Jul 1 08:45:27.726770 containerd[1589]: 2025-07-01 08:45:27.673 [INFO][4453] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.38ea02d85b4bc1cb652c8ca6a682bff29e3cd3a7d7fb5065e28ed1be0b3f9e79" host="localhost" Jul 1 08:45:27.726770 containerd[1589]: 2025-07-01 08:45:27.675 [INFO][4453] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.38ea02d85b4bc1cb652c8ca6a682bff29e3cd3a7d7fb5065e28ed1be0b3f9e79 Jul 1 08:45:27.726770 containerd[1589]: 2025-07-01 08:45:27.681 [INFO][4453] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.38ea02d85b4bc1cb652c8ca6a682bff29e3cd3a7d7fb5065e28ed1be0b3f9e79" host="localhost" Jul 1 08:45:27.726770 containerd[1589]: 2025-07-01 08:45:27.692 [INFO][4453] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.38ea02d85b4bc1cb652c8ca6a682bff29e3cd3a7d7fb5065e28ed1be0b3f9e79" host="localhost" Jul 1 08:45:27.726770 containerd[1589]: 2025-07-01 08:45:27.692 [INFO][4453] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.38ea02d85b4bc1cb652c8ca6a682bff29e3cd3a7d7fb5065e28ed1be0b3f9e79" host="localhost" Jul 1 08:45:27.726770 containerd[1589]: 2025-07-01 08:45:27.692 [INFO][4453] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 1 08:45:27.726770 containerd[1589]: 2025-07-01 08:45:27.692 [INFO][4453] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="38ea02d85b4bc1cb652c8ca6a682bff29e3cd3a7d7fb5065e28ed1be0b3f9e79" HandleID="k8s-pod-network.38ea02d85b4bc1cb652c8ca6a682bff29e3cd3a7d7fb5065e28ed1be0b3f9e79" Workload="localhost-k8s-goldmane--768f4c5c69--jm5hg-eth0" Jul 1 08:45:27.727488 containerd[1589]: 2025-07-01 08:45:27.702 [INFO][4383] cni-plugin/k8s.go 418: Populated endpoint ContainerID="38ea02d85b4bc1cb652c8ca6a682bff29e3cd3a7d7fb5065e28ed1be0b3f9e79" Namespace="calico-system" Pod="goldmane-768f4c5c69-jm5hg" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--jm5hg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--jm5hg-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"879479c0-ad20-4f01-ad04-7c7296882080", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 45, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-jm5hg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali666d00610b9", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:45:27.727488 containerd[1589]: 2025-07-01 08:45:27.702 [INFO][4383] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="38ea02d85b4bc1cb652c8ca6a682bff29e3cd3a7d7fb5065e28ed1be0b3f9e79" Namespace="calico-system" Pod="goldmane-768f4c5c69-jm5hg" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--jm5hg-eth0" Jul 1 08:45:27.727488 containerd[1589]: 2025-07-01 08:45:27.703 [INFO][4383] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali666d00610b9 ContainerID="38ea02d85b4bc1cb652c8ca6a682bff29e3cd3a7d7fb5065e28ed1be0b3f9e79" Namespace="calico-system" Pod="goldmane-768f4c5c69-jm5hg" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--jm5hg-eth0" Jul 1 08:45:27.727488 containerd[1589]: 2025-07-01 08:45:27.711 [INFO][4383] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38ea02d85b4bc1cb652c8ca6a682bff29e3cd3a7d7fb5065e28ed1be0b3f9e79" Namespace="calico-system" Pod="goldmane-768f4c5c69-jm5hg" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--jm5hg-eth0" Jul 1 08:45:27.727488 containerd[1589]: 2025-07-01 08:45:27.711 [INFO][4383] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="38ea02d85b4bc1cb652c8ca6a682bff29e3cd3a7d7fb5065e28ed1be0b3f9e79" Namespace="calico-system" Pod="goldmane-768f4c5c69-jm5hg" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--jm5hg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--jm5hg-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"879479c0-ad20-4f01-ad04-7c7296882080", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 45, 0, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38ea02d85b4bc1cb652c8ca6a682bff29e3cd3a7d7fb5065e28ed1be0b3f9e79", Pod:"goldmane-768f4c5c69-jm5hg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali666d00610b9", MAC:"ca:82:23:be:ab:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:45:27.727488 containerd[1589]: 2025-07-01 08:45:27.722 [INFO][4383] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="38ea02d85b4bc1cb652c8ca6a682bff29e3cd3a7d7fb5065e28ed1be0b3f9e79" Namespace="calico-system" Pod="goldmane-768f4c5c69-jm5hg" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--jm5hg-eth0" Jul 1 08:45:27.762497 containerd[1589]: time="2025-07-01T08:45:27.762342123Z" level=info msg="connecting to shim 38ea02d85b4bc1cb652c8ca6a682bff29e3cd3a7d7fb5065e28ed1be0b3f9e79" address="unix:///run/containerd/s/8db5896d55557ef365ed5f8a2cfab6323d28948587daf62400b1c39708f530be" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:45:27.782683 containerd[1589]: time="2025-07-01T08:45:27.782559269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r4llv,Uid:1e68482e-ac4f-44ab-b782-a089c10516f3,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"4ee0a60f609d6dae42c8d30004ce00067ff4822aef30d18e9345e6e4cd934943\"" Jul 1 08:45:27.784256 kubelet[2783]: E0701 08:45:27.783803 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:45:27.795221 containerd[1589]: time="2025-07-01T08:45:27.795150646Z" level=info msg="CreateContainer within sandbox \"4ee0a60f609d6dae42c8d30004ce00067ff4822aef30d18e9345e6e4cd934943\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 1 08:45:27.803505 systemd[1]: Started cri-containerd-38ea02d85b4bc1cb652c8ca6a682bff29e3cd3a7d7fb5065e28ed1be0b3f9e79.scope - libcontainer container 38ea02d85b4bc1cb652c8ca6a682bff29e3cd3a7d7fb5065e28ed1be0b3f9e79. Jul 1 08:45:27.815505 systemd-networkd[1482]: cali157edbfba74: Link UP Jul 1 08:45:27.817368 systemd-networkd[1482]: cali157edbfba74: Gained carrier Jul 1 08:45:27.824928 containerd[1589]: time="2025-07-01T08:45:27.824869842Z" level=info msg="Container 6e202d8b0a11edbf9c833790b24c553232bbf23d9ca60f92ea8c2882cb7a6c33: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:45:27.828975 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 1 08:45:27.837876 containerd[1589]: time="2025-07-01T08:45:27.837749822Z" level=info msg="CreateContainer within sandbox \"4ee0a60f609d6dae42c8d30004ce00067ff4822aef30d18e9345e6e4cd934943\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6e202d8b0a11edbf9c833790b24c553232bbf23d9ca60f92ea8c2882cb7a6c33\"" Jul 1 08:45:27.838798 containerd[1589]: time="2025-07-01T08:45:27.838741299Z" level=info msg="StartContainer for \"6e202d8b0a11edbf9c833790b24c553232bbf23d9ca60f92ea8c2882cb7a6c33\"" Jul 1 08:45:27.840180 containerd[1589]: time="2025-07-01T08:45:27.840142387Z" level=info msg="connecting to shim 6e202d8b0a11edbf9c833790b24c553232bbf23d9ca60f92ea8c2882cb7a6c33" 
address="unix:///run/containerd/s/35390ddf275007b0d9c65bfca7b53f8ff9bed90846d3d79bb8b83f9697403d0a" protocol=ttrpc version=3 Jul 1 08:45:27.850663 containerd[1589]: 2025-07-01 08:45:27.399 [INFO][4372] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--84ff7c8cdf--hgbsf-eth0 whisker-84ff7c8cdf- calico-system 2845dca7-5c23-4b7c-961c-21d0f3682988 1056 0 2025-07-01 08:45:26 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:84ff7c8cdf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-84ff7c8cdf-hgbsf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali157edbfba74 [] [] }} ContainerID="2644e5ce2803139269d718ed689975c235304cfb50f3eb6816f7cff49e7df6ef" Namespace="calico-system" Pod="whisker-84ff7c8cdf-hgbsf" WorkloadEndpoint="localhost-k8s-whisker--84ff7c8cdf--hgbsf-" Jul 1 08:45:27.850663 containerd[1589]: 2025-07-01 08:45:27.399 [INFO][4372] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2644e5ce2803139269d718ed689975c235304cfb50f3eb6816f7cff49e7df6ef" Namespace="calico-system" Pod="whisker-84ff7c8cdf-hgbsf" WorkloadEndpoint="localhost-k8s-whisker--84ff7c8cdf--hgbsf-eth0" Jul 1 08:45:27.850663 containerd[1589]: 2025-07-01 08:45:27.440 [INFO][4476] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2644e5ce2803139269d718ed689975c235304cfb50f3eb6816f7cff49e7df6ef" HandleID="k8s-pod-network.2644e5ce2803139269d718ed689975c235304cfb50f3eb6816f7cff49e7df6ef" Workload="localhost-k8s-whisker--84ff7c8cdf--hgbsf-eth0" Jul 1 08:45:27.850663 containerd[1589]: 2025-07-01 08:45:27.441 [INFO][4476] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2644e5ce2803139269d718ed689975c235304cfb50f3eb6816f7cff49e7df6ef" 
HandleID="k8s-pod-network.2644e5ce2803139269d718ed689975c235304cfb50f3eb6816f7cff49e7df6ef" Workload="localhost-k8s-whisker--84ff7c8cdf--hgbsf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e580), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-84ff7c8cdf-hgbsf", "timestamp":"2025-07-01 08:45:27.440670317 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:45:27.850663 containerd[1589]: 2025-07-01 08:45:27.441 [INFO][4476] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:45:27.850663 containerd[1589]: 2025-07-01 08:45:27.692 [INFO][4476] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 1 08:45:27.850663 containerd[1589]: 2025-07-01 08:45:27.692 [INFO][4476] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 1 08:45:27.850663 containerd[1589]: 2025-07-01 08:45:27.739 [INFO][4476] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2644e5ce2803139269d718ed689975c235304cfb50f3eb6816f7cff49e7df6ef" host="localhost" Jul 1 08:45:27.850663 containerd[1589]: 2025-07-01 08:45:27.753 [INFO][4476] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 1 08:45:27.850663 containerd[1589]: 2025-07-01 08:45:27.765 [INFO][4476] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 1 08:45:27.850663 containerd[1589]: 2025-07-01 08:45:27.767 [INFO][4476] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 1 08:45:27.850663 containerd[1589]: 2025-07-01 08:45:27.769 [INFO][4476] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 1 08:45:27.850663 containerd[1589]: 2025-07-01 08:45:27.770 [INFO][4476] 
ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2644e5ce2803139269d718ed689975c235304cfb50f3eb6816f7cff49e7df6ef" host="localhost" Jul 1 08:45:27.850663 containerd[1589]: 2025-07-01 08:45:27.771 [INFO][4476] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2644e5ce2803139269d718ed689975c235304cfb50f3eb6816f7cff49e7df6ef Jul 1 08:45:27.850663 containerd[1589]: 2025-07-01 08:45:27.776 [INFO][4476] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2644e5ce2803139269d718ed689975c235304cfb50f3eb6816f7cff49e7df6ef" host="localhost" Jul 1 08:45:27.850663 containerd[1589]: 2025-07-01 08:45:27.788 [INFO][4476] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.2644e5ce2803139269d718ed689975c235304cfb50f3eb6816f7cff49e7df6ef" host="localhost" Jul 1 08:45:27.850663 containerd[1589]: 2025-07-01 08:45:27.792 [INFO][4476] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.2644e5ce2803139269d718ed689975c235304cfb50f3eb6816f7cff49e7df6ef" host="localhost" Jul 1 08:45:27.850663 containerd[1589]: 2025-07-01 08:45:27.792 [INFO][4476] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 1 08:45:27.850663 containerd[1589]: 2025-07-01 08:45:27.792 [INFO][4476] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="2644e5ce2803139269d718ed689975c235304cfb50f3eb6816f7cff49e7df6ef" HandleID="k8s-pod-network.2644e5ce2803139269d718ed689975c235304cfb50f3eb6816f7cff49e7df6ef" Workload="localhost-k8s-whisker--84ff7c8cdf--hgbsf-eth0" Jul 1 08:45:27.853855 containerd[1589]: 2025-07-01 08:45:27.805 [INFO][4372] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2644e5ce2803139269d718ed689975c235304cfb50f3eb6816f7cff49e7df6ef" Namespace="calico-system" Pod="whisker-84ff7c8cdf-hgbsf" WorkloadEndpoint="localhost-k8s-whisker--84ff7c8cdf--hgbsf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--84ff7c8cdf--hgbsf-eth0", GenerateName:"whisker-84ff7c8cdf-", Namespace:"calico-system", SelfLink:"", UID:"2845dca7-5c23-4b7c-961c-21d0f3682988", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 45, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84ff7c8cdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-84ff7c8cdf-hgbsf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali157edbfba74", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:45:27.853855 containerd[1589]: 2025-07-01 08:45:27.805 [INFO][4372] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="2644e5ce2803139269d718ed689975c235304cfb50f3eb6816f7cff49e7df6ef" Namespace="calico-system" Pod="whisker-84ff7c8cdf-hgbsf" WorkloadEndpoint="localhost-k8s-whisker--84ff7c8cdf--hgbsf-eth0" Jul 1 08:45:27.853855 containerd[1589]: 2025-07-01 08:45:27.805 [INFO][4372] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali157edbfba74 ContainerID="2644e5ce2803139269d718ed689975c235304cfb50f3eb6816f7cff49e7df6ef" Namespace="calico-system" Pod="whisker-84ff7c8cdf-hgbsf" WorkloadEndpoint="localhost-k8s-whisker--84ff7c8cdf--hgbsf-eth0" Jul 1 08:45:27.853855 containerd[1589]: 2025-07-01 08:45:27.820 [INFO][4372] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2644e5ce2803139269d718ed689975c235304cfb50f3eb6816f7cff49e7df6ef" Namespace="calico-system" Pod="whisker-84ff7c8cdf-hgbsf" WorkloadEndpoint="localhost-k8s-whisker--84ff7c8cdf--hgbsf-eth0" Jul 1 08:45:27.853855 containerd[1589]: 2025-07-01 08:45:27.822 [INFO][4372] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2644e5ce2803139269d718ed689975c235304cfb50f3eb6816f7cff49e7df6ef" Namespace="calico-system" Pod="whisker-84ff7c8cdf-hgbsf" WorkloadEndpoint="localhost-k8s-whisker--84ff7c8cdf--hgbsf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--84ff7c8cdf--hgbsf-eth0", GenerateName:"whisker-84ff7c8cdf-", Namespace:"calico-system", SelfLink:"", UID:"2845dca7-5c23-4b7c-961c-21d0f3682988", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 45, 26, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84ff7c8cdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2644e5ce2803139269d718ed689975c235304cfb50f3eb6816f7cff49e7df6ef", Pod:"whisker-84ff7c8cdf-hgbsf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali157edbfba74", MAC:"4e:ac:f9:46:9b:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:45:27.853855 containerd[1589]: 2025-07-01 08:45:27.834 [INFO][4372] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2644e5ce2803139269d718ed689975c235304cfb50f3eb6816f7cff49e7df6ef" Namespace="calico-system" Pod="whisker-84ff7c8cdf-hgbsf" WorkloadEndpoint="localhost-k8s-whisker--84ff7c8cdf--hgbsf-eth0" Jul 1 08:45:27.870469 systemd[1]: Started cri-containerd-6e202d8b0a11edbf9c833790b24c553232bbf23d9ca60f92ea8c2882cb7a6c33.scope - libcontainer container 6e202d8b0a11edbf9c833790b24c553232bbf23d9ca60f92ea8c2882cb7a6c33. 
Jul 1 08:45:27.885426 containerd[1589]: time="2025-07-01T08:45:27.885353391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-jm5hg,Uid:879479c0-ad20-4f01-ad04-7c7296882080,Namespace:calico-system,Attempt:0,} returns sandbox id \"38ea02d85b4bc1cb652c8ca6a682bff29e3cd3a7d7fb5065e28ed1be0b3f9e79\"" Jul 1 08:45:27.896534 containerd[1589]: time="2025-07-01T08:45:27.896475121Z" level=info msg="connecting to shim 2644e5ce2803139269d718ed689975c235304cfb50f3eb6816f7cff49e7df6ef" address="unix:///run/containerd/s/3442020d92344c9a1e20fab21a8fe7268a171937dae8b90d875428d10d0ac36a" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:45:27.920535 containerd[1589]: time="2025-07-01T08:45:27.920497190Z" level=info msg="StartContainer for \"6e202d8b0a11edbf9c833790b24c553232bbf23d9ca60f92ea8c2882cb7a6c33\" returns successfully" Jul 1 08:45:27.937710 systemd[1]: Started cri-containerd-2644e5ce2803139269d718ed689975c235304cfb50f3eb6816f7cff49e7df6ef.scope - libcontainer container 2644e5ce2803139269d718ed689975c235304cfb50f3eb6816f7cff49e7df6ef. 
Jul 1 08:45:27.959141 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 1 08:45:28.002070 containerd[1589]: time="2025-07-01T08:45:28.001935935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84ff7c8cdf-hgbsf,Uid:2845dca7-5c23-4b7c-961c-21d0f3682988,Namespace:calico-system,Attempt:0,} returns sandbox id \"2644e5ce2803139269d718ed689975c235304cfb50f3eb6816f7cff49e7df6ef\"" Jul 1 08:45:28.318468 kubelet[2783]: E0701 08:45:28.318420 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:45:28.331731 kubelet[2783]: I0701 08:45:28.331651 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-r4llv" podStartSLOduration=44.331631953 podStartE2EDuration="44.331631953s" podCreationTimestamp="2025-07-01 08:44:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:45:28.33130014 +0000 UTC m=+49.266553407" watchObservedRunningTime="2025-07-01 08:45:28.331631953 +0000 UTC m=+49.266885200" Jul 1 08:45:28.433379 systemd-networkd[1482]: calib2f7469fb11: Gained IPv6LL Jul 1 08:45:28.817353 systemd-networkd[1482]: vxlan.calico: Gained IPv6LL Jul 1 08:45:28.881377 systemd-networkd[1482]: cali157edbfba74: Gained IPv6LL Jul 1 08:45:29.009392 systemd-networkd[1482]: calic06c38dac43: Gained IPv6LL Jul 1 08:45:29.168011 kubelet[2783]: E0701 08:45:29.167964 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:45:29.168558 containerd[1589]: time="2025-07-01T08:45:29.168400230Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-9vwlh,Uid:c1984c03-0869-4331-a84e-10305a971a43,Namespace:kube-system,Attempt:0,}" Jul 1 08:45:29.265495 systemd-networkd[1482]: cali666d00610b9: Gained IPv6LL Jul 1 08:45:29.283962 systemd-networkd[1482]: cali897d7b78d31: Link UP Jul 1 08:45:29.284947 systemd-networkd[1482]: cali897d7b78d31: Gained carrier Jul 1 08:45:29.301441 containerd[1589]: 2025-07-01 08:45:29.210 [INFO][4802] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--9vwlh-eth0 coredns-674b8bbfcf- kube-system c1984c03-0869-4331-a84e-10305a971a43 896 0 2025-07-01 08:44:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-9vwlh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali897d7b78d31 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1e226c0a36867f6fae71596254a32b430804af356c8af0d9833eab704fea1bf0" Namespace="kube-system" Pod="coredns-674b8bbfcf-9vwlh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9vwlh-" Jul 1 08:45:29.301441 containerd[1589]: 2025-07-01 08:45:29.210 [INFO][4802] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1e226c0a36867f6fae71596254a32b430804af356c8af0d9833eab704fea1bf0" Namespace="kube-system" Pod="coredns-674b8bbfcf-9vwlh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9vwlh-eth0" Jul 1 08:45:29.301441 containerd[1589]: 2025-07-01 08:45:29.236 [INFO][4817] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e226c0a36867f6fae71596254a32b430804af356c8af0d9833eab704fea1bf0" HandleID="k8s-pod-network.1e226c0a36867f6fae71596254a32b430804af356c8af0d9833eab704fea1bf0" Workload="localhost-k8s-coredns--674b8bbfcf--9vwlh-eth0" Jul 1 08:45:29.301441 containerd[1589]: 
2025-07-01 08:45:29.237 [INFO][4817] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1e226c0a36867f6fae71596254a32b430804af356c8af0d9833eab704fea1bf0" HandleID="k8s-pod-network.1e226c0a36867f6fae71596254a32b430804af356c8af0d9833eab704fea1bf0" Workload="localhost-k8s-coredns--674b8bbfcf--9vwlh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138490), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-9vwlh", "timestamp":"2025-07-01 08:45:29.236897933 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:45:29.301441 containerd[1589]: 2025-07-01 08:45:29.237 [INFO][4817] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:45:29.301441 containerd[1589]: 2025-07-01 08:45:29.237 [INFO][4817] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 1 08:45:29.301441 containerd[1589]: 2025-07-01 08:45:29.237 [INFO][4817] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 1 08:45:29.301441 containerd[1589]: 2025-07-01 08:45:29.245 [INFO][4817] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1e226c0a36867f6fae71596254a32b430804af356c8af0d9833eab704fea1bf0" host="localhost" Jul 1 08:45:29.301441 containerd[1589]: 2025-07-01 08:45:29.252 [INFO][4817] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 1 08:45:29.301441 containerd[1589]: 2025-07-01 08:45:29.260 [INFO][4817] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 1 08:45:29.301441 containerd[1589]: 2025-07-01 08:45:29.262 [INFO][4817] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 1 08:45:29.301441 containerd[1589]: 2025-07-01 08:45:29.264 [INFO][4817] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 1 08:45:29.301441 containerd[1589]: 2025-07-01 08:45:29.264 [INFO][4817] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1e226c0a36867f6fae71596254a32b430804af356c8af0d9833eab704fea1bf0" host="localhost" Jul 1 08:45:29.301441 containerd[1589]: 2025-07-01 08:45:29.266 [INFO][4817] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1e226c0a36867f6fae71596254a32b430804af356c8af0d9833eab704fea1bf0 Jul 1 08:45:29.301441 containerd[1589]: 2025-07-01 08:45:29.271 [INFO][4817] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1e226c0a36867f6fae71596254a32b430804af356c8af0d9833eab704fea1bf0" host="localhost" Jul 1 08:45:29.301441 containerd[1589]: 2025-07-01 08:45:29.277 [INFO][4817] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.1e226c0a36867f6fae71596254a32b430804af356c8af0d9833eab704fea1bf0" host="localhost" Jul 1 08:45:29.301441 containerd[1589]: 2025-07-01 08:45:29.277 [INFO][4817] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.1e226c0a36867f6fae71596254a32b430804af356c8af0d9833eab704fea1bf0" host="localhost" Jul 1 08:45:29.301441 containerd[1589]: 2025-07-01 08:45:29.277 [INFO][4817] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:45:29.301441 containerd[1589]: 2025-07-01 08:45:29.277 [INFO][4817] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="1e226c0a36867f6fae71596254a32b430804af356c8af0d9833eab704fea1bf0" HandleID="k8s-pod-network.1e226c0a36867f6fae71596254a32b430804af356c8af0d9833eab704fea1bf0" Workload="localhost-k8s-coredns--674b8bbfcf--9vwlh-eth0" Jul 1 08:45:29.302023 containerd[1589]: 2025-07-01 08:45:29.281 [INFO][4802] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1e226c0a36867f6fae71596254a32b430804af356c8af0d9833eab704fea1bf0" Namespace="kube-system" Pod="coredns-674b8bbfcf-9vwlh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9vwlh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--9vwlh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c1984c03-0869-4331-a84e-10305a971a43", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 44, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-9vwlh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali897d7b78d31", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:45:29.302023 containerd[1589]: 2025-07-01 08:45:29.281 [INFO][4802] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="1e226c0a36867f6fae71596254a32b430804af356c8af0d9833eab704fea1bf0" Namespace="kube-system" Pod="coredns-674b8bbfcf-9vwlh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9vwlh-eth0" Jul 1 08:45:29.302023 containerd[1589]: 2025-07-01 08:45:29.281 [INFO][4802] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali897d7b78d31 ContainerID="1e226c0a36867f6fae71596254a32b430804af356c8af0d9833eab704fea1bf0" Namespace="kube-system" Pod="coredns-674b8bbfcf-9vwlh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9vwlh-eth0" Jul 1 08:45:29.302023 containerd[1589]: 2025-07-01 08:45:29.284 [INFO][4802] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e226c0a36867f6fae71596254a32b430804af356c8af0d9833eab704fea1bf0" Namespace="kube-system" Pod="coredns-674b8bbfcf-9vwlh" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9vwlh-eth0" Jul 1 08:45:29.302023 containerd[1589]: 2025-07-01 08:45:29.286 [INFO][4802] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1e226c0a36867f6fae71596254a32b430804af356c8af0d9833eab704fea1bf0" Namespace="kube-system" Pod="coredns-674b8bbfcf-9vwlh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9vwlh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--9vwlh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c1984c03-0869-4331-a84e-10305a971a43", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 44, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1e226c0a36867f6fae71596254a32b430804af356c8af0d9833eab704fea1bf0", Pod:"coredns-674b8bbfcf-9vwlh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali897d7b78d31", MAC:"6a:1c:86:1d:d1:80", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:45:29.302023 containerd[1589]: 2025-07-01 08:45:29.296 [INFO][4802] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1e226c0a36867f6fae71596254a32b430804af356c8af0d9833eab704fea1bf0" Namespace="kube-system" Pod="coredns-674b8bbfcf-9vwlh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9vwlh-eth0" Jul 1 08:45:29.337887 kubelet[2783]: E0701 08:45:29.337840 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:45:29.351084 containerd[1589]: time="2025-07-01T08:45:29.351017178Z" level=info msg="connecting to shim 1e226c0a36867f6fae71596254a32b430804af356c8af0d9833eab704fea1bf0" address="unix:///run/containerd/s/4cc7b8edd8627338403b28de9449560a38fef2f71aa11388dfa1ef8114efbb50" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:45:29.380343 systemd[1]: Started cri-containerd-1e226c0a36867f6fae71596254a32b430804af356c8af0d9833eab704fea1bf0.scope - libcontainer container 1e226c0a36867f6fae71596254a32b430804af356c8af0d9833eab704fea1bf0. 
Jul 1 08:45:29.393417 systemd-networkd[1482]: calica7bfe0df47: Gained IPv6LL Jul 1 08:45:29.397835 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 1 08:45:29.429201 containerd[1589]: time="2025-07-01T08:45:29.429026254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9vwlh,Uid:c1984c03-0869-4331-a84e-10305a971a43,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e226c0a36867f6fae71596254a32b430804af356c8af0d9833eab704fea1bf0\"" Jul 1 08:45:29.430327 kubelet[2783]: E0701 08:45:29.430289 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:45:29.436160 containerd[1589]: time="2025-07-01T08:45:29.436097093Z" level=info msg="CreateContainer within sandbox \"1e226c0a36867f6fae71596254a32b430804af356c8af0d9833eab704fea1bf0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 1 08:45:29.447382 containerd[1589]: time="2025-07-01T08:45:29.447335461Z" level=info msg="Container f794f1b206cfaa3a130db4b798fe38bcac02c0342569bd308811eb5b0ef69d33: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:45:29.454689 containerd[1589]: time="2025-07-01T08:45:29.454625284Z" level=info msg="CreateContainer within sandbox \"1e226c0a36867f6fae71596254a32b430804af356c8af0d9833eab704fea1bf0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f794f1b206cfaa3a130db4b798fe38bcac02c0342569bd308811eb5b0ef69d33\"" Jul 1 08:45:29.455477 containerd[1589]: time="2025-07-01T08:45:29.455261686Z" level=info msg="StartContainer for \"f794f1b206cfaa3a130db4b798fe38bcac02c0342569bd308811eb5b0ef69d33\"" Jul 1 08:45:29.456311 containerd[1589]: time="2025-07-01T08:45:29.456279327Z" level=info msg="connecting to shim f794f1b206cfaa3a130db4b798fe38bcac02c0342569bd308811eb5b0ef69d33" 
address="unix:///run/containerd/s/4cc7b8edd8627338403b28de9449560a38fef2f71aa11388dfa1ef8114efbb50" protocol=ttrpc version=3 Jul 1 08:45:29.479314 systemd[1]: Started cri-containerd-f794f1b206cfaa3a130db4b798fe38bcac02c0342569bd308811eb5b0ef69d33.scope - libcontainer container f794f1b206cfaa3a130db4b798fe38bcac02c0342569bd308811eb5b0ef69d33. Jul 1 08:45:29.513255 containerd[1589]: time="2025-07-01T08:45:29.512519377Z" level=info msg="StartContainer for \"f794f1b206cfaa3a130db4b798fe38bcac02c0342569bd308811eb5b0ef69d33\" returns successfully" Jul 1 08:45:30.411639 kubelet[2783]: E0701 08:45:30.411241 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:45:30.411639 kubelet[2783]: E0701 08:45:30.411459 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:45:31.057381 systemd-networkd[1482]: cali897d7b78d31: Gained IPv6LL Jul 1 08:45:31.342978 kubelet[2783]: E0701 08:45:31.342853 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:45:31.357919 kubelet[2783]: I0701 08:45:31.357745 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-9vwlh" podStartSLOduration=47.357724044 podStartE2EDuration="47.357724044s" podCreationTimestamp="2025-07-01 08:44:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:45:30.435403034 +0000 UTC m=+51.370656311" watchObservedRunningTime="2025-07-01 08:45:31.357724044 +0000 UTC m=+52.292977301" Jul 1 08:45:31.380770 containerd[1589]: time="2025-07-01T08:45:31.380717884Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:45:31.381495 containerd[1589]: time="2025-07-01T08:45:31.381456963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 1 08:45:31.389731 containerd[1589]: time="2025-07-01T08:45:31.389704362Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:45:31.392330 containerd[1589]: time="2025-07-01T08:45:31.392250823Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:45:31.392973 containerd[1589]: time="2025-07-01T08:45:31.392938443Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 4.131911158s" Jul 1 08:45:31.393030 containerd[1589]: time="2025-07-01T08:45:31.392973460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 1 08:45:31.395495 containerd[1589]: time="2025-07-01T08:45:31.395314636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 1 08:45:31.398565 containerd[1589]: time="2025-07-01T08:45:31.398538025Z" level=info msg="CreateContainer within sandbox \"62c4eb2f5544c2f09de7ac05d27a73ef78030fb6ad6e206ceeb987af612ff569\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 1 08:45:31.407471 
containerd[1589]: time="2025-07-01T08:45:31.407422526Z" level=info msg="Container 6646d209073bc971cfd9deaae1ceb944628e4036ed71c49d7fe992bcd9a41f3e: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:45:31.417376 containerd[1589]: time="2025-07-01T08:45:31.417339111Z" level=info msg="CreateContainer within sandbox \"62c4eb2f5544c2f09de7ac05d27a73ef78030fb6ad6e206ceeb987af612ff569\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6646d209073bc971cfd9deaae1ceb944628e4036ed71c49d7fe992bcd9a41f3e\"" Jul 1 08:45:31.417826 containerd[1589]: time="2025-07-01T08:45:31.417798269Z" level=info msg="StartContainer for \"6646d209073bc971cfd9deaae1ceb944628e4036ed71c49d7fe992bcd9a41f3e\"" Jul 1 08:45:31.418823 containerd[1589]: time="2025-07-01T08:45:31.418783423Z" level=info msg="connecting to shim 6646d209073bc971cfd9deaae1ceb944628e4036ed71c49d7fe992bcd9a41f3e" address="unix:///run/containerd/s/fd04f07c19731b73b5ad025178575ace3f7da9e2005ebfda026a080371b62f4f" protocol=ttrpc version=3 Jul 1 08:45:31.451413 systemd[1]: Started cri-containerd-6646d209073bc971cfd9deaae1ceb944628e4036ed71c49d7fe992bcd9a41f3e.scope - libcontainer container 6646d209073bc971cfd9deaae1ceb944628e4036ed71c49d7fe992bcd9a41f3e. 
Jul 1 08:45:31.503999 containerd[1589]: time="2025-07-01T08:45:31.503958870Z" level=info msg="StartContainer for \"6646d209073bc971cfd9deaae1ceb944628e4036ed71c49d7fe992bcd9a41f3e\" returns successfully" Jul 1 08:45:32.347087 kubelet[2783]: E0701 08:45:32.347033 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:45:32.356944 kubelet[2783]: I0701 08:45:32.356416 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d4b78f4d8-gwfrj" podStartSLOduration=31.220079011 podStartE2EDuration="35.356394545s" podCreationTimestamp="2025-07-01 08:44:57 +0000 UTC" firstStartedPulling="2025-07-01 08:45:27.258082029 +0000 UTC m=+48.193335286" lastFinishedPulling="2025-07-01 08:45:31.394397563 +0000 UTC m=+52.329650820" observedRunningTime="2025-07-01 08:45:32.356276146 +0000 UTC m=+53.291529403" watchObservedRunningTime="2025-07-01 08:45:32.356394545 +0000 UTC m=+53.291647802" Jul 1 08:45:32.632387 systemd[1]: Started sshd@9-10.0.0.127:22-10.0.0.1:58700.service - OpenSSH per-connection server daemon (10.0.0.1:58700). Jul 1 08:45:32.716776 sshd[4969]: Accepted publickey for core from 10.0.0.1 port 58700 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:45:32.718880 sshd-session[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:45:32.724347 systemd-logind[1566]: New session 10 of user core. Jul 1 08:45:32.735322 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 1 08:45:32.906540 sshd[4972]: Connection closed by 10.0.0.1 port 58700 Jul 1 08:45:32.907394 sshd-session[4969]: pam_unix(sshd:session): session closed for user core Jul 1 08:45:32.912732 systemd[1]: sshd@9-10.0.0.127:22-10.0.0.1:58700.service: Deactivated successfully. Jul 1 08:45:32.916297 systemd[1]: session-10.scope: Deactivated successfully. 
Jul 1 08:45:32.918492 systemd-logind[1566]: Session 10 logged out. Waiting for processes to exit. Jul 1 08:45:32.920377 systemd-logind[1566]: Removed session 10. Jul 1 08:45:33.352682 kubelet[2783]: E0701 08:45:33.352031 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:45:35.891993 containerd[1589]: time="2025-07-01T08:45:35.891918235Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:45:35.892793 containerd[1589]: time="2025-07-01T08:45:35.892734317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 1 08:45:35.893804 containerd[1589]: time="2025-07-01T08:45:35.893768419Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:45:35.895793 containerd[1589]: time="2025-07-01T08:45:35.895762080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:45:35.898439 containerd[1589]: time="2025-07-01T08:45:35.898406355Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 4.503061661s" Jul 1 08:45:35.898492 containerd[1589]: time="2025-07-01T08:45:35.898443958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference 
\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 1 08:45:35.899583 containerd[1589]: time="2025-07-01T08:45:35.899305547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 1 08:45:35.902672 containerd[1589]: time="2025-07-01T08:45:35.902619011Z" level=info msg="CreateContainer within sandbox \"366137c54142c1f493b57d4ae9ab14f18d38b299d787cf6d494c2e5c8ce6daad\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 1 08:45:35.921003 containerd[1589]: time="2025-07-01T08:45:35.920942665Z" level=info msg="Container c0123344f2db351837c9f71988403ff0f9994090f2341a8ba5a0892584c0d370: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:45:35.932193 containerd[1589]: time="2025-07-01T08:45:35.932121231Z" level=info msg="CreateContainer within sandbox \"366137c54142c1f493b57d4ae9ab14f18d38b299d787cf6d494c2e5c8ce6daad\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c0123344f2db351837c9f71988403ff0f9994090f2341a8ba5a0892584c0d370\"" Jul 1 08:45:35.932883 containerd[1589]: time="2025-07-01T08:45:35.932839816Z" level=info msg="StartContainer for \"c0123344f2db351837c9f71988403ff0f9994090f2341a8ba5a0892584c0d370\"" Jul 1 08:45:35.934550 containerd[1589]: time="2025-07-01T08:45:35.934516186Z" level=info msg="connecting to shim c0123344f2db351837c9f71988403ff0f9994090f2341a8ba5a0892584c0d370" address="unix:///run/containerd/s/b7a0ad0227c65c2609fab44dfe333572b858066282cd3ba12facae6e9e480b8e" protocol=ttrpc version=3 Jul 1 08:45:35.965380 systemd[1]: Started cri-containerd-c0123344f2db351837c9f71988403ff0f9994090f2341a8ba5a0892584c0d370.scope - libcontainer container c0123344f2db351837c9f71988403ff0f9994090f2341a8ba5a0892584c0d370. 
Jul 1 08:45:36.024092 containerd[1589]: time="2025-07-01T08:45:36.024051004Z" level=info msg="StartContainer for \"c0123344f2db351837c9f71988403ff0f9994090f2341a8ba5a0892584c0d370\" returns successfully" Jul 1 08:45:37.164316 containerd[1589]: time="2025-07-01T08:45:37.164217607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d4b78f4d8-8h9b6,Uid:39668e56-eacf-4877-9ea0-0f50aa91c90a,Namespace:calico-apiserver,Attempt:0,}" Jul 1 08:45:37.166095 containerd[1589]: time="2025-07-01T08:45:37.165976520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78898dc79-xzxvq,Uid:bd0b8d7d-cab8-4324-84d3-f9b60106f80e,Namespace:calico-system,Attempt:0,}" Jul 1 08:45:37.388775 systemd-networkd[1482]: cali1bc7289f506: Link UP Jul 1 08:45:37.390671 systemd-networkd[1482]: cali1bc7289f506: Gained carrier Jul 1 08:45:37.406386 containerd[1589]: 2025-07-01 08:45:37.237 [INFO][5042] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5d4b78f4d8--8h9b6-eth0 calico-apiserver-5d4b78f4d8- calico-apiserver 39668e56-eacf-4877-9ea0-0f50aa91c90a 893 0 2025-07-01 08:44:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d4b78f4d8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5d4b78f4d8-8h9b6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1bc7289f506 [] [] }} ContainerID="bee759d63bcf58d6255dba77927c26cf2fd7a7dd26ddcf2e83f7c75fa04ee3de" Namespace="calico-apiserver" Pod="calico-apiserver-5d4b78f4d8-8h9b6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d4b78f4d8--8h9b6-" Jul 1 08:45:37.406386 containerd[1589]: 2025-07-01 08:45:37.239 [INFO][5042] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="bee759d63bcf58d6255dba77927c26cf2fd7a7dd26ddcf2e83f7c75fa04ee3de" Namespace="calico-apiserver" Pod="calico-apiserver-5d4b78f4d8-8h9b6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d4b78f4d8--8h9b6-eth0" Jul 1 08:45:37.406386 containerd[1589]: 2025-07-01 08:45:37.272 [INFO][5065] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bee759d63bcf58d6255dba77927c26cf2fd7a7dd26ddcf2e83f7c75fa04ee3de" HandleID="k8s-pod-network.bee759d63bcf58d6255dba77927c26cf2fd7a7dd26ddcf2e83f7c75fa04ee3de" Workload="localhost-k8s-calico--apiserver--5d4b78f4d8--8h9b6-eth0" Jul 1 08:45:37.406386 containerd[1589]: 2025-07-01 08:45:37.272 [INFO][5065] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bee759d63bcf58d6255dba77927c26cf2fd7a7dd26ddcf2e83f7c75fa04ee3de" HandleID="k8s-pod-network.bee759d63bcf58d6255dba77927c26cf2fd7a7dd26ddcf2e83f7c75fa04ee3de" Workload="localhost-k8s-calico--apiserver--5d4b78f4d8--8h9b6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f1560), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5d4b78f4d8-8h9b6", "timestamp":"2025-07-01 08:45:37.27218862 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:45:37.406386 containerd[1589]: 2025-07-01 08:45:37.272 [INFO][5065] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:45:37.406386 containerd[1589]: 2025-07-01 08:45:37.272 [INFO][5065] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 1 08:45:37.406386 containerd[1589]: 2025-07-01 08:45:37.272 [INFO][5065] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 1 08:45:37.406386 containerd[1589]: 2025-07-01 08:45:37.281 [INFO][5065] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bee759d63bcf58d6255dba77927c26cf2fd7a7dd26ddcf2e83f7c75fa04ee3de" host="localhost" Jul 1 08:45:37.406386 containerd[1589]: 2025-07-01 08:45:37.361 [INFO][5065] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 1 08:45:37.406386 containerd[1589]: 2025-07-01 08:45:37.366 [INFO][5065] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 1 08:45:37.406386 containerd[1589]: 2025-07-01 08:45:37.367 [INFO][5065] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 1 08:45:37.406386 containerd[1589]: 2025-07-01 08:45:37.369 [INFO][5065] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 1 08:45:37.406386 containerd[1589]: 2025-07-01 08:45:37.369 [INFO][5065] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bee759d63bcf58d6255dba77927c26cf2fd7a7dd26ddcf2e83f7c75fa04ee3de" host="localhost" Jul 1 08:45:37.406386 containerd[1589]: 2025-07-01 08:45:37.371 [INFO][5065] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bee759d63bcf58d6255dba77927c26cf2fd7a7dd26ddcf2e83f7c75fa04ee3de Jul 1 08:45:37.406386 containerd[1589]: 2025-07-01 08:45:37.374 [INFO][5065] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bee759d63bcf58d6255dba77927c26cf2fd7a7dd26ddcf2e83f7c75fa04ee3de" host="localhost" Jul 1 08:45:37.406386 containerd[1589]: 2025-07-01 08:45:37.381 [INFO][5065] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.bee759d63bcf58d6255dba77927c26cf2fd7a7dd26ddcf2e83f7c75fa04ee3de" host="localhost" Jul 1 08:45:37.406386 containerd[1589]: 2025-07-01 08:45:37.382 [INFO][5065] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.bee759d63bcf58d6255dba77927c26cf2fd7a7dd26ddcf2e83f7c75fa04ee3de" host="localhost" Jul 1 08:45:37.406386 containerd[1589]: 2025-07-01 08:45:37.382 [INFO][5065] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:45:37.406386 containerd[1589]: 2025-07-01 08:45:37.382 [INFO][5065] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="bee759d63bcf58d6255dba77927c26cf2fd7a7dd26ddcf2e83f7c75fa04ee3de" HandleID="k8s-pod-network.bee759d63bcf58d6255dba77927c26cf2fd7a7dd26ddcf2e83f7c75fa04ee3de" Workload="localhost-k8s-calico--apiserver--5d4b78f4d8--8h9b6-eth0" Jul 1 08:45:37.407002 containerd[1589]: 2025-07-01 08:45:37.384 [INFO][5042] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bee759d63bcf58d6255dba77927c26cf2fd7a7dd26ddcf2e83f7c75fa04ee3de" Namespace="calico-apiserver" Pod="calico-apiserver-5d4b78f4d8-8h9b6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d4b78f4d8--8h9b6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d4b78f4d8--8h9b6-eth0", GenerateName:"calico-apiserver-5d4b78f4d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"39668e56-eacf-4877-9ea0-0f50aa91c90a", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 44, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d4b78f4d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5d4b78f4d8-8h9b6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1bc7289f506", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:45:37.407002 containerd[1589]: 2025-07-01 08:45:37.385 [INFO][5042] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="bee759d63bcf58d6255dba77927c26cf2fd7a7dd26ddcf2e83f7c75fa04ee3de" Namespace="calico-apiserver" Pod="calico-apiserver-5d4b78f4d8-8h9b6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d4b78f4d8--8h9b6-eth0" Jul 1 08:45:37.407002 containerd[1589]: 2025-07-01 08:45:37.385 [INFO][5042] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1bc7289f506 ContainerID="bee759d63bcf58d6255dba77927c26cf2fd7a7dd26ddcf2e83f7c75fa04ee3de" Namespace="calico-apiserver" Pod="calico-apiserver-5d4b78f4d8-8h9b6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d4b78f4d8--8h9b6-eth0" Jul 1 08:45:37.407002 containerd[1589]: 2025-07-01 08:45:37.391 [INFO][5042] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bee759d63bcf58d6255dba77927c26cf2fd7a7dd26ddcf2e83f7c75fa04ee3de" Namespace="calico-apiserver" Pod="calico-apiserver-5d4b78f4d8-8h9b6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d4b78f4d8--8h9b6-eth0" Jul 1 08:45:37.407002 containerd[1589]: 2025-07-01 08:45:37.392 [INFO][5042] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="bee759d63bcf58d6255dba77927c26cf2fd7a7dd26ddcf2e83f7c75fa04ee3de" Namespace="calico-apiserver" Pod="calico-apiserver-5d4b78f4d8-8h9b6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d4b78f4d8--8h9b6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d4b78f4d8--8h9b6-eth0", GenerateName:"calico-apiserver-5d4b78f4d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"39668e56-eacf-4877-9ea0-0f50aa91c90a", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 44, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d4b78f4d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bee759d63bcf58d6255dba77927c26cf2fd7a7dd26ddcf2e83f7c75fa04ee3de", Pod:"calico-apiserver-5d4b78f4d8-8h9b6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1bc7289f506", MAC:"4a:68:b6:57:29:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:45:37.407002 containerd[1589]: 2025-07-01 08:45:37.401 [INFO][5042] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="bee759d63bcf58d6255dba77927c26cf2fd7a7dd26ddcf2e83f7c75fa04ee3de" Namespace="calico-apiserver" Pod="calico-apiserver-5d4b78f4d8-8h9b6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d4b78f4d8--8h9b6-eth0" Jul 1 08:45:37.430522 containerd[1589]: time="2025-07-01T08:45:37.430398749Z" level=info msg="connecting to shim bee759d63bcf58d6255dba77927c26cf2fd7a7dd26ddcf2e83f7c75fa04ee3de" address="unix:///run/containerd/s/abbb052364da784ccd102a87d21287734288c702057f0c75fb2a5cb2f559bfe9" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:45:37.464443 systemd[1]: Started cri-containerd-bee759d63bcf58d6255dba77927c26cf2fd7a7dd26ddcf2e83f7c75fa04ee3de.scope - libcontainer container bee759d63bcf58d6255dba77927c26cf2fd7a7dd26ddcf2e83f7c75fa04ee3de. Jul 1 08:45:37.482928 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 1 08:45:37.501678 systemd-networkd[1482]: calib119818e271: Link UP Jul 1 08:45:37.502616 systemd-networkd[1482]: calib119818e271: Gained carrier Jul 1 08:45:37.522961 containerd[1589]: 2025-07-01 08:45:37.224 [INFO][5030] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--78898dc79--xzxvq-eth0 calico-kube-controllers-78898dc79- calico-system bd0b8d7d-cab8-4324-84d3-f9b60106f80e 894 0 2025-07-01 08:45:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78898dc79 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-78898dc79-xzxvq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib119818e271 [] [] }} ContainerID="aa10c041d2e7878c4cac9640306c6df81afa21c7f3024ad3c314780474715bca" Namespace="calico-system" 
Pod="calico-kube-controllers-78898dc79-xzxvq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78898dc79--xzxvq-" Jul 1 08:45:37.522961 containerd[1589]: 2025-07-01 08:45:37.224 [INFO][5030] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aa10c041d2e7878c4cac9640306c6df81afa21c7f3024ad3c314780474715bca" Namespace="calico-system" Pod="calico-kube-controllers-78898dc79-xzxvq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78898dc79--xzxvq-eth0" Jul 1 08:45:37.522961 containerd[1589]: 2025-07-01 08:45:37.275 [INFO][5059] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aa10c041d2e7878c4cac9640306c6df81afa21c7f3024ad3c314780474715bca" HandleID="k8s-pod-network.aa10c041d2e7878c4cac9640306c6df81afa21c7f3024ad3c314780474715bca" Workload="localhost-k8s-calico--kube--controllers--78898dc79--xzxvq-eth0" Jul 1 08:45:37.522961 containerd[1589]: 2025-07-01 08:45:37.276 [INFO][5059] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aa10c041d2e7878c4cac9640306c6df81afa21c7f3024ad3c314780474715bca" HandleID="k8s-pod-network.aa10c041d2e7878c4cac9640306c6df81afa21c7f3024ad3c314780474715bca" Workload="localhost-k8s-calico--kube--controllers--78898dc79--xzxvq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139420), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-78898dc79-xzxvq", "timestamp":"2025-07-01 08:45:37.275755222 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:45:37.522961 containerd[1589]: 2025-07-01 08:45:37.276 [INFO][5059] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 1 08:45:37.522961 containerd[1589]: 2025-07-01 08:45:37.382 [INFO][5059] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 1 08:45:37.522961 containerd[1589]: 2025-07-01 08:45:37.382 [INFO][5059] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 1 08:45:37.522961 containerd[1589]: 2025-07-01 08:45:37.392 [INFO][5059] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aa10c041d2e7878c4cac9640306c6df81afa21c7f3024ad3c314780474715bca" host="localhost" Jul 1 08:45:37.522961 containerd[1589]: 2025-07-01 08:45:37.463 [INFO][5059] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 1 08:45:37.522961 containerd[1589]: 2025-07-01 08:45:37.469 [INFO][5059] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 1 08:45:37.522961 containerd[1589]: 2025-07-01 08:45:37.472 [INFO][5059] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 1 08:45:37.522961 containerd[1589]: 2025-07-01 08:45:37.475 [INFO][5059] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 1 08:45:37.522961 containerd[1589]: 2025-07-01 08:45:37.475 [INFO][5059] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aa10c041d2e7878c4cac9640306c6df81afa21c7f3024ad3c314780474715bca" host="localhost" Jul 1 08:45:37.522961 containerd[1589]: 2025-07-01 08:45:37.477 [INFO][5059] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.aa10c041d2e7878c4cac9640306c6df81afa21c7f3024ad3c314780474715bca Jul 1 08:45:37.522961 containerd[1589]: 2025-07-01 08:45:37.481 [INFO][5059] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aa10c041d2e7878c4cac9640306c6df81afa21c7f3024ad3c314780474715bca" host="localhost" Jul 1 08:45:37.522961 containerd[1589]: 2025-07-01 08:45:37.492 [INFO][5059] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.aa10c041d2e7878c4cac9640306c6df81afa21c7f3024ad3c314780474715bca" host="localhost" Jul 1 08:45:37.522961 containerd[1589]: 2025-07-01 08:45:37.492 [INFO][5059] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.aa10c041d2e7878c4cac9640306c6df81afa21c7f3024ad3c314780474715bca" host="localhost" Jul 1 08:45:37.522961 containerd[1589]: 2025-07-01 08:45:37.492 [INFO][5059] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:45:37.522961 containerd[1589]: 2025-07-01 08:45:37.492 [INFO][5059] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="aa10c041d2e7878c4cac9640306c6df81afa21c7f3024ad3c314780474715bca" HandleID="k8s-pod-network.aa10c041d2e7878c4cac9640306c6df81afa21c7f3024ad3c314780474715bca" Workload="localhost-k8s-calico--kube--controllers--78898dc79--xzxvq-eth0" Jul 1 08:45:37.523539 containerd[1589]: 2025-07-01 08:45:37.497 [INFO][5030] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aa10c041d2e7878c4cac9640306c6df81afa21c7f3024ad3c314780474715bca" Namespace="calico-system" Pod="calico-kube-controllers-78898dc79-xzxvq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78898dc79--xzxvq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78898dc79--xzxvq-eth0", GenerateName:"calico-kube-controllers-78898dc79-", Namespace:"calico-system", SelfLink:"", UID:"bd0b8d7d-cab8-4324-84d3-f9b60106f80e", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 45, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78898dc79", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-78898dc79-xzxvq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib119818e271", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:45:37.523539 containerd[1589]: 2025-07-01 08:45:37.497 [INFO][5030] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="aa10c041d2e7878c4cac9640306c6df81afa21c7f3024ad3c314780474715bca" Namespace="calico-system" Pod="calico-kube-controllers-78898dc79-xzxvq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78898dc79--xzxvq-eth0" Jul 1 08:45:37.523539 containerd[1589]: 2025-07-01 08:45:37.497 [INFO][5030] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib119818e271 ContainerID="aa10c041d2e7878c4cac9640306c6df81afa21c7f3024ad3c314780474715bca" Namespace="calico-system" Pod="calico-kube-controllers-78898dc79-xzxvq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78898dc79--xzxvq-eth0" Jul 1 08:45:37.523539 containerd[1589]: 2025-07-01 08:45:37.502 [INFO][5030] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aa10c041d2e7878c4cac9640306c6df81afa21c7f3024ad3c314780474715bca" Namespace="calico-system" Pod="calico-kube-controllers-78898dc79-xzxvq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78898dc79--xzxvq-eth0" Jul 1 
08:45:37.523539 containerd[1589]: 2025-07-01 08:45:37.503 [INFO][5030] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aa10c041d2e7878c4cac9640306c6df81afa21c7f3024ad3c314780474715bca" Namespace="calico-system" Pod="calico-kube-controllers-78898dc79-xzxvq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78898dc79--xzxvq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78898dc79--xzxvq-eth0", GenerateName:"calico-kube-controllers-78898dc79-", Namespace:"calico-system", SelfLink:"", UID:"bd0b8d7d-cab8-4324-84d3-f9b60106f80e", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 45, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78898dc79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aa10c041d2e7878c4cac9640306c6df81afa21c7f3024ad3c314780474715bca", Pod:"calico-kube-controllers-78898dc79-xzxvq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib119818e271", MAC:"e6:53:0c:09:91:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 
08:45:37.523539 containerd[1589]: 2025-07-01 08:45:37.516 [INFO][5030] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aa10c041d2e7878c4cac9640306c6df81afa21c7f3024ad3c314780474715bca" Namespace="calico-system" Pod="calico-kube-controllers-78898dc79-xzxvq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78898dc79--xzxvq-eth0" Jul 1 08:45:37.528037 containerd[1589]: time="2025-07-01T08:45:37.528000474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d4b78f4d8-8h9b6,Uid:39668e56-eacf-4877-9ea0-0f50aa91c90a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"bee759d63bcf58d6255dba77927c26cf2fd7a7dd26ddcf2e83f7c75fa04ee3de\"" Jul 1 08:45:37.537661 containerd[1589]: time="2025-07-01T08:45:37.537612677Z" level=info msg="CreateContainer within sandbox \"bee759d63bcf58d6255dba77927c26cf2fd7a7dd26ddcf2e83f7c75fa04ee3de\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 1 08:45:37.549078 containerd[1589]: time="2025-07-01T08:45:37.549012218Z" level=info msg="Container f992bf8d9daac184fcafaaf732ed21dee53ee4b475be6882b83e53e42d871614: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:45:37.565372 containerd[1589]: time="2025-07-01T08:45:37.565325713Z" level=info msg="CreateContainer within sandbox \"bee759d63bcf58d6255dba77927c26cf2fd7a7dd26ddcf2e83f7c75fa04ee3de\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f992bf8d9daac184fcafaaf732ed21dee53ee4b475be6882b83e53e42d871614\"" Jul 1 08:45:37.567369 containerd[1589]: time="2025-07-01T08:45:37.566220425Z" level=info msg="StartContainer for \"f992bf8d9daac184fcafaaf732ed21dee53ee4b475be6882b83e53e42d871614\"" Jul 1 08:45:37.568020 containerd[1589]: time="2025-07-01T08:45:37.567965423Z" level=info msg="connecting to shim f992bf8d9daac184fcafaaf732ed21dee53ee4b475be6882b83e53e42d871614" address="unix:///run/containerd/s/abbb052364da784ccd102a87d21287734288c702057f0c75fb2a5cb2f559bfe9" protocol=ttrpc 
version=3 Jul 1 08:45:37.569394 containerd[1589]: time="2025-07-01T08:45:37.569154380Z" level=info msg="connecting to shim aa10c041d2e7878c4cac9640306c6df81afa21c7f3024ad3c314780474715bca" address="unix:///run/containerd/s/b91d13ac7ba19ea33a2df07dd0065b6b876aabfd95097324f1f5e4bc7ec79ec8" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:45:37.587319 systemd[1]: Started cri-containerd-f992bf8d9daac184fcafaaf732ed21dee53ee4b475be6882b83e53e42d871614.scope - libcontainer container f992bf8d9daac184fcafaaf732ed21dee53ee4b475be6882b83e53e42d871614. Jul 1 08:45:37.590824 systemd[1]: Started cri-containerd-aa10c041d2e7878c4cac9640306c6df81afa21c7f3024ad3c314780474715bca.scope - libcontainer container aa10c041d2e7878c4cac9640306c6df81afa21c7f3024ad3c314780474715bca. Jul 1 08:45:37.606099 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 1 08:45:37.651442 containerd[1589]: time="2025-07-01T08:45:37.651356577Z" level=info msg="StartContainer for \"f992bf8d9daac184fcafaaf732ed21dee53ee4b475be6882b83e53e42d871614\" returns successfully" Jul 1 08:45:37.661216 containerd[1589]: time="2025-07-01T08:45:37.661020850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78898dc79-xzxvq,Uid:bd0b8d7d-cab8-4324-84d3-f9b60106f80e,Namespace:calico-system,Attempt:0,} returns sandbox id \"aa10c041d2e7878c4cac9640306c6df81afa21c7f3024ad3c314780474715bca\"" Jul 1 08:45:37.922046 systemd[1]: Started sshd@10-10.0.0.127:22-10.0.0.1:58716.service - OpenSSH per-connection server daemon (10.0.0.1:58716). Jul 1 08:45:37.996748 sshd[5231]: Accepted publickey for core from 10.0.0.1 port 58716 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:45:37.999029 sshd-session[5231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:45:38.004272 systemd-logind[1566]: New session 11 of user core. 
Jul 1 08:45:38.012457 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 1 08:45:38.188075 sshd[5234]: Connection closed by 10.0.0.1 port 58716 Jul 1 08:45:38.188407 sshd-session[5231]: pam_unix(sshd:session): session closed for user core Jul 1 08:45:38.199755 systemd[1]: sshd@10-10.0.0.127:22-10.0.0.1:58716.service: Deactivated successfully. Jul 1 08:45:38.204439 systemd[1]: session-11.scope: Deactivated successfully. Jul 1 08:45:38.208539 systemd-logind[1566]: Session 11 logged out. Waiting for processes to exit. Jul 1 08:45:38.211656 systemd[1]: Started sshd@11-10.0.0.127:22-10.0.0.1:36824.service - OpenSSH per-connection server daemon (10.0.0.1:36824). Jul 1 08:45:38.213820 systemd-logind[1566]: Removed session 11. Jul 1 08:45:38.277353 sshd[5248]: Accepted publickey for core from 10.0.0.1 port 36824 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:45:38.278740 sshd-session[5248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:45:38.284867 systemd-logind[1566]: New session 12 of user core. Jul 1 08:45:38.300443 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 1 08:45:38.606351 sshd[5251]: Connection closed by 10.0.0.1 port 36824 Jul 1 08:45:38.605340 sshd-session[5248]: pam_unix(sshd:session): session closed for user core Jul 1 08:45:38.615671 systemd[1]: sshd@11-10.0.0.127:22-10.0.0.1:36824.service: Deactivated successfully. Jul 1 08:45:38.618504 systemd[1]: session-12.scope: Deactivated successfully. Jul 1 08:45:38.620139 systemd-logind[1566]: Session 12 logged out. Waiting for processes to exit. Jul 1 08:45:38.626037 systemd[1]: Started sshd@12-10.0.0.127:22-10.0.0.1:36838.service - OpenSSH per-connection server daemon (10.0.0.1:36838). Jul 1 08:45:38.627215 systemd-logind[1566]: Removed session 12. 
Jul 1 08:45:38.673535 systemd-networkd[1482]: calib119818e271: Gained IPv6LL Jul 1 08:45:38.708140 sshd[5269]: Accepted publickey for core from 10.0.0.1 port 36838 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:45:38.710132 sshd-session[5269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:45:38.717325 systemd-logind[1566]: New session 13 of user core. Jul 1 08:45:38.722352 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 1 08:45:38.966865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3224454335.mount: Deactivated successfully. Jul 1 08:45:39.122372 systemd-networkd[1482]: cali1bc7289f506: Gained IPv6LL Jul 1 08:45:39.378211 kubelet[2783]: I0701 08:45:39.377951 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d4b78f4d8-8h9b6" podStartSLOduration=42.37792989 podStartE2EDuration="42.37792989s" podCreationTimestamp="2025-07-01 08:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:45:38.502472362 +0000 UTC m=+59.437725649" watchObservedRunningTime="2025-07-01 08:45:39.37792989 +0000 UTC m=+60.313183147" Jul 1 08:45:39.384023 sshd[5272]: Connection closed by 10.0.0.1 port 36838 Jul 1 08:45:39.384525 sshd-session[5269]: pam_unix(sshd:session): session closed for user core Jul 1 08:45:39.389366 systemd[1]: sshd@12-10.0.0.127:22-10.0.0.1:36838.service: Deactivated successfully. Jul 1 08:45:39.391513 systemd[1]: session-13.scope: Deactivated successfully. Jul 1 08:45:39.392465 systemd-logind[1566]: Session 13 logged out. Waiting for processes to exit. Jul 1 08:45:39.393782 systemd-logind[1566]: Removed session 13. 
Jul 1 08:45:40.274816 containerd[1589]: time="2025-07-01T08:45:40.274762987Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:45:40.275897 containerd[1589]: time="2025-07-01T08:45:40.275836268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308"
Jul 1 08:45:40.276890 containerd[1589]: time="2025-07-01T08:45:40.276860324Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:45:40.286277 containerd[1589]: time="2025-07-01T08:45:40.286217732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:45:40.287301 containerd[1589]: time="2025-07-01T08:45:40.287272015Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.387936311s"
Jul 1 08:45:40.287374 containerd[1589]: time="2025-07-01T08:45:40.287307083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\""
Jul 1 08:45:40.288429 containerd[1589]: time="2025-07-01T08:45:40.288376887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\""
Jul 1 08:45:40.293748 containerd[1589]: time="2025-07-01T08:45:40.293705919Z" level=info msg="CreateContainer within sandbox \"38ea02d85b4bc1cb652c8ca6a682bff29e3cd3a7d7fb5065e28ed1be0b3f9e79\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Jul 1 08:45:40.302927 containerd[1589]: time="2025-07-01T08:45:40.302885033Z" level=info msg="Container 70f359e862d9f3d2b0691ab10d070220ccf6a7027bfa98737d31a8d1c2c8dee8: CDI devices from CRI Config.CDIDevices: []"
Jul 1 08:45:40.311526 containerd[1589]: time="2025-07-01T08:45:40.311485077Z" level=info msg="CreateContainer within sandbox \"38ea02d85b4bc1cb652c8ca6a682bff29e3cd3a7d7fb5065e28ed1be0b3f9e79\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"70f359e862d9f3d2b0691ab10d070220ccf6a7027bfa98737d31a8d1c2c8dee8\""
Jul 1 08:45:40.312009 containerd[1589]: time="2025-07-01T08:45:40.311986860Z" level=info msg="StartContainer for \"70f359e862d9f3d2b0691ab10d070220ccf6a7027bfa98737d31a8d1c2c8dee8\""
Jul 1 08:45:40.313239 containerd[1589]: time="2025-07-01T08:45:40.313205239Z" level=info msg="connecting to shim 70f359e862d9f3d2b0691ab10d070220ccf6a7027bfa98737d31a8d1c2c8dee8" address="unix:///run/containerd/s/8db5896d55557ef365ed5f8a2cfab6323d28948587daf62400b1c39708f530be" protocol=ttrpc version=3
Jul 1 08:45:40.333319 systemd[1]: Started cri-containerd-70f359e862d9f3d2b0691ab10d070220ccf6a7027bfa98737d31a8d1c2c8dee8.scope - libcontainer container 70f359e862d9f3d2b0691ab10d070220ccf6a7027bfa98737d31a8d1c2c8dee8.
Jul 1 08:45:40.382388 containerd[1589]: time="2025-07-01T08:45:40.382345183Z" level=info msg="StartContainer for \"70f359e862d9f3d2b0691ab10d070220ccf6a7027bfa98737d31a8d1c2c8dee8\" returns successfully"
Jul 1 08:45:40.493202 kubelet[2783]: I0701 08:45:40.491702 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-jm5hg" podStartSLOduration=28.090924126 podStartE2EDuration="40.491683131s" podCreationTimestamp="2025-07-01 08:45:00 +0000 UTC" firstStartedPulling="2025-07-01 08:45:27.887422797 +0000 UTC m=+48.822676054" lastFinishedPulling="2025-07-01 08:45:40.288181802 +0000 UTC m=+61.223435059" observedRunningTime="2025-07-01 08:45:40.491494028 +0000 UTC m=+61.426747295" watchObservedRunningTime="2025-07-01 08:45:40.491683131 +0000 UTC m=+61.426936388"
Jul 1 08:45:41.580879 containerd[1589]: time="2025-07-01T08:45:41.580815261Z" level=info msg="TaskExit event in podsandbox handler container_id:\"70f359e862d9f3d2b0691ab10d070220ccf6a7027bfa98737d31a8d1c2c8dee8\" id:\"0a09054f561b4411af7eb5d0ba9199003f7d8ad934981238d7769b42012ac0dd\" pid:5352 exit_status:1 exited_at:{seconds:1751359541 nanos:580355719}"
Jul 1 08:45:42.002340 containerd[1589]: time="2025-07-01T08:45:42.002278581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:45:42.003370 containerd[1589]: time="2025-07-01T08:45:42.003287606Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207"
Jul 1 08:45:42.004710 containerd[1589]: time="2025-07-01T08:45:42.004682732Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:45:42.007902 containerd[1589]: time="2025-07-01T08:45:42.007855887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:45:42.008500 containerd[1589]: time="2025-07-01T08:45:42.008463653Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.720056978s"
Jul 1 08:45:42.008564 containerd[1589]: time="2025-07-01T08:45:42.008505213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\""
Jul 1 08:45:42.009779 containerd[1589]: time="2025-07-01T08:45:42.009565175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Jul 1 08:45:42.015585 containerd[1589]: time="2025-07-01T08:45:42.015492612Z" level=info msg="CreateContainer within sandbox \"2644e5ce2803139269d718ed689975c235304cfb50f3eb6816f7cff49e7df6ef\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Jul 1 08:45:42.026182 containerd[1589]: time="2025-07-01T08:45:42.026095566Z" level=info msg="Container c771b0efefbd4b5c353bd53fee40ca2cfc6be146c6fb63a95742d49fd5f872c0: CDI devices from CRI Config.CDIDevices: []"
Jul 1 08:45:42.037212 containerd[1589]: time="2025-07-01T08:45:42.037144475Z" level=info msg="CreateContainer within sandbox \"2644e5ce2803139269d718ed689975c235304cfb50f3eb6816f7cff49e7df6ef\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"c771b0efefbd4b5c353bd53fee40ca2cfc6be146c6fb63a95742d49fd5f872c0\""
Jul 1 08:45:42.037820 containerd[1589]: time="2025-07-01T08:45:42.037798008Z" level=info msg="StartContainer for \"c771b0efefbd4b5c353bd53fee40ca2cfc6be146c6fb63a95742d49fd5f872c0\""
Jul 1 08:45:42.038862 containerd[1589]: time="2025-07-01T08:45:42.038825759Z" level=info msg="connecting to shim c771b0efefbd4b5c353bd53fee40ca2cfc6be146c6fb63a95742d49fd5f872c0" address="unix:///run/containerd/s/3442020d92344c9a1e20fab21a8fe7268a171937dae8b90d875428d10d0ac36a" protocol=ttrpc version=3
Jul 1 08:45:42.067506 systemd[1]: Started cri-containerd-c771b0efefbd4b5c353bd53fee40ca2cfc6be146c6fb63a95742d49fd5f872c0.scope - libcontainer container c771b0efefbd4b5c353bd53fee40ca2cfc6be146c6fb63a95742d49fd5f872c0.
Jul 1 08:45:42.141507 containerd[1589]: time="2025-07-01T08:45:42.141461226Z" level=info msg="StartContainer for \"c771b0efefbd4b5c353bd53fee40ca2cfc6be146c6fb63a95742d49fd5f872c0\" returns successfully"
Jul 1 08:45:42.562292 containerd[1589]: time="2025-07-01T08:45:42.562213929Z" level=info msg="TaskExit event in podsandbox handler container_id:\"70f359e862d9f3d2b0691ab10d070220ccf6a7027bfa98737d31a8d1c2c8dee8\" id:\"95c96402690fa956abb7e24a461b3876f1c410eddc4ebb91f36d26bfbf5d6e51\" pid:5413 exit_status:1 exited_at:{seconds:1751359542 nanos:561843980}"
Jul 1 08:45:44.404394 systemd[1]: Started sshd@13-10.0.0.127:22-10.0.0.1:36842.service - OpenSSH per-connection server daemon (10.0.0.1:36842).
Jul 1 08:45:44.482557 sshd[5428]: Accepted publickey for core from 10.0.0.1 port 36842 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg
Jul 1 08:45:44.484324 sshd-session[5428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:45:44.490402 systemd-logind[1566]: New session 14 of user core.
Jul 1 08:45:44.498315 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 1 08:45:44.688501 sshd[5431]: Connection closed by 10.0.0.1 port 36842
Jul 1 08:45:44.689375 sshd-session[5428]: pam_unix(sshd:session): session closed for user core
Jul 1 08:45:44.694615 systemd[1]: sshd@13-10.0.0.127:22-10.0.0.1:36842.service: Deactivated successfully.
Jul 1 08:45:44.696873 systemd[1]: session-14.scope: Deactivated successfully.
Jul 1 08:45:44.697802 systemd-logind[1566]: Session 14 logged out. Waiting for processes to exit.
Jul 1 08:45:44.698973 systemd-logind[1566]: Removed session 14.
Jul 1 08:45:44.994065 containerd[1589]: time="2025-07-01T08:45:44.993885861Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:45:44.994988 containerd[1589]: time="2025-07-01T08:45:44.994949308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784"
Jul 1 08:45:44.996194 containerd[1589]: time="2025-07-01T08:45:44.996135881Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:45:44.998482 containerd[1589]: time="2025-07-01T08:45:44.998411219Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:45:44.998941 containerd[1589]: time="2025-07-01T08:45:44.998906257Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.989295295s"
Jul 1 08:45:44.998974 containerd[1589]: time="2025-07-01T08:45:44.998940553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\""
Jul 1 08:45:44.999937 containerd[1589]: time="2025-07-01T08:45:44.999720216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\""
Jul 1 08:45:45.003918 containerd[1589]: time="2025-07-01T08:45:45.003891123Z" level=info msg="CreateContainer within sandbox \"366137c54142c1f493b57d4ae9ab14f18d38b299d787cf6d494c2e5c8ce6daad\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 1 08:45:45.015800 containerd[1589]: time="2025-07-01T08:45:45.015738415Z" level=info msg="Container 71d60f3e6620ae677f2927f1d37573fa95b3c23231d6b5a3b7a3ef0752ca29b4: CDI devices from CRI Config.CDIDevices: []"
Jul 1 08:45:45.027467 containerd[1589]: time="2025-07-01T08:45:45.027418977Z" level=info msg="CreateContainer within sandbox \"366137c54142c1f493b57d4ae9ab14f18d38b299d787cf6d494c2e5c8ce6daad\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"71d60f3e6620ae677f2927f1d37573fa95b3c23231d6b5a3b7a3ef0752ca29b4\""
Jul 1 08:45:45.028234 containerd[1589]: time="2025-07-01T08:45:45.028041488Z" level=info msg="StartContainer for \"71d60f3e6620ae677f2927f1d37573fa95b3c23231d6b5a3b7a3ef0752ca29b4\""
Jul 1 08:45:45.029807 containerd[1589]: time="2025-07-01T08:45:45.029762964Z" level=info msg="connecting to shim 71d60f3e6620ae677f2927f1d37573fa95b3c23231d6b5a3b7a3ef0752ca29b4" address="unix:///run/containerd/s/b7a0ad0227c65c2609fab44dfe333572b858066282cd3ba12facae6e9e480b8e" protocol=ttrpc version=3
Jul 1 08:45:45.057361 systemd[1]: Started cri-containerd-71d60f3e6620ae677f2927f1d37573fa95b3c23231d6b5a3b7a3ef0752ca29b4.scope - libcontainer container 71d60f3e6620ae677f2927f1d37573fa95b3c23231d6b5a3b7a3ef0752ca29b4.
Jul 1 08:45:45.104837 containerd[1589]: time="2025-07-01T08:45:45.104790554Z" level=info msg="StartContainer for \"71d60f3e6620ae677f2927f1d37573fa95b3c23231d6b5a3b7a3ef0752ca29b4\" returns successfully"
Jul 1 08:45:45.241554 kubelet[2783]: I0701 08:45:45.241507 2783 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 1 08:45:45.262449 kubelet[2783]: I0701 08:45:45.262292 2783 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 1 08:45:49.703260 systemd[1]: Started sshd@14-10.0.0.127:22-10.0.0.1:46616.service - OpenSSH per-connection server daemon (10.0.0.1:46616).
Jul 1 08:45:49.777193 sshd[5495]: Accepted publickey for core from 10.0.0.1 port 46616 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg
Jul 1 08:45:49.779136 sshd-session[5495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:45:49.785222 systemd-logind[1566]: New session 15 of user core.
Jul 1 08:45:49.793360 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 1 08:45:49.928072 sshd[5498]: Connection closed by 10.0.0.1 port 46616
Jul 1 08:45:49.928471 sshd-session[5495]: pam_unix(sshd:session): session closed for user core
Jul 1 08:45:49.933212 systemd[1]: sshd@14-10.0.0.127:22-10.0.0.1:46616.service: Deactivated successfully.
Jul 1 08:45:49.935655 systemd[1]: session-15.scope: Deactivated successfully.
Jul 1 08:45:49.936708 systemd-logind[1566]: Session 15 logged out. Waiting for processes to exit.
Jul 1 08:45:49.939326 systemd-logind[1566]: Removed session 15.
Jul 1 08:45:52.707051 containerd[1589]: time="2025-07-01T08:45:52.706969261Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:45:52.735486 containerd[1589]: time="2025-07-01T08:45:52.735336937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688"
Jul 1 08:45:52.763684 containerd[1589]: time="2025-07-01T08:45:52.763605163Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:45:52.783641 containerd[1589]: time="2025-07-01T08:45:52.783556340Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:45:52.784308 containerd[1589]: time="2025-07-01T08:45:52.784251797Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 7.784497847s"
Jul 1 08:45:52.784308 containerd[1589]: time="2025-07-01T08:45:52.784299808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\""
Jul 1 08:45:52.785375 containerd[1589]: time="2025-07-01T08:45:52.785338761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\""
Jul 1 08:45:52.812143 containerd[1589]: time="2025-07-01T08:45:52.812078761Z" level=info msg="CreateContainer within sandbox \"aa10c041d2e7878c4cac9640306c6df81afa21c7f3024ad3c314780474715bca\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jul 1 08:45:52.827838 containerd[1589]: time="2025-07-01T08:45:52.827789688Z" level=info msg="Container 2444fe21e05fb070cec9f4a3774b2b9311748cb88be44815bee34cb816f61352: CDI devices from CRI Config.CDIDevices: []"
Jul 1 08:45:52.837200 containerd[1589]: time="2025-07-01T08:45:52.837149486Z" level=info msg="CreateContainer within sandbox \"aa10c041d2e7878c4cac9640306c6df81afa21c7f3024ad3c314780474715bca\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2444fe21e05fb070cec9f4a3774b2b9311748cb88be44815bee34cb816f61352\""
Jul 1 08:45:52.837860 containerd[1589]: time="2025-07-01T08:45:52.837648779Z" level=info msg="StartContainer for \"2444fe21e05fb070cec9f4a3774b2b9311748cb88be44815bee34cb816f61352\""
Jul 1 08:45:52.838806 containerd[1589]: time="2025-07-01T08:45:52.838767884Z" level=info msg="connecting to shim 2444fe21e05fb070cec9f4a3774b2b9311748cb88be44815bee34cb816f61352" address="unix:///run/containerd/s/b91d13ac7ba19ea33a2df07dd0065b6b876aabfd95097324f1f5e4bc7ec79ec8" protocol=ttrpc version=3
Jul 1 08:45:52.872405 systemd[1]: Started cri-containerd-2444fe21e05fb070cec9f4a3774b2b9311748cb88be44815bee34cb816f61352.scope - libcontainer container 2444fe21e05fb070cec9f4a3774b2b9311748cb88be44815bee34cb816f61352.
Jul 1 08:45:52.926730 containerd[1589]: time="2025-07-01T08:45:52.926681734Z" level=info msg="StartContainer for \"2444fe21e05fb070cec9f4a3774b2b9311748cb88be44815bee34cb816f61352\" returns successfully"
Jul 1 08:45:53.549660 containerd[1589]: time="2025-07-01T08:45:53.549613426Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2444fe21e05fb070cec9f4a3774b2b9311748cb88be44815bee34cb816f61352\" id:\"c81fd6f68887ba5e0d01dedcd6db4f8a751883af8bb6f33e2129da48ff8b18ea\" pid:5570 exited_at:{seconds:1751359553 nanos:549367447}"
Jul 1 08:45:53.903500 kubelet[2783]: I0701 08:45:53.903421 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-z5dkh" podStartSLOduration=36.59414144 podStartE2EDuration="53.903404836s" podCreationTimestamp="2025-07-01 08:45:00 +0000 UTC" firstStartedPulling="2025-07-01 08:45:27.69036254 +0000 UTC m=+48.625615787" lastFinishedPulling="2025-07-01 08:45:44.999625926 +0000 UTC m=+65.934879183" observedRunningTime="2025-07-01 08:45:45.507539201 +0000 UTC m=+66.442792478" watchObservedRunningTime="2025-07-01 08:45:53.903404836 +0000 UTC m=+74.838658093"
Jul 1 08:45:53.904229 kubelet[2783]: I0701 08:45:53.903843 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-78898dc79-xzxvq" podStartSLOduration=38.781423709 podStartE2EDuration="53.903838362s" podCreationTimestamp="2025-07-01 08:45:00 +0000 UTC" firstStartedPulling="2025-07-01 08:45:37.662686785 +0000 UTC m=+58.597940052" lastFinishedPulling="2025-07-01 08:45:52.785101448 +0000 UTC m=+73.720354705" observedRunningTime="2025-07-01 08:45:53.90315001 +0000 UTC m=+74.838403267" watchObservedRunningTime="2025-07-01 08:45:53.903838362 +0000 UTC m=+74.839091619"
Jul 1 08:45:54.941811 systemd[1]: Started sshd@15-10.0.0.127:22-10.0.0.1:46618.service - OpenSSH per-connection server daemon (10.0.0.1:46618).
Jul 1 08:45:55.016289 sshd[5581]: Accepted publickey for core from 10.0.0.1 port 46618 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg
Jul 1 08:45:55.018079 sshd-session[5581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:45:55.022931 systemd-logind[1566]: New session 16 of user core.
Jul 1 08:45:55.027282 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 1 08:45:55.228015 sshd[5584]: Connection closed by 10.0.0.1 port 46618
Jul 1 08:45:55.228934 sshd-session[5581]: pam_unix(sshd:session): session closed for user core
Jul 1 08:45:55.234765 systemd[1]: sshd@15-10.0.0.127:22-10.0.0.1:46618.service: Deactivated successfully.
Jul 1 08:45:55.237708 systemd[1]: session-16.scope: Deactivated successfully.
Jul 1 08:45:55.239189 systemd-logind[1566]: Session 16 logged out. Waiting for processes to exit.
Jul 1 08:45:55.241489 systemd-logind[1566]: Removed session 16.
Jul 1 08:45:56.162531 kubelet[2783]: E0701 08:45:56.162481 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 1 08:45:57.093743 containerd[1589]: time="2025-07-01T08:45:57.093694587Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ffc63f112ba90bfe68dab9f47cf7f6ac365000e24ed2f30fb5c845481099fea0\" id:\"f5d6edafd010402c13df9c56a93afb102042e09495f4002aaace739995f9738b\" pid:5608 exited_at:{seconds:1751359557 nanos:93314633}"
Jul 1 08:45:57.446177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount67471606.mount: Deactivated successfully.
Jul 1 08:45:57.877637 containerd[1589]: time="2025-07-01T08:45:57.877576353Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:45:57.881629 containerd[1589]: time="2025-07-01T08:45:57.881544199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477"
Jul 1 08:45:57.883475 containerd[1589]: time="2025-07-01T08:45:57.883408719Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:45:57.886872 containerd[1589]: time="2025-07-01T08:45:57.886793855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 1 08:45:57.887663 containerd[1589]: time="2025-07-01T08:45:57.887604829Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 5.102225351s"
Jul 1 08:45:57.887663 containerd[1589]: time="2025-07-01T08:45:57.887655806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\""
Jul 1 08:45:57.896395 containerd[1589]: time="2025-07-01T08:45:57.896349822Z" level=info msg="CreateContainer within sandbox \"2644e5ce2803139269d718ed689975c235304cfb50f3eb6816f7cff49e7df6ef\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Jul 1 08:45:57.978143 containerd[1589]: time="2025-07-01T08:45:57.978059051Z" level=info msg="Container 67372368289f410d7ef410a583179cf3adde55f6a73374a1489a268243bb9ee9: CDI devices from CRI Config.CDIDevices: []"
Jul 1 08:45:58.011552 containerd[1589]: time="2025-07-01T08:45:58.011472309Z" level=info msg="CreateContainer within sandbox \"2644e5ce2803139269d718ed689975c235304cfb50f3eb6816f7cff49e7df6ef\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"67372368289f410d7ef410a583179cf3adde55f6a73374a1489a268243bb9ee9\""
Jul 1 08:45:58.013417 containerd[1589]: time="2025-07-01T08:45:58.012128377Z" level=info msg="StartContainer for \"67372368289f410d7ef410a583179cf3adde55f6a73374a1489a268243bb9ee9\""
Jul 1 08:45:58.013417 containerd[1589]: time="2025-07-01T08:45:58.013315557Z" level=info msg="connecting to shim 67372368289f410d7ef410a583179cf3adde55f6a73374a1489a268243bb9ee9" address="unix:///run/containerd/s/3442020d92344c9a1e20fab21a8fe7268a171937dae8b90d875428d10d0ac36a" protocol=ttrpc version=3
Jul 1 08:45:58.041389 systemd[1]: Started cri-containerd-67372368289f410d7ef410a583179cf3adde55f6a73374a1489a268243bb9ee9.scope - libcontainer container 67372368289f410d7ef410a583179cf3adde55f6a73374a1489a268243bb9ee9.
Jul 1 08:45:58.106742 containerd[1589]: time="2025-07-01T08:45:58.106679454Z" level=info msg="StartContainer for \"67372368289f410d7ef410a583179cf3adde55f6a73374a1489a268243bb9ee9\" returns successfully" Jul 1 08:45:58.588417 kubelet[2783]: I0701 08:45:58.588336 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-84ff7c8cdf-hgbsf" podStartSLOduration=2.703267044 podStartE2EDuration="32.58831292s" podCreationTimestamp="2025-07-01 08:45:26 +0000 UTC" firstStartedPulling="2025-07-01 08:45:28.003757896 +0000 UTC m=+48.939011153" lastFinishedPulling="2025-07-01 08:45:57.888803772 +0000 UTC m=+78.824057029" observedRunningTime="2025-07-01 08:45:58.588254468 +0000 UTC m=+79.523507725" watchObservedRunningTime="2025-07-01 08:45:58.58831292 +0000 UTC m=+79.523566177" Jul 1 08:46:00.163330 kubelet[2783]: E0701 08:46:00.163278 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:46:00.247043 systemd[1]: Started sshd@16-10.0.0.127:22-10.0.0.1:38778.service - OpenSSH per-connection server daemon (10.0.0.1:38778). Jul 1 08:46:00.327567 sshd[5666]: Accepted publickey for core from 10.0.0.1 port 38778 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:46:00.329023 sshd-session[5666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:46:00.333293 systemd-logind[1566]: New session 17 of user core. Jul 1 08:46:00.344336 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 1 08:46:00.665588 sshd[5669]: Connection closed by 10.0.0.1 port 38778 Jul 1 08:46:00.665943 sshd-session[5666]: pam_unix(sshd:session): session closed for user core Jul 1 08:46:00.671112 systemd[1]: sshd@16-10.0.0.127:22-10.0.0.1:38778.service: Deactivated successfully. Jul 1 08:46:00.674120 systemd[1]: session-17.scope: Deactivated successfully. 
Jul 1 08:46:00.675122 systemd-logind[1566]: Session 17 logged out. Waiting for processes to exit. Jul 1 08:46:00.677034 systemd-logind[1566]: Removed session 17. Jul 1 08:46:02.163230 kubelet[2783]: E0701 08:46:02.163181 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:46:04.336279 containerd[1589]: time="2025-07-01T08:46:04.336217218Z" level=info msg="TaskExit event in podsandbox handler container_id:\"70f359e862d9f3d2b0691ab10d070220ccf6a7027bfa98737d31a8d1c2c8dee8\" id:\"ff9cf39d683d1cf32dc51a50ee31f75fa6d4943a78e8eb7e52b48a3222ff4daa\" pid:5694 exited_at:{seconds:1751359564 nanos:335702441}" Jul 1 08:46:05.680530 systemd[1]: Started sshd@17-10.0.0.127:22-10.0.0.1:38780.service - OpenSSH per-connection server daemon (10.0.0.1:38780). Jul 1 08:46:05.766113 sshd[5707]: Accepted publickey for core from 10.0.0.1 port 38780 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:46:05.767906 sshd-session[5707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:46:05.773476 systemd-logind[1566]: New session 18 of user core. Jul 1 08:46:05.781365 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 1 08:46:06.053449 sshd[5710]: Connection closed by 10.0.0.1 port 38780 Jul 1 08:46:06.053753 sshd-session[5707]: pam_unix(sshd:session): session closed for user core Jul 1 08:46:06.066616 systemd[1]: sshd@17-10.0.0.127:22-10.0.0.1:38780.service: Deactivated successfully. Jul 1 08:46:06.069198 systemd[1]: session-18.scope: Deactivated successfully. Jul 1 08:46:06.070536 systemd-logind[1566]: Session 18 logged out. Waiting for processes to exit. Jul 1 08:46:06.075139 systemd[1]: Started sshd@18-10.0.0.127:22-10.0.0.1:38790.service - OpenSSH per-connection server daemon (10.0.0.1:38790). Jul 1 08:46:06.076505 systemd-logind[1566]: Removed session 18. 
Jul 1 08:46:06.131831 sshd[5723]: Accepted publickey for core from 10.0.0.1 port 38790 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:46:06.133522 sshd-session[5723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:46:06.138879 systemd-logind[1566]: New session 19 of user core. Jul 1 08:46:06.150485 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 1 08:46:06.163106 kubelet[2783]: E0701 08:46:06.163048 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:46:06.453217 sshd[5726]: Connection closed by 10.0.0.1 port 38790 Jul 1 08:46:06.453670 sshd-session[5723]: pam_unix(sshd:session): session closed for user core Jul 1 08:46:06.462801 systemd[1]: sshd@18-10.0.0.127:22-10.0.0.1:38790.service: Deactivated successfully. Jul 1 08:46:06.465639 systemd[1]: session-19.scope: Deactivated successfully. Jul 1 08:46:06.466801 systemd-logind[1566]: Session 19 logged out. Waiting for processes to exit. Jul 1 08:46:06.471076 systemd[1]: Started sshd@19-10.0.0.127:22-10.0.0.1:38798.service - OpenSSH per-connection server daemon (10.0.0.1:38798). Jul 1 08:46:06.472342 systemd-logind[1566]: Removed session 19. Jul 1 08:46:06.547518 sshd[5739]: Accepted publickey for core from 10.0.0.1 port 38798 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:46:06.550421 sshd-session[5739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:46:06.573818 systemd-logind[1566]: New session 20 of user core. Jul 1 08:46:06.584842 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jul 1 08:46:07.615308 sshd[5742]: Connection closed by 10.0.0.1 port 38798 Jul 1 08:46:07.616542 sshd-session[5739]: pam_unix(sshd:session): session closed for user core Jul 1 08:46:07.627369 systemd[1]: sshd@19-10.0.0.127:22-10.0.0.1:38798.service: Deactivated successfully. Jul 1 08:46:07.630831 systemd[1]: session-20.scope: Deactivated successfully. Jul 1 08:46:07.636971 systemd-logind[1566]: Session 20 logged out. Waiting for processes to exit. Jul 1 08:46:07.643766 systemd[1]: Started sshd@20-10.0.0.127:22-10.0.0.1:38800.service - OpenSSH per-connection server daemon (10.0.0.1:38800). Jul 1 08:46:07.645601 systemd-logind[1566]: Removed session 20. Jul 1 08:46:07.712643 sshd[5761]: Accepted publickey for core from 10.0.0.1 port 38800 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:46:07.714715 sshd-session[5761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:46:07.720392 systemd-logind[1566]: New session 21 of user core. Jul 1 08:46:07.728400 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 1 08:46:08.089331 sshd[5764]: Connection closed by 10.0.0.1 port 38800 Jul 1 08:46:08.090520 sshd-session[5761]: pam_unix(sshd:session): session closed for user core Jul 1 08:46:08.100809 systemd[1]: sshd@20-10.0.0.127:22-10.0.0.1:38800.service: Deactivated successfully. Jul 1 08:46:08.103878 systemd[1]: session-21.scope: Deactivated successfully. Jul 1 08:46:08.105376 systemd-logind[1566]: Session 21 logged out. Waiting for processes to exit. Jul 1 08:46:08.108157 systemd[1]: Started sshd@21-10.0.0.127:22-10.0.0.1:43288.service - OpenSSH per-connection server daemon (10.0.0.1:43288). Jul 1 08:46:08.109192 systemd-logind[1566]: Removed session 21. 
Jul 1 08:46:08.171647 sshd[5782]: Accepted publickey for core from 10.0.0.1 port 43288 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg
Jul 1 08:46:08.173403 sshd-session[5782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:46:08.178310 systemd-logind[1566]: New session 22 of user core.
Jul 1 08:46:08.188361 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 1 08:46:08.315619 sshd[5785]: Connection closed by 10.0.0.1 port 43288
Jul 1 08:46:08.315958 sshd-session[5782]: pam_unix(sshd:session): session closed for user core
Jul 1 08:46:08.321069 systemd[1]: sshd@21-10.0.0.127:22-10.0.0.1:43288.service: Deactivated successfully.
Jul 1 08:46:08.323766 systemd[1]: session-22.scope: Deactivated successfully.
Jul 1 08:46:08.324783 systemd-logind[1566]: Session 22 logged out. Waiting for processes to exit.
Jul 1 08:46:08.326227 systemd-logind[1566]: Removed session 22.
Jul 1 08:46:12.500557 containerd[1589]: time="2025-07-01T08:46:12.500507754Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2444fe21e05fb070cec9f4a3774b2b9311748cb88be44815bee34cb816f61352\" id:\"8e9b0c897410734d2bf63ab56a69dd4e3261b07822da289394796bbbefbca10f\" pid:5810 exited_at:{seconds:1751359572 nanos:498685399}"
Jul 1 08:46:12.582866 containerd[1589]: time="2025-07-01T08:46:12.582817515Z" level=info msg="TaskExit event in podsandbox handler container_id:\"70f359e862d9f3d2b0691ab10d070220ccf6a7027bfa98737d31a8d1c2c8dee8\" id:\"f3edda7eb66ea5b6e1bc55e0ba822c7b608facc687cf0b7b710fff0c6034509f\" pid:5834 exited_at:{seconds:1751359572 nanos:582520361}"
Jul 1 08:46:13.331770 systemd[1]: Started sshd@22-10.0.0.127:22-10.0.0.1:43300.service - OpenSSH per-connection server daemon (10.0.0.1:43300).
Jul 1 08:46:13.387425 sshd[5848]: Accepted publickey for core from 10.0.0.1 port 43300 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg
Jul 1 08:46:13.389638 sshd-session[5848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:46:13.396389 systemd-logind[1566]: New session 23 of user core.
Jul 1 08:46:13.405415 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 1 08:46:13.755988 sshd[5851]: Connection closed by 10.0.0.1 port 43300
Jul 1 08:46:13.756354 sshd-session[5848]: pam_unix(sshd:session): session closed for user core
Jul 1 08:46:13.760712 systemd[1]: sshd@22-10.0.0.127:22-10.0.0.1:43300.service: Deactivated successfully.
Jul 1 08:46:13.762915 systemd[1]: session-23.scope: Deactivated successfully.
Jul 1 08:46:13.763810 systemd-logind[1566]: Session 23 logged out. Waiting for processes to exit.
Jul 1 08:46:13.765459 systemd-logind[1566]: Removed session 23.
Jul 1 08:46:18.767692 systemd[1]: Started sshd@23-10.0.0.127:22-10.0.0.1:53210.service - OpenSSH per-connection server daemon (10.0.0.1:53210).
Jul 1 08:46:18.917116 sshd[5870]: Accepted publickey for core from 10.0.0.1 port 53210 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg
Jul 1 08:46:18.917199 sshd-session[5870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:46:18.981104 systemd-logind[1566]: New session 24 of user core.
Jul 1 08:46:18.990285 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 1 08:46:19.168851 sshd[5873]: Connection closed by 10.0.0.1 port 53210
Jul 1 08:46:19.169273 sshd-session[5870]: pam_unix(sshd:session): session closed for user core
Jul 1 08:46:19.175006 systemd[1]: sshd@23-10.0.0.127:22-10.0.0.1:53210.service: Deactivated successfully.
Jul 1 08:46:19.177362 systemd[1]: session-24.scope: Deactivated successfully.
Jul 1 08:46:19.178228 systemd-logind[1566]: Session 24 logged out. Waiting for processes to exit.
Jul 1 08:46:19.179578 systemd-logind[1566]: Removed session 24.
Jul 1 08:46:20.163220 kubelet[2783]: E0701 08:46:20.162963 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 1 08:46:23.574470 containerd[1589]: time="2025-07-01T08:46:23.574402626Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2444fe21e05fb070cec9f4a3774b2b9311748cb88be44815bee34cb816f61352\" id:\"1537a935b2e986c6dc79483cf358649e01bb6a8ca581f1ee87e13a6b6500bd07\" pid:5900 exited_at:{seconds:1751359583 nanos:574201556}"
Jul 1 08:46:24.183849 systemd[1]: Started sshd@24-10.0.0.127:22-10.0.0.1:53226.service - OpenSSH per-connection server daemon (10.0.0.1:53226).
Jul 1 08:46:24.258110 sshd[5911]: Accepted publickey for core from 10.0.0.1 port 53226 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg
Jul 1 08:46:24.260211 sshd-session[5911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 1 08:46:24.265563 systemd-logind[1566]: New session 25 of user core.
Jul 1 08:46:24.276339 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 1 08:46:24.426883 sshd[5914]: Connection closed by 10.0.0.1 port 53226
Jul 1 08:46:24.427223 sshd-session[5911]: pam_unix(sshd:session): session closed for user core
Jul 1 08:46:24.431219 systemd[1]: sshd@24-10.0.0.127:22-10.0.0.1:53226.service: Deactivated successfully.
Jul 1 08:46:24.433043 systemd[1]: session-25.scope: Deactivated successfully.
Jul 1 08:46:24.433856 systemd-logind[1566]: Session 25 logged out. Waiting for processes to exit.
Jul 1 08:46:24.435070 systemd-logind[1566]: Removed session 25.