Jul 1 08:36:56.904126 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Jun 30 19:26:54 -00 2025 Jul 1 08:36:56.904154 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=03b744fdab9d0c2a6ce16909d1444c286b74402b7ab027472687ca33469d417f Jul 1 08:36:56.904168 kernel: BIOS-provided physical RAM map: Jul 1 08:36:56.904177 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 1 08:36:56.904186 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 1 08:36:56.904195 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 1 08:36:56.904205 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jul 1 08:36:56.904214 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 1 08:36:56.904230 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Jul 1 08:36:56.904239 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jul 1 08:36:56.904247 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Jul 1 08:36:56.904256 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jul 1 08:36:56.904265 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jul 1 08:36:56.904274 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jul 1 08:36:56.904288 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jul 1 08:36:56.904295 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 1 08:36:56.904305 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jul 1 08:36:56.904314 kernel: BIOS-e820: 
[mem 0x000000009ce91000-0x000000009ce94fff] reserved Jul 1 08:36:56.904324 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jul 1 08:36:56.904334 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jul 1 08:36:56.904343 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jul 1 08:36:56.904353 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 1 08:36:56.904362 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jul 1 08:36:56.904371 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 1 08:36:56.904381 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jul 1 08:36:56.904394 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 1 08:36:56.904403 kernel: NX (Execute Disable) protection: active Jul 1 08:36:56.904413 kernel: APIC: Static calls initialized Jul 1 08:36:56.904422 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable Jul 1 08:36:56.904432 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable Jul 1 08:36:56.904441 kernel: extended physical RAM map: Jul 1 08:36:56.904451 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 1 08:36:56.904460 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 1 08:36:56.904469 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 1 08:36:56.904478 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jul 1 08:36:56.904488 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 1 08:36:56.904500 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Jul 1 08:36:56.904510 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Jul 1 08:36:56.904519 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable Jul 1 
08:36:56.904645 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable Jul 1 08:36:56.904663 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable Jul 1 08:36:56.904673 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable Jul 1 08:36:56.904685 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable Jul 1 08:36:56.904695 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Jul 1 08:36:56.904705 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Jul 1 08:36:56.904715 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Jul 1 08:36:56.904725 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Jul 1 08:36:56.904735 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 1 08:36:56.904745 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable Jul 1 08:36:56.904755 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved Jul 1 08:36:56.904765 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS Jul 1 08:36:56.904775 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable Jul 1 08:36:56.904788 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Jul 1 08:36:56.904798 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 1 08:36:56.904809 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jul 1 08:36:56.904818 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 1 08:36:56.904828 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Jul 1 08:36:56.904838 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 1 08:36:56.904852 kernel: efi: EFI v2.7 by EDK II Jul 1 
08:36:56.904862 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018 Jul 1 08:36:56.904872 kernel: random: crng init done Jul 1 08:36:56.904886 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Jul 1 08:36:56.904898 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Jul 1 08:36:56.904917 kernel: secureboot: Secure boot disabled Jul 1 08:36:56.904928 kernel: SMBIOS 2.8 present. Jul 1 08:36:56.904941 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Jul 1 08:36:56.904953 kernel: DMI: Memory slots populated: 1/1 Jul 1 08:36:56.904964 kernel: Hypervisor detected: KVM Jul 1 08:36:56.904976 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 1 08:36:56.904989 kernel: kvm-clock: using sched offset of 5133671844 cycles Jul 1 08:36:56.905002 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 1 08:36:56.905015 kernel: tsc: Detected 2794.750 MHz processor Jul 1 08:36:56.905027 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 1 08:36:56.905039 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 1 08:36:56.905055 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Jul 1 08:36:56.905069 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jul 1 08:36:56.905090 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 1 08:36:56.905101 kernel: Using GB pages for direct mapping Jul 1 08:36:56.905112 kernel: ACPI: Early table checksum verification disabled Jul 1 08:36:56.905122 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jul 1 08:36:56.905133 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jul 1 08:36:56.905143 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 1 08:36:56.905153 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 
00000001 BXPC 00000001) Jul 1 08:36:56.905166 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jul 1 08:36:56.905176 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 1 08:36:56.905187 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 1 08:36:56.905197 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 1 08:36:56.905229 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 1 08:36:56.905249 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jul 1 08:36:56.905260 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jul 1 08:36:56.905270 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jul 1 08:36:56.905284 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jul 1 08:36:56.905294 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jul 1 08:36:56.905304 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jul 1 08:36:56.905318 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jul 1 08:36:56.905329 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jul 1 08:36:56.905339 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jul 1 08:36:56.905349 kernel: No NUMA configuration found Jul 1 08:36:56.905359 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Jul 1 08:36:56.905369 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff] Jul 1 08:36:56.905379 kernel: Zone ranges: Jul 1 08:36:56.905394 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 1 08:36:56.905404 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Jul 1 08:36:56.905414 kernel: Normal empty Jul 1 08:36:56.905424 kernel: Device empty Jul 1 08:36:56.905434 kernel: Movable zone start for each node Jul 1 08:36:56.905444 kernel: Early 
memory node ranges Jul 1 08:36:56.905454 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 1 08:36:56.905464 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jul 1 08:36:56.905479 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jul 1 08:36:56.905492 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Jul 1 08:36:56.905503 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Jul 1 08:36:56.905513 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Jul 1 08:36:56.905544 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff] Jul 1 08:36:56.905554 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff] Jul 1 08:36:56.905564 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Jul 1 08:36:56.905574 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 1 08:36:56.905587 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 1 08:36:56.905611 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jul 1 08:36:56.905621 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 1 08:36:56.905631 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Jul 1 08:36:56.905642 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Jul 1 08:36:56.905656 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jul 1 08:36:56.905667 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Jul 1 08:36:56.905677 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Jul 1 08:36:56.905688 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 1 08:36:56.905699 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 1 08:36:56.905713 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 1 08:36:56.905724 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 1 08:36:56.905735 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 1 08:36:56.905746 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 
global_irq 9 high level) Jul 1 08:36:56.905757 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 1 08:36:56.905768 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 1 08:36:56.905779 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 1 08:36:56.905792 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 1 08:36:56.905805 kernel: TSC deadline timer available Jul 1 08:36:56.905822 kernel: CPU topo: Max. logical packages: 1 Jul 1 08:36:56.905835 kernel: CPU topo: Max. logical dies: 1 Jul 1 08:36:56.905849 kernel: CPU topo: Max. dies per package: 1 Jul 1 08:36:56.905862 kernel: CPU topo: Max. threads per core: 1 Jul 1 08:36:56.905875 kernel: CPU topo: Num. cores per package: 4 Jul 1 08:36:56.905888 kernel: CPU topo: Num. threads per package: 4 Jul 1 08:36:56.905901 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Jul 1 08:36:56.905914 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 1 08:36:56.905925 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 1 08:36:56.905939 kernel: kvm-guest: setup PV sched yield Jul 1 08:36:56.905950 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Jul 1 08:36:56.905960 kernel: Booting paravirtualized kernel on KVM Jul 1 08:36:56.905971 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 1 08:36:56.905982 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jul 1 08:36:56.905993 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Jul 1 08:36:56.906004 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Jul 1 08:36:56.906014 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 1 08:36:56.906025 kernel: kvm-guest: PV spinlocks enabled Jul 1 08:36:56.906039 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 1 08:36:56.906051 kernel: Kernel command line: rootflags=rw mount.usrflags=ro 
BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=03b744fdab9d0c2a6ce16909d1444c286b74402b7ab027472687ca33469d417f Jul 1 08:36:56.906067 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 1 08:36:56.906077 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 1 08:36:56.906097 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 1 08:36:56.906108 kernel: Fallback order for Node 0: 0 Jul 1 08:36:56.906118 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450 Jul 1 08:36:56.906129 kernel: Policy zone: DMA32 Jul 1 08:36:56.906140 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 1 08:36:56.906166 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 1 08:36:56.906178 kernel: ftrace: allocating 40095 entries in 157 pages Jul 1 08:36:56.906188 kernel: ftrace: allocated 157 pages with 5 groups Jul 1 08:36:56.906199 kernel: Dynamic Preempt: voluntary Jul 1 08:36:56.906209 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 1 08:36:56.906221 kernel: rcu: RCU event tracing is enabled. Jul 1 08:36:56.906232 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 1 08:36:56.906243 kernel: Trampoline variant of Tasks RCU enabled. Jul 1 08:36:56.906254 kernel: Rude variant of Tasks RCU enabled. Jul 1 08:36:56.906269 kernel: Tracing variant of Tasks RCU enabled. Jul 1 08:36:56.906279 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 1 08:36:56.906294 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 1 08:36:56.906305 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Jul 1 08:36:56.906316 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 1 08:36:56.906326 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 1 08:36:56.906337 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 1 08:36:56.906348 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 1 08:36:56.906359 kernel: Console: colour dummy device 80x25 Jul 1 08:36:56.906373 kernel: printk: legacy console [ttyS0] enabled Jul 1 08:36:56.906384 kernel: ACPI: Core revision 20240827 Jul 1 08:36:56.906394 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 1 08:36:56.906405 kernel: APIC: Switch to symmetric I/O mode setup Jul 1 08:36:56.906416 kernel: x2apic enabled Jul 1 08:36:56.906427 kernel: APIC: Switched APIC routing to: physical x2apic Jul 1 08:36:56.906438 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jul 1 08:36:56.906449 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jul 1 08:36:56.906459 kernel: kvm-guest: setup PV IPIs Jul 1 08:36:56.906473 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 1 08:36:56.906484 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns Jul 1 08:36:56.906495 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Jul 1 08:36:56.906506 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 1 08:36:56.906517 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 1 08:36:56.906544 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 1 08:36:56.906555 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 1 08:36:56.906565 kernel: Spectre V2 : Mitigation: Retpolines Jul 1 08:36:56.906580 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 1 08:36:56.906591 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 1 08:36:56.906601 kernel: RETBleed: Mitigation: untrained return thunk Jul 1 08:36:56.906612 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 1 08:36:56.906626 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 1 08:36:56.906637 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jul 1 08:36:56.906649 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jul 1 08:36:56.906660 kernel: x86/bugs: return thunk changed Jul 1 08:36:56.906670 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jul 1 08:36:56.906684 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 1 08:36:56.906695 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 1 08:36:56.906706 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 1 08:36:56.906716 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 1 08:36:56.906727 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Jul 1 08:36:56.906738 kernel: Freeing SMP alternatives memory: 32K Jul 1 08:36:56.906749 kernel: pid_max: default: 32768 minimum: 301 Jul 1 08:36:56.906759 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 1 08:36:56.906770 kernel: landlock: Up and running. Jul 1 08:36:56.906784 kernel: SELinux: Initializing. Jul 1 08:36:56.906795 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 1 08:36:56.906806 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 1 08:36:56.906817 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 1 08:36:56.906828 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 1 08:36:56.906839 kernel: ... version: 0 Jul 1 08:36:56.906849 kernel: ... bit width: 48 Jul 1 08:36:56.906860 kernel: ... generic registers: 6 Jul 1 08:36:56.906871 kernel: ... value mask: 0000ffffffffffff Jul 1 08:36:56.906885 kernel: ... max period: 00007fffffffffff Jul 1 08:36:56.906896 kernel: ... fixed-purpose events: 0 Jul 1 08:36:56.906907 kernel: ... event mask: 000000000000003f Jul 1 08:36:56.906917 kernel: signal: max sigframe size: 1776 Jul 1 08:36:56.906928 kernel: rcu: Hierarchical SRCU implementation. Jul 1 08:36:56.906939 kernel: rcu: Max phase no-delay instances is 400. Jul 1 08:36:56.906953 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 1 08:36:56.906965 kernel: smp: Bringing up secondary CPUs ... Jul 1 08:36:56.906978 kernel: smpboot: x86: Booting SMP configuration: Jul 1 08:36:56.907007 kernel: .... 
node #0, CPUs: #1 #2 #3 Jul 1 08:36:56.907021 kernel: smp: Brought up 1 node, 4 CPUs Jul 1 08:36:56.907032 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Jul 1 08:36:56.907043 kernel: Memory: 2422668K/2565800K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54508K init, 2460K bss, 137196K reserved, 0K cma-reserved) Jul 1 08:36:56.907053 kernel: devtmpfs: initialized Jul 1 08:36:56.907064 kernel: x86/mm: Memory block size: 128MB Jul 1 08:36:56.907090 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jul 1 08:36:56.907101 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jul 1 08:36:56.907112 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Jul 1 08:36:56.907127 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jul 1 08:36:56.907138 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Jul 1 08:36:56.907149 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jul 1 08:36:56.907160 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 1 08:36:56.907171 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 1 08:36:56.907181 kernel: pinctrl core: initialized pinctrl subsystem Jul 1 08:36:56.907192 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 1 08:36:56.907203 kernel: audit: initializing netlink subsys (disabled) Jul 1 08:36:56.907213 kernel: audit: type=2000 audit(1751359013.990:1): state=initialized audit_enabled=0 res=1 Jul 1 08:36:56.907228 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 1 08:36:56.907237 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 1 08:36:56.907248 kernel: cpuidle: using governor menu Jul 1 08:36:56.907258 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 
0.5 Jul 1 08:36:56.907269 kernel: dca service started, version 1.12.1 Jul 1 08:36:56.907280 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jul 1 08:36:56.907290 kernel: PCI: Using configuration type 1 for base access Jul 1 08:36:56.907302 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 1 08:36:56.907312 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 1 08:36:56.907327 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 1 08:36:56.907338 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 1 08:36:56.907348 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 1 08:36:56.907359 kernel: ACPI: Added _OSI(Module Device) Jul 1 08:36:56.907370 kernel: ACPI: Added _OSI(Processor Device) Jul 1 08:36:56.907380 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 1 08:36:56.907390 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 1 08:36:56.907401 kernel: ACPI: Interpreter enabled Jul 1 08:36:56.907411 kernel: ACPI: PM: (supports S0 S3 S5) Jul 1 08:36:56.907425 kernel: ACPI: Using IOAPIC for interrupt routing Jul 1 08:36:56.907435 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 1 08:36:56.907446 kernel: PCI: Using E820 reservations for host bridge windows Jul 1 08:36:56.907457 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 1 08:36:56.907467 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 1 08:36:56.907748 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 1 08:36:56.907931 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jul 1 08:36:56.908108 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jul 1 08:36:56.908126 kernel: PCI host bridge to bus 0000:00 Jul 1 08:36:56.908306 kernel: pci_bus 0000:00: root bus 
resource [io 0x0000-0x0cf7 window] Jul 1 08:36:56.908450 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 1 08:36:56.908617 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 1 08:36:56.908763 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Jul 1 08:36:56.908914 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Jul 1 08:36:56.909105 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Jul 1 08:36:56.909250 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 1 08:36:56.909439 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jul 1 08:36:56.909666 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Jul 1 08:36:56.909823 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Jul 1 08:36:56.909972 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Jul 1 08:36:56.910131 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Jul 1 08:36:56.910254 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 1 08:36:56.910388 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jul 1 08:36:56.910511 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Jul 1 08:36:56.910700 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Jul 1 08:36:56.910823 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Jul 1 08:36:56.910958 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jul 1 08:36:56.911094 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Jul 1 08:36:56.911217 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Jul 1 08:36:56.911336 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Jul 1 08:36:56.911469 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 
0x020000 conventional PCI endpoint Jul 1 08:36:56.911619 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Jul 1 08:36:56.911743 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Jul 1 08:36:56.911863 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Jul 1 08:36:56.911988 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Jul 1 08:36:56.912135 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jul 1 08:36:56.912257 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 1 08:36:56.912412 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jul 1 08:36:56.912593 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Jul 1 08:36:56.912757 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Jul 1 08:36:56.912947 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jul 1 08:36:56.913131 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Jul 1 08:36:56.913147 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 1 08:36:56.913159 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 1 08:36:56.913169 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 1 08:36:56.913180 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 1 08:36:56.913190 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 1 08:36:56.913201 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 1 08:36:56.913211 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 1 08:36:56.913226 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 1 08:36:56.913236 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jul 1 08:36:56.913247 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jul 1 08:36:56.913258 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jul 1 08:36:56.913268 kernel: ACPI: PCI: Interrupt 
link GSID configured for IRQ 19 Jul 1 08:36:56.913279 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 1 08:36:56.913289 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 1 08:36:56.913300 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 1 08:36:56.913310 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 1 08:36:56.913325 kernel: iommu: Default domain type: Translated Jul 1 08:36:56.913335 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 1 08:36:56.913346 kernel: efivars: Registered efivars operations Jul 1 08:36:56.913357 kernel: PCI: Using ACPI for IRQ routing Jul 1 08:36:56.913368 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 1 08:36:56.913378 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jul 1 08:36:56.913389 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Jul 1 08:36:56.913399 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff] Jul 1 08:36:56.913410 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff] Jul 1 08:36:56.913423 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Jul 1 08:36:56.913434 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Jul 1 08:36:56.913445 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff] Jul 1 08:36:56.913455 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Jul 1 08:36:56.913642 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 1 08:36:56.913771 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 1 08:36:56.913891 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 1 08:36:56.913901 kernel: vgaarb: loaded Jul 1 08:36:56.913914 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 1 08:36:56.913922 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 1 08:36:56.913930 kernel: clocksource: Switched to clocksource kvm-clock Jul 1 08:36:56.913938 kernel: VFS: Disk quotas dquot_6.6.0 Jul 1 
08:36:56.913947 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 1 08:36:56.913955 kernel: pnp: PnP ACPI init Jul 1 08:36:56.914113 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Jul 1 08:36:56.914144 kernel: pnp: PnP ACPI: found 6 devices Jul 1 08:36:56.914157 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 1 08:36:56.914165 kernel: NET: Registered PF_INET protocol family Jul 1 08:36:56.914174 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 1 08:36:56.914182 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 1 08:36:56.914190 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 1 08:36:56.914199 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 1 08:36:56.914207 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 1 08:36:56.914215 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 1 08:36:56.914224 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 1 08:36:56.914234 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 1 08:36:56.914243 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 1 08:36:56.914251 kernel: NET: Registered PF_XDP protocol family Jul 1 08:36:56.914376 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Jul 1 08:36:56.914502 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Jul 1 08:36:56.914683 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 1 08:36:56.914835 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 1 08:36:56.914976 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 1 08:36:56.915132 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] 
Jul 1 08:36:56.915268 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jul 1 08:36:56.915404 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jul 1 08:36:56.915418 kernel: PCI: CLS 0 bytes, default 64
Jul 1 08:36:56.915429 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Jul 1 08:36:56.915441 kernel: Initialise system trusted keyrings
Jul 1 08:36:56.915452 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 1 08:36:56.915463 kernel: Key type asymmetric registered
Jul 1 08:36:56.915478 kernel: Asymmetric key parser 'x509' registered
Jul 1 08:36:56.915489 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 1 08:36:56.915501 kernel: io scheduler mq-deadline registered
Jul 1 08:36:56.915515 kernel: io scheduler kyber registered
Jul 1 08:36:56.915549 kernel: io scheduler bfq registered
Jul 1 08:36:56.915561 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 1 08:36:56.915576 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 1 08:36:56.915588 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 1 08:36:56.915599 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 1 08:36:56.915610 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 1 08:36:56.915622 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 1 08:36:56.915633 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 1 08:36:56.915644 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 1 08:36:56.915655 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 1 08:36:56.915837 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 1 08:36:56.915862 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 1 08:36:56.916015 kernel: rtc_cmos 00:04: registered as rtc0
Jul 1 08:36:56.916165 kernel: rtc_cmos 00:04: setting system clock to 2025-07-01T08:36:56 UTC (1751359016)
Jul 1 08:36:56.916306 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 1 08:36:56.916319 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 1 08:36:56.916327 kernel: efifb: probing for efifb
Jul 1 08:36:56.916335 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jul 1 08:36:56.916343 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jul 1 08:36:56.916356 kernel: efifb: scrolling: redraw
Jul 1 08:36:56.916364 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 1 08:36:56.916372 kernel: Console: switching to colour frame buffer device 160x50
Jul 1 08:36:56.916381 kernel: fb0: EFI VGA frame buffer device
Jul 1 08:36:56.916389 kernel: pstore: Using crash dump compression: deflate
Jul 1 08:36:56.916397 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 1 08:36:56.916405 kernel: NET: Registered PF_INET6 protocol family
Jul 1 08:36:56.916413 kernel: Segment Routing with IPv6
Jul 1 08:36:56.916421 kernel: In-situ OAM (IOAM) with IPv6
Jul 1 08:36:56.916431 kernel: NET: Registered PF_PACKET protocol family
Jul 1 08:36:56.916439 kernel: Key type dns_resolver registered
Jul 1 08:36:56.916447 kernel: IPI shorthand broadcast: enabled
Jul 1 08:36:56.916456 kernel: sched_clock: Marking stable (3764004062, 219351033)->(4058880841, -75525746)
Jul 1 08:36:56.916464 kernel: registered taskstats version 1
Jul 1 08:36:56.916472 kernel: Loading compiled-in X.509 certificates
Jul 1 08:36:56.916481 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: bdab85da21e6e40e781d68d3bf17f0a40ee7357c'
Jul 1 08:36:56.916489 kernel: Demotion targets for Node 0: null
Jul 1 08:36:56.916497 kernel: Key type .fscrypt registered
Jul 1 08:36:56.916507 kernel: Key type fscrypt-provisioning registered
Jul 1 08:36:56.916516 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 1 08:36:56.916547 kernel: ima: Allocated hash algorithm: sha1
Jul 1 08:36:56.916558 kernel: ima: No architecture policies found
Jul 1 08:36:56.916568 kernel: clk: Disabling unused clocks
Jul 1 08:36:56.916576 kernel: Warning: unable to open an initial console.
Jul 1 08:36:56.916584 kernel: Freeing unused kernel image (initmem) memory: 54508K
Jul 1 08:36:56.916592 kernel: Write protecting the kernel read-only data: 24576k
Jul 1 08:36:56.916600 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 1 08:36:56.916612 kernel: Run /init as init process
Jul 1 08:36:56.916620 kernel: with arguments:
Jul 1 08:36:56.916628 kernel: /init
Jul 1 08:36:56.916636 kernel: with environment:
Jul 1 08:36:56.916644 kernel: HOME=/
Jul 1 08:36:56.916652 kernel: TERM=linux
Jul 1 08:36:56.916660 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 1 08:36:56.916669 systemd[1]: Successfully made /usr/ read-only.
Jul 1 08:36:56.916682 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 1 08:36:56.916692 systemd[1]: Detected virtualization kvm.
Jul 1 08:36:56.916700 systemd[1]: Detected architecture x86-64.
Jul 1 08:36:56.916709 systemd[1]: Running in initrd.
Jul 1 08:36:56.916717 systemd[1]: No hostname configured, using default hostname.
Jul 1 08:36:56.916726 systemd[1]: Hostname set to .
Jul 1 08:36:56.916734 systemd[1]: Initializing machine ID from VM UUID.
Jul 1 08:36:56.916743 systemd[1]: Queued start job for default target initrd.target.
Jul 1 08:36:56.916754 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 1 08:36:56.916764 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 1 08:36:56.916774 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 1 08:36:56.916782 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 1 08:36:56.916791 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 1 08:36:56.916801 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 1 08:36:56.916811 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 1 08:36:56.916822 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 1 08:36:56.916830 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 1 08:36:56.916839 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 1 08:36:56.916848 systemd[1]: Reached target paths.target - Path Units.
Jul 1 08:36:56.916856 systemd[1]: Reached target slices.target - Slice Units.
Jul 1 08:36:56.916865 systemd[1]: Reached target swap.target - Swaps.
Jul 1 08:36:56.916875 systemd[1]: Reached target timers.target - Timer Units.
Jul 1 08:36:56.916886 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 1 08:36:56.916899 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 1 08:36:56.916910 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 1 08:36:56.916921 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 1 08:36:56.916932 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 1 08:36:56.916943 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 1 08:36:56.916953 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 1 08:36:56.916964 systemd[1]: Reached target sockets.target - Socket Units.
Jul 1 08:36:56.916975 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 1 08:36:56.916986 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 1 08:36:56.916999 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 1 08:36:56.917010 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 1 08:36:56.917021 systemd[1]: Starting systemd-fsck-usr.service...
Jul 1 08:36:56.917032 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 1 08:36:56.917043 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 1 08:36:56.917054 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 1 08:36:56.917064 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 1 08:36:56.917088 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 1 08:36:56.917099 systemd[1]: Finished systemd-fsck-usr.service.
Jul 1 08:36:56.917136 systemd-journald[218]: Collecting audit messages is disabled.
Jul 1 08:36:56.917159 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 1 08:36:56.917169 systemd-journald[218]: Journal started
Jul 1 08:36:56.917187 systemd-journald[218]: Runtime Journal (/run/log/journal/56b943c499ab4891aea1d916fac591fc) is 6M, max 48.5M, 42.4M free.
Jul 1 08:36:56.918099 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 1 08:36:56.903970 systemd-modules-load[221]: Inserted module 'overlay'
Jul 1 08:36:56.922575 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 1 08:36:56.926866 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 1 08:36:56.932561 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 1 08:36:56.932659 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 1 08:36:56.935952 systemd-modules-load[221]: Inserted module 'br_netfilter'
Jul 1 08:36:56.936978 kernel: Bridge firewalling registered
Jul 1 08:36:56.937116 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 1 08:36:56.937495 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 1 08:36:56.940362 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 1 08:36:56.941650 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 1 08:36:56.964116 systemd-tmpfiles[238]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 1 08:36:56.965908 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 1 08:36:56.968155 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 1 08:36:56.973445 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 1 08:36:56.976865 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 1 08:36:56.979963 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 1 08:36:56.982991 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 1 08:36:57.012282 dracut-cmdline[260]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=03b744fdab9d0c2a6ce16909d1444c286b74402b7ab027472687ca33469d417f
Jul 1 08:36:57.036956 systemd-resolved[261]: Positive Trust Anchors:
Jul 1 08:36:57.036979 systemd-resolved[261]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 1 08:36:57.037015 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 1 08:36:57.040482 systemd-resolved[261]: Defaulting to hostname 'linux'.
Jul 1 08:36:57.042202 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 1 08:36:57.062758 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 1 08:36:57.173603 kernel: SCSI subsystem initialized
Jul 1 08:36:57.184601 kernel: Loading iSCSI transport class v2.0-870.
Jul 1 08:36:57.198574 kernel: iscsi: registered transport (tcp)
Jul 1 08:36:57.224594 kernel: iscsi: registered transport (qla4xxx)
Jul 1 08:36:57.224688 kernel: QLogic iSCSI HBA Driver
Jul 1 08:36:57.251836 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 1 08:36:57.289008 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 1 08:36:57.289495 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 1 08:36:57.361982 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 1 08:36:57.364644 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 1 08:36:57.430590 kernel: raid6: avx2x4 gen() 28719 MB/s
Jul 1 08:36:57.447559 kernel: raid6: avx2x2 gen() 27530 MB/s
Jul 1 08:36:57.464640 kernel: raid6: avx2x1 gen() 24665 MB/s
Jul 1 08:36:57.464669 kernel: raid6: using algorithm avx2x4 gen() 28719 MB/s
Jul 1 08:36:57.482606 kernel: raid6: .... xor() 6883 MB/s, rmw enabled
Jul 1 08:36:57.482632 kernel: raid6: using avx2x2 recovery algorithm
Jul 1 08:36:57.503561 kernel: xor: automatically using best checksumming function avx
Jul 1 08:36:57.683607 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 1 08:36:57.693857 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 1 08:36:57.697116 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 1 08:36:57.728138 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Jul 1 08:36:57.734295 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 1 08:36:57.736905 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 1 08:36:57.763590 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation
Jul 1 08:36:57.793833 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 1 08:36:57.795653 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 1 08:36:57.870818 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 1 08:36:57.875780 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 1 08:36:57.915576 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 1 08:36:57.918146 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 1 08:36:57.925555 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 1 08:36:57.925578 kernel: GPT:9289727 != 19775487
Jul 1 08:36:57.925589 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 1 08:36:57.925600 kernel: GPT:9289727 != 19775487
Jul 1 08:36:57.925610 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 1 08:36:57.925620 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 1 08:36:57.928554 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jul 1 08:36:57.931580 kernel: cryptd: max_cpu_qlen set to 1000
Jul 1 08:36:57.944585 kernel: libata version 3.00 loaded.
Jul 1 08:36:57.952621 kernel: ahci 0000:00:1f.2: version 3.0
Jul 1 08:36:57.954565 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 1 08:36:57.957962 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jul 1 08:36:57.958212 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jul 1 08:36:57.958354 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 1 08:36:57.962561 kernel: scsi host0: ahci
Jul 1 08:36:57.964546 kernel: scsi host1: ahci
Jul 1 08:36:57.964784 kernel: AES CTR mode by8 optimization enabled
Jul 1 08:36:57.964796 kernel: scsi host2: ahci
Jul 1 08:36:57.967566 kernel: scsi host3: ahci
Jul 1 08:36:57.973897 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 1 08:36:57.973973 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 1 08:36:57.980585 kernel: scsi host4: ahci
Jul 1 08:36:57.976878 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 1 08:36:57.984888 kernel: scsi host5: ahci
Jul 1 08:36:57.985115 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0
Jul 1 08:36:57.988341 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0
Jul 1 08:36:57.988371 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0
Jul 1 08:36:57.988383 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0
Jul 1 08:36:57.988394 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0
Jul 1 08:36:57.988404 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0
Jul 1 08:36:57.987420 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 1 08:36:57.995708 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 1 08:36:58.005013 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 1 08:36:58.014733 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 1 08:36:58.032739 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 1 08:36:58.042944 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 1 08:36:58.049877 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 1 08:36:58.064008 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 1 08:36:58.068144 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 1 08:36:58.202441 disk-uuid[634]: Primary Header is updated.
Jul 1 08:36:58.202441 disk-uuid[634]: Secondary Entries is updated.
Jul 1 08:36:58.202441 disk-uuid[634]: Secondary Header is updated.
Jul 1 08:36:58.206561 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 1 08:36:58.210559 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 1 08:36:58.301167 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 1 08:36:58.301250 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 1 08:36:58.301262 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 1 08:36:58.302546 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 1 08:36:58.303635 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 1 08:36:58.303662 kernel: ata3.00: applying bridge limits
Jul 1 08:36:58.304552 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 1 08:36:58.304570 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 1 08:36:58.305558 kernel: ata3.00: configured for UDMA/100
Jul 1 08:36:58.306557 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 1 08:36:58.372089 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 1 08:36:58.372340 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 1 08:36:58.386556 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 1 08:36:58.706695 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 1 08:36:58.708444 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 1 08:36:58.710024 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 1 08:36:58.711203 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 1 08:36:58.714361 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 1 08:36:58.743190 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 1 08:36:59.214095 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 1 08:36:59.214219 disk-uuid[635]: The operation has completed successfully.
Jul 1 08:36:59.255917 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 1 08:36:59.256070 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 1 08:36:59.292744 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 1 08:36:59.320847 sh[665]: Success
Jul 1 08:36:59.338919 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 1 08:36:59.338974 kernel: device-mapper: uevent: version 1.0.3
Jul 1 08:36:59.340045 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 1 08:36:59.349545 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jul 1 08:36:59.385468 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 1 08:36:59.390249 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 1 08:36:59.406941 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 1 08:36:59.411561 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 1 08:36:59.413597 kernel: BTRFS: device fsid aeab36fb-d8a9-440c-a872-a8cce0218739 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (677)
Jul 1 08:36:59.413623 kernel: BTRFS info (device dm-0): first mount of filesystem aeab36fb-d8a9-440c-a872-a8cce0218739
Jul 1 08:36:59.415563 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 1 08:36:59.415588 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 1 08:36:59.420888 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 1 08:36:59.429624 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 1 08:36:59.432292 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 1 08:36:59.435485 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 1 08:36:59.438363 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 1 08:36:59.472569 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (710)
Jul 1 08:36:59.472643 kernel: BTRFS info (device vda6): first mount of filesystem 583bafe8-d373-434e-a8d4-4cb362bb932b
Jul 1 08:36:59.472658 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 1 08:36:59.474096 kernel: BTRFS info (device vda6): using free-space-tree
Jul 1 08:36:59.484556 kernel: BTRFS info (device vda6): last unmount of filesystem 583bafe8-d373-434e-a8d4-4cb362bb932b
Jul 1 08:36:59.485123 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 1 08:36:59.488671 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 1 08:36:59.577342 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 1 08:36:59.582731 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 1 08:36:59.928048 ignition[755]: Ignition 2.21.0
Jul 1 08:36:59.928061 ignition[755]: Stage: fetch-offline
Jul 1 08:36:59.928102 ignition[755]: no configs at "/usr/lib/ignition/base.d"
Jul 1 08:36:59.928111 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 1 08:36:59.928206 ignition[755]: parsed url from cmdline: ""
Jul 1 08:36:59.928211 ignition[755]: no config URL provided
Jul 1 08:36:59.928216 ignition[755]: reading system config file "/usr/lib/ignition/user.ign"
Jul 1 08:36:59.928225 ignition[755]: no config at "/usr/lib/ignition/user.ign"
Jul 1 08:36:59.928248 ignition[755]: op(1): [started] loading QEMU firmware config module
Jul 1 08:36:59.928257 ignition[755]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 1 08:36:59.938977 ignition[755]: op(1): [finished] loading QEMU firmware config module
Jul 1 08:36:59.978024 systemd-networkd[846]: lo: Link UP
Jul 1 08:36:59.978037 systemd-networkd[846]: lo: Gained carrier
Jul 1 08:36:59.980111 systemd-networkd[846]: Enumeration completed
Jul 1 08:36:59.980255 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 1 08:36:59.980616 systemd-networkd[846]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 1 08:36:59.980622 systemd-networkd[846]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 1 08:36:59.982513 systemd-networkd[846]: eth0: Link UP
Jul 1 08:36:59.982518 systemd-networkd[846]: eth0: Gained carrier
Jul 1 08:36:59.982543 systemd-networkd[846]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 1 08:36:59.983804 systemd[1]: Reached target network.target - Network.
Jul 1 08:36:59.995587 systemd-networkd[846]: eth0: DHCPv4 address 10.0.0.80/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 1 08:37:00.006079 ignition[755]: parsing config with SHA512: f2589cddec8b85899582c4d72245898159d07f547bd65de0b996538804100362e8f9e8f66b181eff68af668530a57f017f3586a96cf2cc4958ab3d36176455e7
Jul 1 08:37:00.015076 unknown[755]: fetched base config from "system"
Jul 1 08:37:00.015088 unknown[755]: fetched user config from "qemu"
Jul 1 08:37:00.015445 ignition[755]: fetch-offline: fetch-offline passed
Jul 1 08:37:00.015514 ignition[755]: Ignition finished successfully
Jul 1 08:37:00.019318 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 1 08:37:00.021138 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 1 08:37:00.022293 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 1 08:37:00.077650 ignition[860]: Ignition 2.21.0
Jul 1 08:37:00.077666 ignition[860]: Stage: kargs
Jul 1 08:37:00.077931 ignition[860]: no configs at "/usr/lib/ignition/base.d"
Jul 1 08:37:00.077951 ignition[860]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 1 08:37:00.083479 ignition[860]: kargs: kargs passed
Jul 1 08:37:00.083606 ignition[860]: Ignition finished successfully
Jul 1 08:37:00.089392 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 1 08:37:00.092472 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 1 08:37:00.156999 ignition[868]: Ignition 2.21.0
Jul 1 08:37:00.157022 ignition[868]: Stage: disks
Jul 1 08:37:00.157375 ignition[868]: no configs at "/usr/lib/ignition/base.d"
Jul 1 08:37:00.157967 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 1 08:37:00.158966 ignition[868]: disks: disks passed
Jul 1 08:37:00.159035 ignition[868]: Ignition finished successfully
Jul 1 08:37:00.166000 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 1 08:37:00.168514 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 1 08:37:00.168638 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 1 08:37:00.171215 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 1 08:37:00.175226 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 1 08:37:00.178822 systemd[1]: Reached target basic.target - Basic System.
Jul 1 08:37:00.182439 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 1 08:37:00.230712 systemd-fsck[878]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 1 08:37:00.238889 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 1 08:37:00.240098 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 1 08:37:00.408583 kernel: EXT4-fs (vda9): mounted filesystem 18421243-07cc-41b2-b496-d6a2cef84352 r/w with ordered data mode. Quota mode: none.
Jul 1 08:37:00.409609 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 1 08:37:00.411840 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 1 08:37:00.415302 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 1 08:37:00.417911 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 1 08:37:00.419902 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 1 08:37:00.419952 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 1 08:37:00.419978 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 1 08:37:00.433936 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 1 08:37:00.435575 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 1 08:37:00.441568 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (886)
Jul 1 08:37:00.441637 kernel: BTRFS info (device vda6): first mount of filesystem 583bafe8-d373-434e-a8d4-4cb362bb932b
Jul 1 08:37:00.443237 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 1 08:37:00.443254 kernel: BTRFS info (device vda6): using free-space-tree
Jul 1 08:37:00.448287 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 1 08:37:00.506681 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory
Jul 1 08:37:00.513855 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory
Jul 1 08:37:00.520667 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory
Jul 1 08:37:00.526840 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 1 08:37:00.651342 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 1 08:37:00.654202 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 1 08:37:00.655219 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 1 08:37:00.676610 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 1 08:37:00.682043 kernel: BTRFS info (device vda6): last unmount of filesystem 583bafe8-d373-434e-a8d4-4cb362bb932b
Jul 1 08:37:00.697763 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 1 08:37:00.718356 ignition[1000]: INFO : Ignition 2.21.0
Jul 1 08:37:00.718356 ignition[1000]: INFO : Stage: mount
Jul 1 08:37:00.722102 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 1 08:37:00.722102 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 1 08:37:00.725723 ignition[1000]: INFO : mount: mount passed
Jul 1 08:37:00.726519 ignition[1000]: INFO : Ignition finished successfully
Jul 1 08:37:00.728846 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 1 08:37:00.732183 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 1 08:37:00.760277 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 1 08:37:00.788553 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1012)
Jul 1 08:37:00.791124 kernel: BTRFS info (device vda6): first mount of filesystem 583bafe8-d373-434e-a8d4-4cb362bb932b
Jul 1 08:37:00.791171 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 1 08:37:00.791182 kernel: BTRFS info (device vda6): using free-space-tree
Jul 1 08:37:00.795794 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 1 08:37:00.842405 ignition[1029]: INFO : Ignition 2.21.0
Jul 1 08:37:00.842405 ignition[1029]: INFO : Stage: files
Jul 1 08:37:00.844284 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 1 08:37:00.844284 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 1 08:37:00.846755 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping
Jul 1 08:37:00.847952 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 1 08:37:00.847952 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 1 08:37:00.851081 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 1 08:37:00.851081 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 1 08:37:00.851081 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 1 08:37:00.850442 unknown[1029]: wrote ssh authorized keys file for user: core
Jul 1 08:37:00.856593 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 1 08:37:00.856593 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 1 08:37:00.901380 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 1 08:37:01.086238 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 1 08:37:01.086238 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 1 08:37:01.090053 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 1 08:37:01.090053 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 1 08:37:01.090053 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 1 08:37:01.090053 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 1 08:37:01.097109 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 1 08:37:01.097109 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 1 08:37:01.100598 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 1 08:37:01.231365 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 1 08:37:01.233422 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 1 08:37:01.235199 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 1 08:37:01.461895 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 1 08:37:01.461895 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 1 08:37:01.477650 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 1 08:37:01.742676 systemd-networkd[846]: eth0: Gained IPv6LL
Jul 1 08:37:02.029361 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 1 08:37:02.518558 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 1 08:37:02.518558 ignition[1029]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 1 08:37:02.522547 ignition[1029]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 1 08:37:02.634225 ignition[1029]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 1 08:37:02.634225 ignition[1029]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 1 08:37:02.634225 ignition[1029]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 1 08:37:02.639510 ignition[1029]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 1 08:37:02.639510 ignition[1029]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 1 08:37:02.639510 ignition[1029]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 1 08:37:02.639510 ignition[1029]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 1 08:37:02.672551 ignition[1029]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 1 08:37:02.680220 ignition[1029]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 1 08:37:02.682124 ignition[1029]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 1 08:37:02.682124 ignition[1029]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 1 08:37:02.682124 ignition[1029]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 1 08:37:02.682124 ignition[1029]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 1 08:37:02.682124 ignition[1029]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 1 08:37:02.682124 ignition[1029]: INFO : files: files passed
Jul 1 08:37:02.682124 ignition[1029]: INFO : Ignition finished successfully
Jul 1 08:37:02.695045 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 1 08:37:02.697684 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 1 08:37:02.700215 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 1 08:37:02.739241 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 1 08:37:02.739389 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 1 08:37:02.741811 initrd-setup-root-after-ignition[1058]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 1 08:37:02.745727 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 1 08:37:02.745727 initrd-setup-root-after-ignition[1060]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 1 08:37:02.748863 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 1 08:37:02.752257 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 1 08:37:02.752624 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 1 08:37:02.757401 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 1 08:37:02.852623 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 1 08:37:02.852778 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 1 08:37:02.855458 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 1 08:37:02.857817 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 1 08:37:02.858123 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 1 08:37:02.862735 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 1 08:37:02.906321 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 1 08:37:02.909891 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 1 08:37:02.945494 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 1 08:37:02.945728 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 1 08:37:02.948230 systemd[1]: Stopped target timers.target - Timer Units.
Jul 1 08:37:02.952428 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 1 08:37:02.952615 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 1 08:37:02.956354 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 1 08:37:02.956551 systemd[1]: Stopped target basic.target - Basic System.
Jul 1 08:37:02.958587 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 1 08:37:02.961199 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 1 08:37:02.962290 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 1 08:37:02.962629 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 1 08:37:02.963098 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 1 08:37:02.963421 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 1 08:37:02.963922 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 1 08:37:02.964229 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 1 08:37:02.964568 systemd[1]: Stopped target swap.target - Swaps.
Jul 1 08:37:02.964998 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 1 08:37:02.965141 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 1 08:37:02.977785 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 1 08:37:02.978165 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 1 08:37:02.978436 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 1 08:37:02.983786 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 1 08:37:02.984088 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 1 08:37:02.984243 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 1 08:37:02.988929 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 1 08:37:02.989050 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 1 08:37:02.991139 systemd[1]: Stopped target paths.target - Path Units.
Jul 1 08:37:02.992124 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 1 08:37:02.997646 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 1 08:37:02.999025 systemd[1]: Stopped target slices.target - Slice Units.
Jul 1 08:37:03.001348 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 1 08:37:03.003954 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 1 08:37:03.004078 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 1 08:37:03.005857 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 1 08:37:03.005953 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 1 08:37:03.006817 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 1 08:37:03.006948 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 1 08:37:03.009664 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 1 08:37:03.009774 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 1 08:37:03.011592 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 1 08:37:03.013268 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 1 08:37:03.013387 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 1 08:37:03.016632 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 1 08:37:03.017605 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 1 08:37:03.017776 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 1 08:37:03.020473 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 1 08:37:03.020648 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 1 08:37:03.028405 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 1 08:37:03.046860 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 1 08:37:03.073683 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 1 08:37:03.102832 ignition[1084]: INFO : Ignition 2.21.0
Jul 1 08:37:03.102832 ignition[1084]: INFO : Stage: umount
Jul 1 08:37:03.104806 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 1 08:37:03.104806 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 1 08:37:03.104806 ignition[1084]: INFO : umount: umount passed
Jul 1 08:37:03.104806 ignition[1084]: INFO : Ignition finished successfully
Jul 1 08:37:03.109695 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 1 08:37:03.109834 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 1 08:37:03.111961 systemd[1]: Stopped target network.target - Network.
Jul 1 08:37:03.113651 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 1 08:37:03.113710 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 1 08:37:03.115651 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 1 08:37:03.115710 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 1 08:37:03.116585 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 1 08:37:03.116645 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 1 08:37:03.117105 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 1 08:37:03.117150 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 1 08:37:03.117570 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 1 08:37:03.118108 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 1 08:37:03.133570 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 1 08:37:03.133742 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 1 08:37:03.138389 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 1 08:37:03.138766 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 1 08:37:03.138815 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 1 08:37:03.144387 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 1 08:37:03.144734 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 1 08:37:03.144897 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 1 08:37:03.148898 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 1 08:37:03.149514 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 1 08:37:03.151304 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 1 08:37:03.151359 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 1 08:37:03.155445 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 1 08:37:03.155584 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 1 08:37:03.155653 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 1 08:37:03.159374 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 1 08:37:03.159438 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 1 08:37:03.162585 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 1 08:37:03.162657 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 1 08:37:03.163605 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 1 08:37:03.166007 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 1 08:37:03.190353 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 1 08:37:03.200712 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 1 08:37:03.201152 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 1 08:37:03.201199 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 1 08:37:03.204231 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 1 08:37:03.204268 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 1 08:37:03.206385 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 1 08:37:03.206438 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 1 08:37:03.209419 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 1 08:37:03.209471 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 1 08:37:03.211142 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 1 08:37:03.211193 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 1 08:37:03.217498 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 1 08:37:03.219814 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 1 08:37:03.219866 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 1 08:37:03.223236 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 1 08:37:03.223286 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 1 08:37:03.226992 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 1 08:37:03.227044 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 1 08:37:03.231418 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 1 08:37:03.234717 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 1 08:37:03.242834 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 1 08:37:03.242969 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 1 08:37:03.648733 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 1 08:37:03.648881 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 1 08:37:03.651102 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 1 08:37:03.651753 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 1 08:37:03.651822 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 1 08:37:03.655716 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 1 08:37:03.679709 systemd[1]: Switching root.
Jul 1 08:37:03.755617 systemd-journald[218]: Journal stopped
Jul 1 08:37:05.929363 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
Jul 1 08:37:05.929445 kernel: SELinux: policy capability network_peer_controls=1
Jul 1 08:37:05.929459 kernel: SELinux: policy capability open_perms=1
Jul 1 08:37:05.929471 kernel: SELinux: policy capability extended_socket_class=1
Jul 1 08:37:05.929482 kernel: SELinux: policy capability always_check_network=0
Jul 1 08:37:05.929498 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 1 08:37:05.929510 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 1 08:37:05.929542 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 1 08:37:05.929564 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 1 08:37:05.929576 kernel: SELinux: policy capability userspace_initial_context=0
Jul 1 08:37:05.929588 kernel: audit: type=1403 audit(1751359024.761:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 1 08:37:05.929601 systemd[1]: Successfully loaded SELinux policy in 76.082ms.
Jul 1 08:37:05.929630 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.164ms.
Jul 1 08:37:05.929644 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 1 08:37:05.929657 systemd[1]: Detected virtualization kvm.
Jul 1 08:37:05.929669 systemd[1]: Detected architecture x86-64.
Jul 1 08:37:05.929681 systemd[1]: Detected first boot.
Jul 1 08:37:05.929700 systemd[1]: Initializing machine ID from VM UUID.
Jul 1 08:37:05.929712 zram_generator::config[1131]: No configuration found.
Jul 1 08:37:05.929726 kernel: Guest personality initialized and is inactive
Jul 1 08:37:05.929738 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 1 08:37:05.929750 kernel: Initialized host personality
Jul 1 08:37:05.929761 kernel: NET: Registered PF_VSOCK protocol family
Jul 1 08:37:05.929773 systemd[1]: Populated /etc with preset unit settings.
Jul 1 08:37:05.929786 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 1 08:37:05.929804 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 1 08:37:05.929816 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 1 08:37:05.929835 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 1 08:37:05.929848 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 1 08:37:05.929860 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 1 08:37:05.929874 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 1 08:37:05.929886 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 1 08:37:05.929899 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 1 08:37:05.929915 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 1 08:37:05.929935 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 1 08:37:05.929948 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 1 08:37:05.929960 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 1 08:37:05.929973 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 1 08:37:05.929985 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 1 08:37:05.929997 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 1 08:37:05.930010 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 1 08:37:05.930028 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 1 08:37:05.930041 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 1 08:37:05.930053 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 1 08:37:05.930066 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 1 08:37:05.930078 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 1 08:37:05.930091 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 1 08:37:05.930104 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 1 08:37:05.930116 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 1 08:37:05.930128 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 1 08:37:05.930146 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 1 08:37:05.930158 systemd[1]: Reached target slices.target - Slice Units.
Jul 1 08:37:05.930170 systemd[1]: Reached target swap.target - Swaps.
Jul 1 08:37:05.930182 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 1 08:37:05.930196 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 1 08:37:05.930208 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 1 08:37:05.930220 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 1 08:37:05.930233 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 1 08:37:05.930245 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 1 08:37:05.930257 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 1 08:37:05.930275 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 1 08:37:05.930287 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 1 08:37:05.930300 systemd[1]: Mounting media.mount - External Media Directory...
Jul 1 08:37:05.930312 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 1 08:37:05.930325 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 1 08:37:05.930337 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 1 08:37:05.930349 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 1 08:37:05.930362 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 1 08:37:05.930379 systemd[1]: Reached target machines.target - Containers.
Jul 1 08:37:05.930392 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 1 08:37:05.930404 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 1 08:37:05.930417 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 1 08:37:05.930429 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 1 08:37:05.930441 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 1 08:37:05.930453 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 1 08:37:05.930467 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 1 08:37:05.930484 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 1 08:37:05.930496 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 1 08:37:05.930509 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 1 08:37:05.930546 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 1 08:37:05.930559 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 1 08:37:05.930571 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 1 08:37:05.930584 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 1 08:37:05.930597 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 1 08:37:05.930609 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 1 08:37:05.930627 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 1 08:37:05.930640 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 1 08:37:05.930651 kernel: loop: module loaded
Jul 1 08:37:05.930663 kernel: fuse: init (API version 7.41)
Jul 1 08:37:05.930674 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 1 08:37:05.930687 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 1 08:37:05.930700 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 1 08:37:05.930717 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 1 08:37:05.930729 systemd[1]: Stopped verity-setup.service.
Jul 1 08:37:05.930741 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 1 08:37:05.930754 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 1 08:37:05.930772 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 1 08:37:05.930785 systemd[1]: Mounted media.mount - External Media Directory.
Jul 1 08:37:05.930807 kernel: ACPI: bus type drm_connector registered
Jul 1 08:37:05.930823 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 1 08:37:05.930847 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 1 08:37:05.930893 systemd-journald[1206]: Collecting audit messages is disabled.
Jul 1 08:37:05.930928 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 1 08:37:05.930948 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 1 08:37:05.930961 systemd-journald[1206]: Journal started
Jul 1 08:37:05.930984 systemd-journald[1206]: Runtime Journal (/run/log/journal/56b943c499ab4891aea1d916fac591fc) is 6M, max 48.5M, 42.4M free.
Jul 1 08:37:05.654628 systemd[1]: Queued start job for default target multi-user.target.
Jul 1 08:37:05.681788 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 1 08:37:05.682354 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 1 08:37:05.933593 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 1 08:37:05.935631 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 1 08:37:05.937487 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 1 08:37:05.937763 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 1 08:37:05.939320 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 1 08:37:05.939603 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 1 08:37:05.941143 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 1 08:37:05.941370 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 1 08:37:05.942754 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 1 08:37:05.942987 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 1 08:37:05.944555 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 1 08:37:05.944780 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 1 08:37:05.946233 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 1 08:37:05.946456 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 1 08:37:05.948109 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 1 08:37:05.949854 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 1 08:37:05.951746 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 1 08:37:05.953549 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 1 08:37:05.971433 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 1 08:37:05.974787 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 1 08:37:05.977335 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 1 08:37:05.978695 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 1 08:37:05.978808 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 1 08:37:05.981667 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 1 08:37:05.992206 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 1 08:37:05.999625 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 1 08:37:06.004283 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 1 08:37:06.011670 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 1 08:37:06.013107 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 1 08:37:06.017862 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 1 08:37:06.019466 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 1 08:37:06.021642 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 1 08:37:06.025705 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 1 08:37:06.027680 systemd-journald[1206]: Time spent on flushing to /var/log/journal/56b943c499ab4891aea1d916fac591fc is 33.932ms for 1058 entries. Jul 1 08:37:06.027680 systemd-journald[1206]: System Journal (/var/log/journal/56b943c499ab4891aea1d916fac591fc) is 8M, max 195.6M, 187.6M free. 
Jul 1 08:37:06.107368 systemd-journald[1206]: Received client request to flush runtime journal. Jul 1 08:37:06.107467 kernel: loop0: detected capacity change from 0 to 146336 Jul 1 08:37:06.029787 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 1 08:37:06.045298 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 1 08:37:06.047101 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 1 08:37:06.049763 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 1 08:37:06.090188 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 1 08:37:06.108337 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 1 08:37:06.110390 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 1 08:37:06.131617 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 1 08:37:06.134625 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 1 08:37:06.158572 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 1 08:37:06.232563 kernel: loop1: detected capacity change from 0 to 221472 Jul 1 08:37:06.263590 kernel: loop2: detected capacity change from 0 to 114000 Jul 1 08:37:06.283211 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 1 08:37:06.287707 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 1 08:37:06.289663 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 1 08:37:06.436238 kernel: loop3: detected capacity change from 0 to 146336 Jul 1 08:37:06.441015 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Jul 1 08:37:06.441034 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. 
Jul 1 08:37:06.448917 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 1 08:37:06.452576 kernel: loop4: detected capacity change from 0 to 221472 Jul 1 08:37:06.464556 kernel: loop5: detected capacity change from 0 to 114000 Jul 1 08:37:06.473275 (sd-merge)[1271]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 1 08:37:06.474333 (sd-merge)[1271]: Merged extensions into '/usr'. Jul 1 08:37:06.480329 systemd[1]: Reload requested from client PID 1250 ('systemd-sysext') (unit systemd-sysext.service)... Jul 1 08:37:06.480437 systemd[1]: Reloading... Jul 1 08:37:06.566604 zram_generator::config[1298]: No configuration found. Jul 1 08:37:06.725881 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 1 08:37:06.817347 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 1 08:37:06.818090 systemd[1]: Reloading finished in 337 ms. Jul 1 08:37:06.840387 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 1 08:37:06.841757 ldconfig[1245]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 1 08:37:06.853240 systemd[1]: Starting ensure-sysext.service... Jul 1 08:37:06.855713 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 1 08:37:06.871491 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 1 08:37:06.889518 systemd[1]: Reload requested from client PID 1334 ('systemctl') (unit ensure-sysext.service)... Jul 1 08:37:06.889557 systemd[1]: Reloading... Jul 1 08:37:06.904851 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Jul 1 08:37:06.904910 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 1 08:37:06.905273 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 1 08:37:06.905896 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 1 08:37:06.907185 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 1 08:37:06.907712 systemd-tmpfiles[1335]: ACLs are not supported, ignoring. Jul 1 08:37:06.907919 systemd-tmpfiles[1335]: ACLs are not supported, ignoring. Jul 1 08:37:06.913998 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot. Jul 1 08:37:06.914020 systemd-tmpfiles[1335]: Skipping /boot Jul 1 08:37:06.930946 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot. Jul 1 08:37:06.931138 systemd-tmpfiles[1335]: Skipping /boot Jul 1 08:37:06.950561 zram_generator::config[1362]: No configuration found. Jul 1 08:37:07.067473 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 1 08:37:07.163130 systemd[1]: Reloading finished in 273 ms. Jul 1 08:37:07.202457 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 1 08:37:07.204340 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 1 08:37:07.213386 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 1 08:37:07.216309 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 1 08:37:07.234883 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Jul 1 08:37:07.239100 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 1 08:37:07.243846 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 1 08:37:07.247701 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 1 08:37:07.265700 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 1 08:37:07.265897 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 1 08:37:07.304146 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 1 08:37:07.309835 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 1 08:37:07.314844 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 1 08:37:07.320400 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 1 08:37:07.320647 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 1 08:37:07.327613 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 1 08:37:07.328885 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 1 08:37:07.330664 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 1 08:37:07.333336 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 1 08:37:07.334272 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 1 08:37:07.336498 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jul 1 08:37:07.337374 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 1 08:37:07.339613 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 1 08:37:07.339877 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 1 08:37:07.348536 systemd-udevd[1407]: Using default interface naming scheme 'v255'. Jul 1 08:37:07.362086 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 1 08:37:07.366639 augenrules[1435]: No rules Jul 1 08:37:07.368388 systemd[1]: audit-rules.service: Deactivated successfully. Jul 1 08:37:07.368736 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 1 08:37:07.385695 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 1 08:37:07.387689 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 1 08:37:07.389589 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 1 08:37:07.392254 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 1 08:37:07.395540 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 1 08:37:07.405969 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 1 08:37:07.411958 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 1 08:37:07.413305 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 1 08:37:07.413458 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 1 08:37:07.415174 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jul 1 08:37:07.416387 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 1 08:37:07.418419 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 1 08:37:07.420089 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 1 08:37:07.422453 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 1 08:37:07.424419 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 1 08:37:07.424741 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 1 08:37:07.434149 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 1 08:37:07.434450 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 1 08:37:07.436635 systemd[1]: Finished ensure-sysext.service. Jul 1 08:37:07.438349 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 1 08:37:07.438656 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 1 08:37:07.441142 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 1 08:37:07.441466 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 1 08:37:07.461944 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 1 08:37:07.463640 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 1 08:37:07.463743 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 1 08:37:07.464442 augenrules[1442]: /sbin/augenrules: No change Jul 1 08:37:07.467862 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Jul 1 08:37:07.469503 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 1 08:37:07.470139 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 1 08:37:07.488956 augenrules[1504]: No rules Jul 1 08:37:07.493136 systemd[1]: audit-rules.service: Deactivated successfully. Jul 1 08:37:07.493517 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 1 08:37:07.541181 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 1 08:37:07.579217 systemd-resolved[1405]: Positive Trust Anchors: Jul 1 08:37:07.579234 systemd-resolved[1405]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 1 08:37:07.579277 systemd-resolved[1405]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 1 08:37:07.584713 systemd-resolved[1405]: Defaulting to hostname 'linux'. Jul 1 08:37:07.586860 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 1 08:37:07.588719 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jul 1 08:37:07.602562 kernel: mousedev: PS/2 mouse device common for all mice Jul 1 08:37:07.646768 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 1 08:37:07.663559 kernel: ACPI: button: Power Button [PWRF] Jul 1 08:37:07.665078 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 1 08:37:07.668408 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 1 08:37:07.700386 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jul 1 08:37:07.700798 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 1 08:37:07.703185 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 1 08:37:07.718089 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 1 08:37:07.723464 systemd-networkd[1484]: lo: Link UP Jul 1 08:37:07.725562 systemd-networkd[1484]: lo: Gained carrier Jul 1 08:37:07.727914 systemd-networkd[1484]: Enumeration completed Jul 1 08:37:07.728136 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 1 08:37:07.729647 systemd[1]: Reached target network.target - Network. Jul 1 08:37:07.730769 systemd-networkd[1484]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 1 08:37:07.730794 systemd-networkd[1484]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 1 08:37:07.733728 systemd-networkd[1484]: eth0: Link UP Jul 1 08:37:07.734019 systemd-networkd[1484]: eth0: Gained carrier Jul 1 08:37:07.734047 systemd-networkd[1484]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 1 08:37:07.735837 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Jul 1 08:37:07.738919 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 1 08:37:07.746612 systemd-networkd[1484]: eth0: DHCPv4 address 10.0.0.80/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 1 08:37:07.767013 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 1 08:37:09.020771 systemd-timesyncd[1492]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 1 08:37:09.020835 systemd-timesyncd[1492]: Initial clock synchronization to Tue 2025-07-01 08:37:09.020062 UTC. Jul 1 08:37:09.021330 systemd[1]: Reached target sysinit.target - System Initialization. Jul 1 08:37:09.022745 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 1 08:37:09.024572 systemd-resolved[1405]: Clock change detected. Flushing caches. Jul 1 08:37:09.025906 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 1 08:37:09.027404 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 1 08:37:09.028955 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 1 08:37:09.030623 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 1 08:37:09.030658 systemd[1]: Reached target paths.target - Path Units. Jul 1 08:37:09.031799 systemd[1]: Reached target time-set.target - System Time Set. Jul 1 08:37:09.035133 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 1 08:37:09.036713 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 1 08:37:09.038244 systemd[1]: Reached target timers.target - Timer Units. Jul 1 08:37:09.040455 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 1 08:37:09.045211 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Jul 1 08:37:09.051750 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 1 08:37:09.053487 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 1 08:37:09.055164 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 1 08:37:09.120450 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 1 08:37:09.122603 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 1 08:37:09.126059 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 1 08:37:09.128069 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 1 08:37:09.152149 kernel: kvm_amd: TSC scaling supported Jul 1 08:37:09.152226 kernel: kvm_amd: Nested Virtualization enabled Jul 1 08:37:09.152273 kernel: kvm_amd: Nested Paging enabled Jul 1 08:37:09.153216 kernel: kvm_amd: LBR virtualization supported Jul 1 08:37:09.154802 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 1 08:37:09.154839 kernel: kvm_amd: Virtual GIF supported Jul 1 08:37:09.156413 systemd[1]: Reached target sockets.target - Socket Units. Jul 1 08:37:09.157846 systemd[1]: Reached target basic.target - Basic System. Jul 1 08:37:09.159266 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 1 08:37:09.159336 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 1 08:37:09.162834 systemd[1]: Starting containerd.service - containerd container runtime... Jul 1 08:37:09.168956 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 1 08:37:09.171454 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 1 08:37:09.180396 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Jul 1 08:37:09.183718 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 1 08:37:09.185010 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 1 08:37:09.188597 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 1 08:37:09.191483 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 1 08:37:09.196823 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 1 08:37:09.199985 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 1 08:37:09.204760 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 1 08:37:09.207502 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Refreshing passwd entry cache Jul 1 08:37:09.207515 oslogin_cache_refresh[1552]: Refreshing passwd entry cache Jul 1 08:37:09.210290 jq[1550]: false Jul 1 08:37:09.216511 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Failure getting users, quitting Jul 1 08:37:09.216591 oslogin_cache_refresh[1552]: Failure getting users, quitting Jul 1 08:37:09.216726 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 1 08:37:09.216789 oslogin_cache_refresh[1552]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 1 08:37:09.217060 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Refreshing group entry cache Jul 1 08:37:09.217109 oslogin_cache_refresh[1552]: Refreshing group entry cache Jul 1 08:37:09.219946 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 1 08:37:09.223990 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 1 08:37:09.227318 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Failure getting groups, quitting Jul 1 08:37:09.227318 google_oslogin_nss_cache[1552]: oslogin_cache_refresh[1552]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 1 08:37:09.225988 oslogin_cache_refresh[1552]: Failure getting groups, quitting Jul 1 08:37:09.226007 oslogin_cache_refresh[1552]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 1 08:37:09.227710 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 1 08:37:09.228492 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 1 08:37:09.229824 extend-filesystems[1551]: Found /dev/vda6 Jul 1 08:37:09.232105 kernel: EDAC MC: Ver: 3.0.0 Jul 1 08:37:09.231639 systemd[1]: Starting update-engine.service - Update Engine... Jul 1 08:37:09.237139 extend-filesystems[1551]: Found /dev/vda9 Jul 1 08:37:09.272640 extend-filesystems[1551]: Checking size of /dev/vda9 Jul 1 08:37:09.246807 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 1 08:37:09.307220 extend-filesystems[1551]: Resized partition /dev/vda9 Jul 1 08:37:09.277499 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 1 08:37:09.308525 jq[1574]: true Jul 1 08:37:09.282441 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 1 08:37:09.308835 extend-filesystems[1590]: resize2fs 1.47.2 (1-Jan-2025) Jul 1 08:37:09.310613 update_engine[1564]: I20250701 08:37:09.298525 1564 main.cc:92] Flatcar Update Engine starting Jul 1 08:37:09.283095 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 1 08:37:09.283552 systemd[1]: google-oslogin-cache.service: Deactivated successfully. 
Jul 1 08:37:09.284087 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 1 08:37:09.287269 systemd[1]: motdgen.service: Deactivated successfully. Jul 1 08:37:09.287592 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 1 08:37:09.291449 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 1 08:37:09.291802 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 1 08:37:09.322748 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 1 08:37:09.324515 jq[1580]: true Jul 1 08:37:09.343855 tar[1579]: linux-amd64/helm Jul 1 08:37:09.350504 (ntainerd)[1595]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 1 08:37:09.370713 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 1 08:37:09.372221 dbus-daemon[1548]: [system] SELinux support is enabled Jul 1 08:37:09.424596 update_engine[1564]: I20250701 08:37:09.398892 1564 update_check_scheduler.cc:74] Next update check in 2m23s Jul 1 08:37:09.372430 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 1 08:37:09.376536 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 1 08:37:09.376572 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 1 08:37:09.378387 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 1 08:37:09.378409 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 1 08:37:09.403064 systemd[1]: Started update-engine.service - Update Engine. 
Jul 1 08:37:09.409910 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 1 08:37:09.425654 systemd-logind[1560]: Watching system buttons on /dev/input/event2 (Power Button) Jul 1 08:37:09.425737 systemd-logind[1560]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 1 08:37:09.430259 extend-filesystems[1590]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 1 08:37:09.430259 extend-filesystems[1590]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 1 08:37:09.430259 extend-filesystems[1590]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 1 08:37:09.429752 systemd-logind[1560]: New seat seat0. Jul 1 08:37:09.437535 extend-filesystems[1551]: Resized filesystem in /dev/vda9 Jul 1 08:37:09.432655 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 1 08:37:09.434809 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 1 08:37:09.450100 systemd[1]: Started systemd-logind.service - User Login Management. Jul 1 08:37:09.451934 bash[1617]: Updated "/home/core/.ssh/authorized_keys" Jul 1 08:37:09.452748 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 1 08:37:09.470925 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 1 08:37:09.476810 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 1 08:37:09.557216 locksmithd[1616]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 1 08:37:09.664261 sshd_keygen[1575]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 1 08:37:09.716834 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 1 08:37:09.721302 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 1 08:37:09.746308 systemd[1]: issuegen.service: Deactivated successfully. Jul 1 08:37:09.746714 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Jul 1 08:37:09.775713 containerd[1595]: time="2025-07-01T08:37:09Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 1 08:37:09.775713 containerd[1595]: time="2025-07-01T08:37:09.774144955Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 1 08:37:09.773144 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 1 08:37:09.790061 containerd[1595]: time="2025-07-01T08:37:09.789987339Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="19.387µs" Jul 1 08:37:09.790061 containerd[1595]: time="2025-07-01T08:37:09.790043976Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 1 08:37:09.790061 containerd[1595]: time="2025-07-01T08:37:09.790069764Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 1 08:37:09.790363 containerd[1595]: time="2025-07-01T08:37:09.790332617Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 1 08:37:09.790405 containerd[1595]: time="2025-07-01T08:37:09.790363765Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 1 08:37:09.790405 containerd[1595]: time="2025-07-01T08:37:09.790399312Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 1 08:37:09.790512 containerd[1595]: time="2025-07-01T08:37:09.790482698Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 1 08:37:09.790512 containerd[1595]: time="2025-07-01T08:37:09.790505100Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 1 08:37:09.791009 containerd[1595]: time="2025-07-01T08:37:09.790979410Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 1 08:37:09.791009 containerd[1595]: time="2025-07-01T08:37:09.791003505Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 1 08:37:09.791063 containerd[1595]: time="2025-07-01T08:37:09.791018854Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 1 08:37:09.791063 containerd[1595]: time="2025-07-01T08:37:09.791030475Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 1 08:37:09.791186 containerd[1595]: time="2025-07-01T08:37:09.791153817Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 1 08:37:09.791518 containerd[1595]: time="2025-07-01T08:37:09.791487182Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 1 08:37:09.791556 containerd[1595]: time="2025-07-01T08:37:09.791539259Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 1 08:37:09.791578 containerd[1595]: time="2025-07-01T08:37:09.791554237Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 1 08:37:09.791629 containerd[1595]: time="2025-07-01T08:37:09.791598801Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups 
type=io.containerd.monitor.task.v1 Jul 1 08:37:09.794144 containerd[1595]: time="2025-07-01T08:37:09.794087587Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 1 08:37:09.794310 containerd[1595]: time="2025-07-01T08:37:09.794278886Z" level=info msg="metadata content store policy set" policy=shared Jul 1 08:37:09.805399 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 1 08:37:09.808994 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 1 08:37:09.813696 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 1 08:37:09.815207 systemd[1]: Reached target getty.target - Login Prompts. Jul 1 08:37:09.858089 containerd[1595]: time="2025-07-01T08:37:09.857980597Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 1 08:37:09.858089 containerd[1595]: time="2025-07-01T08:37:09.858088800Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 1 08:37:09.858089 containerd[1595]: time="2025-07-01T08:37:09.858108988Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 1 08:37:09.858518 containerd[1595]: time="2025-07-01T08:37:09.858475625Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 1 08:37:09.858518 containerd[1595]: time="2025-07-01T08:37:09.858517363Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 1 08:37:09.858595 containerd[1595]: time="2025-07-01T08:37:09.858532051Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 1 08:37:09.858595 containerd[1595]: time="2025-07-01T08:37:09.858551918Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 1 08:37:09.858595 containerd[1595]: 
time="2025-07-01T08:37:09.858567948Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 1 08:37:09.858595 containerd[1595]: time="2025-07-01T08:37:09.858584148Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 1 08:37:09.858595 containerd[1595]: time="2025-07-01T08:37:09.858597253Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 1 08:37:09.858786 containerd[1595]: time="2025-07-01T08:37:09.858610849Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 1 08:37:09.858786 containerd[1595]: time="2025-07-01T08:37:09.858628572Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 1 08:37:09.858975 containerd[1595]: time="2025-07-01T08:37:09.858926571Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 1 08:37:09.858975 containerd[1595]: time="2025-07-01T08:37:09.858966696Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 1 08:37:09.899466 containerd[1595]: time="2025-07-01T08:37:09.858990009Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 1 08:37:09.899466 containerd[1595]: time="2025-07-01T08:37:09.859007582Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 1 08:37:09.899466 containerd[1595]: time="2025-07-01T08:37:09.859022891Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 1 08:37:09.899466 containerd[1595]: time="2025-07-01T08:37:09.859037589Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 1 08:37:09.899466 containerd[1595]: time="2025-07-01T08:37:09.859054420Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 1 08:37:09.899466 containerd[1595]: time="2025-07-01T08:37:09.859070991Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 1 08:37:09.899466 containerd[1595]: time="2025-07-01T08:37:09.859087883Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 1 08:37:09.899466 containerd[1595]: time="2025-07-01T08:37:09.859105115Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 1 08:37:09.899466 containerd[1595]: time="2025-07-01T08:37:09.859122207Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 1 08:37:09.899466 containerd[1595]: time="2025-07-01T08:37:09.859259314Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 1 08:37:09.899466 containerd[1595]: time="2025-07-01T08:37:09.859286195Z" level=info msg="Start snapshots syncer" Jul 1 08:37:09.899466 containerd[1595]: time="2025-07-01T08:37:09.859355755Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 1 08:37:09.899844 containerd[1595]: time="2025-07-01T08:37:09.859721661Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 1 08:37:09.899844 containerd[1595]: time="2025-07-01T08:37:09.859804857Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 1 08:37:09.900107 containerd[1595]: time="2025-07-01T08:37:09.859896700Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 1 08:37:09.900107 containerd[1595]: time="2025-07-01T08:37:09.860115991Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 1 08:37:09.900107 containerd[1595]: time="2025-07-01T08:37:09.863067324Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 1 08:37:09.900107 containerd[1595]: time="2025-07-01T08:37:09.863119873Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 1 08:37:09.900107 containerd[1595]: time="2025-07-01T08:37:09.863158095Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 1 08:37:09.900107 containerd[1595]: time="2025-07-01T08:37:09.863182370Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 1 08:37:09.900107 containerd[1595]: time="2025-07-01T08:37:09.863203189Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 1 08:37:09.900107 containerd[1595]: time="2025-07-01T08:37:09.863221544Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 1 08:37:09.900107 containerd[1595]: time="2025-07-01T08:37:09.863257762Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 1 08:37:09.900107 containerd[1595]: time="2025-07-01T08:37:09.863286796Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 1 08:37:09.900107 containerd[1595]: time="2025-07-01T08:37:09.863307855Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 1 08:37:09.900107 containerd[1595]: time="2025-07-01T08:37:09.863365313Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 1 08:37:09.900107 containerd[1595]: time="2025-07-01T08:37:09.863385120Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 1 08:37:09.900107 containerd[1595]: time="2025-07-01T08:37:09.863399066Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 1 08:37:09.900521 containerd[1595]: time="2025-07-01T08:37:09.863412411Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 1 08:37:09.900521 containerd[1595]: time="2025-07-01T08:37:09.863422821Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 1 08:37:09.900521 containerd[1595]: time="2025-07-01T08:37:09.863443420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 1 08:37:09.900521 containerd[1595]: time="2025-07-01T08:37:09.863465782Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 1 08:37:09.900521 containerd[1595]: time="2025-07-01T08:37:09.863495788Z" level=info msg="runtime interface created" Jul 1 08:37:09.900521 containerd[1595]: time="2025-07-01T08:37:09.863503312Z" level=info msg="created NRI interface" Jul 1 08:37:09.900521 containerd[1595]: time="2025-07-01T08:37:09.863516316Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 1 08:37:09.900521 containerd[1595]: time="2025-07-01T08:37:09.863536925Z" level=info msg="Connect containerd service" Jul 1 08:37:09.900521 containerd[1595]: time="2025-07-01T08:37:09.863571279Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 1 08:37:09.900521 containerd[1595]: 
time="2025-07-01T08:37:09.865064439Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 1 08:37:09.962472 tar[1579]: linux-amd64/LICENSE Jul 1 08:37:09.962651 tar[1579]: linux-amd64/README.md Jul 1 08:37:09.989450 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 1 08:37:10.033884 systemd-networkd[1484]: eth0: Gained IPv6LL Jul 1 08:37:10.036957 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 1 08:37:10.039026 systemd[1]: Reached target network-online.target - Network is Online. Jul 1 08:37:10.041854 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 1 08:37:10.044623 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 08:37:10.050563 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 1 08:37:10.086919 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 1 08:37:10.089925 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 1 08:37:10.090454 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 1 08:37:10.095652 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 1 08:37:10.156992 containerd[1595]: time="2025-07-01T08:37:10.156603946Z" level=info msg="Start subscribing containerd event" Jul 1 08:37:10.156992 containerd[1595]: time="2025-07-01T08:37:10.156730112Z" level=info msg="Start recovering state" Jul 1 08:37:10.156992 containerd[1595]: time="2025-07-01T08:37:10.156774595Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 1 08:37:10.156992 containerd[1595]: time="2025-07-01T08:37:10.156845078Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 1 08:37:10.156992 containerd[1595]: time="2025-07-01T08:37:10.156934545Z" level=info msg="Start event monitor" Jul 1 08:37:10.156992 containerd[1595]: time="2025-07-01T08:37:10.156954703Z" level=info msg="Start cni network conf syncer for default" Jul 1 08:37:10.157333 containerd[1595]: time="2025-07-01T08:37:10.156967157Z" level=info msg="Start streaming server" Jul 1 08:37:10.157333 containerd[1595]: time="2025-07-01T08:37:10.157344704Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 1 08:37:10.157498 containerd[1595]: time="2025-07-01T08:37:10.157358240Z" level=info msg="runtime interface starting up..." Jul 1 08:37:10.157498 containerd[1595]: time="2025-07-01T08:37:10.157368309Z" level=info msg="starting plugins..." Jul 1 08:37:10.157498 containerd[1595]: time="2025-07-01T08:37:10.157393366Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 1 08:37:10.158782 systemd[1]: Started containerd.service - containerd container runtime. Jul 1 08:37:10.159841 containerd[1595]: time="2025-07-01T08:37:10.158974170Z" level=info msg="containerd successfully booted in 0.386667s" Jul 1 08:37:11.449554 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:37:11.452064 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 1 08:37:11.453812 systemd[1]: Startup finished in 3.856s (kernel) + 8.072s (initrd) + 5.514s (userspace) = 17.443s. Jul 1 08:37:11.464709 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 1 08:37:11.907360 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 1 08:37:11.908725 systemd[1]: Started sshd@0-10.0.0.80:22-10.0.0.1:37702.service - OpenSSH per-connection server daemon (10.0.0.1:37702). 
Jul 1 08:37:11.994462 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 37702 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:37:11.998469 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:37:12.007385 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 1 08:37:12.010203 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 1 08:37:12.021204 systemd-logind[1560]: New session 1 of user core. Jul 1 08:37:12.043994 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 1 08:37:12.048036 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 1 08:37:12.069513 (systemd)[1708]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 1 08:37:12.073485 systemd-logind[1560]: New session c1 of user core. Jul 1 08:37:12.138073 kubelet[1691]: E0701 08:37:12.137878 1691 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 1 08:37:12.142824 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 1 08:37:12.143094 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 1 08:37:12.143516 systemd[1]: kubelet.service: Consumed 1.853s CPU time, 263.6M memory peak. Jul 1 08:37:12.259787 systemd[1708]: Queued start job for default target default.target. Jul 1 08:37:12.280814 systemd[1708]: Created slice app.slice - User Application Slice. Jul 1 08:37:12.280867 systemd[1708]: Reached target paths.target - Paths. Jul 1 08:37:12.280925 systemd[1708]: Reached target timers.target - Timers. 
Jul 1 08:37:12.283068 systemd[1708]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 1 08:37:12.299212 systemd[1708]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 1 08:37:12.299365 systemd[1708]: Reached target sockets.target - Sockets. Jul 1 08:37:12.299420 systemd[1708]: Reached target basic.target - Basic System. Jul 1 08:37:12.299461 systemd[1708]: Reached target default.target - Main User Target. Jul 1 08:37:12.299496 systemd[1708]: Startup finished in 210ms. Jul 1 08:37:12.300022 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 1 08:37:12.302252 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 1 08:37:12.366631 systemd[1]: Started sshd@1-10.0.0.80:22-10.0.0.1:37710.service - OpenSSH per-connection server daemon (10.0.0.1:37710). Jul 1 08:37:12.449706 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 37710 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:37:12.451931 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:37:12.457578 systemd-logind[1560]: New session 2 of user core. Jul 1 08:37:12.473849 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 1 08:37:12.538186 sshd[1723]: Connection closed by 10.0.0.1 port 37710 Jul 1 08:37:12.538631 sshd-session[1720]: pam_unix(sshd:session): session closed for user core Jul 1 08:37:12.548097 systemd[1]: sshd@1-10.0.0.80:22-10.0.0.1:37710.service: Deactivated successfully. Jul 1 08:37:12.550631 systemd[1]: session-2.scope: Deactivated successfully. Jul 1 08:37:12.551590 systemd-logind[1560]: Session 2 logged out. Waiting for processes to exit. Jul 1 08:37:12.555058 systemd[1]: Started sshd@2-10.0.0.80:22-10.0.0.1:37716.service - OpenSSH per-connection server daemon (10.0.0.1:37716). Jul 1 08:37:12.556605 systemd-logind[1560]: Removed session 2. 
Jul 1 08:37:12.615136 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 37716 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:37:12.616800 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:37:12.621721 systemd-logind[1560]: New session 3 of user core. Jul 1 08:37:12.633011 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 1 08:37:12.684627 sshd[1732]: Connection closed by 10.0.0.1 port 37716 Jul 1 08:37:12.685016 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Jul 1 08:37:12.699300 systemd[1]: sshd@2-10.0.0.80:22-10.0.0.1:37716.service: Deactivated successfully. Jul 1 08:37:12.701380 systemd[1]: session-3.scope: Deactivated successfully. Jul 1 08:37:12.702469 systemd-logind[1560]: Session 3 logged out. Waiting for processes to exit. Jul 1 08:37:12.705271 systemd[1]: Started sshd@3-10.0.0.80:22-10.0.0.1:37720.service - OpenSSH per-connection server daemon (10.0.0.1:37720). Jul 1 08:37:12.706390 systemd-logind[1560]: Removed session 3. Jul 1 08:37:12.765207 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 37720 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:37:12.767580 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:37:12.783449 systemd-logind[1560]: New session 4 of user core. Jul 1 08:37:12.796085 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 1 08:37:12.855174 sshd[1741]: Connection closed by 10.0.0.1 port 37720 Jul 1 08:37:12.855776 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Jul 1 08:37:12.871130 systemd[1]: sshd@3-10.0.0.80:22-10.0.0.1:37720.service: Deactivated successfully. Jul 1 08:37:12.873345 systemd[1]: session-4.scope: Deactivated successfully. Jul 1 08:37:12.874353 systemd-logind[1560]: Session 4 logged out. Waiting for processes to exit. 
Jul 1 08:37:12.877245 systemd[1]: Started sshd@4-10.0.0.80:22-10.0.0.1:37722.service - OpenSSH per-connection server daemon (10.0.0.1:37722). Jul 1 08:37:12.878196 systemd-logind[1560]: Removed session 4. Jul 1 08:37:12.944365 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 37722 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:37:12.945563 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:37:12.951605 systemd-logind[1560]: New session 5 of user core. Jul 1 08:37:12.966070 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 1 08:37:13.025188 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 1 08:37:13.025511 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 1 08:37:13.046047 sudo[1751]: pam_unix(sudo:session): session closed for user root Jul 1 08:37:13.047832 sshd[1750]: Connection closed by 10.0.0.1 port 37722 Jul 1 08:37:13.048160 sshd-session[1747]: pam_unix(sshd:session): session closed for user core Jul 1 08:37:13.059386 systemd[1]: sshd@4-10.0.0.80:22-10.0.0.1:37722.service: Deactivated successfully. Jul 1 08:37:13.061139 systemd[1]: session-5.scope: Deactivated successfully. Jul 1 08:37:13.062006 systemd-logind[1560]: Session 5 logged out. Waiting for processes to exit. Jul 1 08:37:13.064639 systemd[1]: Started sshd@5-10.0.0.80:22-10.0.0.1:37724.service - OpenSSH per-connection server daemon (10.0.0.1:37724). Jul 1 08:37:13.065424 systemd-logind[1560]: Removed session 5. Jul 1 08:37:13.128491 sshd[1757]: Accepted publickey for core from 10.0.0.1 port 37724 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:37:13.134657 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:37:13.140131 systemd-logind[1560]: New session 6 of user core. 
Jul 1 08:37:13.149874 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 1 08:37:13.204350 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 1 08:37:13.204726 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 1 08:37:13.288632 sudo[1762]: pam_unix(sudo:session): session closed for user root Jul 1 08:37:13.296219 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 1 08:37:13.296564 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 1 08:37:13.308598 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 1 08:37:13.370067 augenrules[1784]: No rules Jul 1 08:37:13.372265 systemd[1]: audit-rules.service: Deactivated successfully. Jul 1 08:37:13.372589 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 1 08:37:13.373961 sudo[1761]: pam_unix(sudo:session): session closed for user root Jul 1 08:37:13.375665 sshd[1760]: Connection closed by 10.0.0.1 port 37724 Jul 1 08:37:13.376102 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Jul 1 08:37:13.394852 systemd[1]: sshd@5-10.0.0.80:22-10.0.0.1:37724.service: Deactivated successfully. Jul 1 08:37:13.396881 systemd[1]: session-6.scope: Deactivated successfully. Jul 1 08:37:13.397638 systemd-logind[1560]: Session 6 logged out. Waiting for processes to exit. Jul 1 08:37:13.400666 systemd[1]: Started sshd@6-10.0.0.80:22-10.0.0.1:37730.service - OpenSSH per-connection server daemon (10.0.0.1:37730). Jul 1 08:37:13.401445 systemd-logind[1560]: Removed session 6. 
Jul 1 08:37:13.476069 sshd[1794]: Accepted publickey for core from 10.0.0.1 port 37730 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:37:13.477816 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:37:13.482922 systemd-logind[1560]: New session 7 of user core. Jul 1 08:37:13.496855 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 1 08:37:13.552468 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 1 08:37:13.552910 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 1 08:37:14.294796 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 1 08:37:14.318113 (dockerd)[1818]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 1 08:37:14.910759 dockerd[1818]: time="2025-07-01T08:37:14.910637627Z" level=info msg="Starting up" Jul 1 08:37:14.911843 dockerd[1818]: time="2025-07-01T08:37:14.911807531Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 1 08:37:15.480516 dockerd[1818]: time="2025-07-01T08:37:15.480405532Z" level=info msg="Loading containers: start." Jul 1 08:37:15.493726 kernel: Initializing XFRM netlink socket Jul 1 08:37:15.934447 systemd-networkd[1484]: docker0: Link UP Jul 1 08:37:16.049654 dockerd[1818]: time="2025-07-01T08:37:16.049572310Z" level=info msg="Loading containers: done." 
Jul 1 08:37:16.158691 dockerd[1818]: time="2025-07-01T08:37:16.158571914Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 1 08:37:16.158905 dockerd[1818]: time="2025-07-01T08:37:16.158775796Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 1 08:37:16.158947 dockerd[1818]: time="2025-07-01T08:37:16.158921890Z" level=info msg="Initializing buildkit" Jul 1 08:37:16.455704 dockerd[1818]: time="2025-07-01T08:37:16.455616492Z" level=info msg="Completed buildkit initialization" Jul 1 08:37:16.460004 dockerd[1818]: time="2025-07-01T08:37:16.459967871Z" level=info msg="Daemon has completed initialization" Jul 1 08:37:16.460078 dockerd[1818]: time="2025-07-01T08:37:16.460034626Z" level=info msg="API listen on /run/docker.sock" Jul 1 08:37:16.460209 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 1 08:37:17.614563 containerd[1595]: time="2025-07-01T08:37:17.614473088Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 1 08:37:19.917142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2132731090.mount: Deactivated successfully. 
Jul 1 08:37:21.261355 containerd[1595]: time="2025-07-01T08:37:21.261261225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:37:21.359314 containerd[1595]: time="2025-07-01T08:37:21.359218856Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jul 1 08:37:21.402858 containerd[1595]: time="2025-07-01T08:37:21.402768942Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:37:21.407735 containerd[1595]: time="2025-07-01T08:37:21.407691892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:37:21.409193 containerd[1595]: time="2025-07-01T08:37:21.409126322Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 3.794584084s" Jul 1 08:37:21.409193 containerd[1595]: time="2025-07-01T08:37:21.409173861Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 1 08:37:21.410059 containerd[1595]: time="2025-07-01T08:37:21.409981826Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 1 08:37:22.393523 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jul 1 08:37:22.395429 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 08:37:22.658116 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:37:22.675084 (kubelet)[2090]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 1 08:37:23.212063 kubelet[2090]: E0701 08:37:23.211929 2090 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 1 08:37:23.221079 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 1 08:37:23.221308 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 1 08:37:23.221780 systemd[1]: kubelet.service: Consumed 281ms CPU time, 109.9M memory peak. 
Jul 1 08:37:23.662309 containerd[1595]: time="2025-07-01T08:37:23.662125585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:37:23.663107 containerd[1595]: time="2025-07-01T08:37:23.663065247Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jul 1 08:37:23.664197 containerd[1595]: time="2025-07-01T08:37:23.664167353Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:37:23.667383 containerd[1595]: time="2025-07-01T08:37:23.667313743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:37:23.668388 containerd[1595]: time="2025-07-01T08:37:23.668350126Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 2.258337032s" Jul 1 08:37:23.668388 containerd[1595]: time="2025-07-01T08:37:23.668383749Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 1 08:37:23.669065 containerd[1595]: time="2025-07-01T08:37:23.669015884Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 1 08:37:25.064695 containerd[1595]: time="2025-07-01T08:37:25.064593342Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:37:25.066171 containerd[1595]: time="2025-07-01T08:37:25.066138349Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jul 1 08:37:25.069957 containerd[1595]: time="2025-07-01T08:37:25.069898739Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:37:25.072548 containerd[1595]: time="2025-07-01T08:37:25.072509484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:37:25.073656 containerd[1595]: time="2025-07-01T08:37:25.073610309Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.404513352s" Jul 1 08:37:25.073656 containerd[1595]: time="2025-07-01T08:37:25.073647458Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 1 08:37:25.074248 containerd[1595]: time="2025-07-01T08:37:25.074214071Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 1 08:37:27.539776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2074128990.mount: Deactivated successfully. 
Jul 1 08:37:28.185095 containerd[1595]: time="2025-07-01T08:37:28.185010152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:37:28.185863 containerd[1595]: time="2025-07-01T08:37:28.185809982Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 1 08:37:28.187159 containerd[1595]: time="2025-07-01T08:37:28.187113105Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:37:28.189093 containerd[1595]: time="2025-07-01T08:37:28.189043805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:37:28.190044 containerd[1595]: time="2025-07-01T08:37:28.189986283Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 3.115735614s" Jul 1 08:37:28.190104 containerd[1595]: time="2025-07-01T08:37:28.190042728Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 1 08:37:28.190990 containerd[1595]: time="2025-07-01T08:37:28.190944329Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 1 08:37:29.669519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1990655154.mount: Deactivated successfully. 
Jul 1 08:37:31.306361 containerd[1595]: time="2025-07-01T08:37:31.306262710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:37:31.308402 containerd[1595]: time="2025-07-01T08:37:31.308339254Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 1 08:37:31.314015 containerd[1595]: time="2025-07-01T08:37:31.313971664Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:37:31.317690 containerd[1595]: time="2025-07-01T08:37:31.317615557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:37:31.318911 containerd[1595]: time="2025-07-01T08:37:31.318859048Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.127874935s" Jul 1 08:37:31.318911 containerd[1595]: time="2025-07-01T08:37:31.318892431Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 1 08:37:31.319824 containerd[1595]: time="2025-07-01T08:37:31.319773062Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 1 08:37:31.944314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2640024970.mount: Deactivated successfully. 
Jul 1 08:37:31.952705 containerd[1595]: time="2025-07-01T08:37:31.952628416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 1 08:37:31.954908 containerd[1595]: time="2025-07-01T08:37:31.954824403Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 1 08:37:31.955973 containerd[1595]: time="2025-07-01T08:37:31.955925408Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 1 08:37:31.958286 containerd[1595]: time="2025-07-01T08:37:31.958234467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 1 08:37:31.958931 containerd[1595]: time="2025-07-01T08:37:31.958888904Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 639.082069ms" Jul 1 08:37:31.958931 containerd[1595]: time="2025-07-01T08:37:31.958925252Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 1 08:37:31.959554 containerd[1595]: time="2025-07-01T08:37:31.959516220Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 1 08:37:32.513869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3108922524.mount: Deactivated 
successfully. Jul 1 08:37:33.471959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 1 08:37:33.473972 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 08:37:33.743954 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:37:33.758027 (kubelet)[2191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 1 08:37:34.108370 kubelet[2191]: E0701 08:37:34.108142 2191 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 1 08:37:34.112436 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 1 08:37:34.112699 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 1 08:37:34.113150 systemd[1]: kubelet.service: Consumed 266ms CPU time, 108.9M memory peak. 
Jul 1 08:37:36.864547 containerd[1595]: time="2025-07-01T08:37:36.864469807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:37:36.865593 containerd[1595]: time="2025-07-01T08:37:36.865548409Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 1 08:37:36.867032 containerd[1595]: time="2025-07-01T08:37:36.866982118Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:37:36.869868 containerd[1595]: time="2025-07-01T08:37:36.869808607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:37:36.870832 containerd[1595]: time="2025-07-01T08:37:36.870796720Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.911253409s" Jul 1 08:37:36.870832 containerd[1595]: time="2025-07-01T08:37:36.870830543Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 1 08:37:39.611616 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:37:39.611809 systemd[1]: kubelet.service: Consumed 266ms CPU time, 108.9M memory peak. Jul 1 08:37:39.614099 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 08:37:39.641191 systemd[1]: Reload requested from client PID 2272 ('systemctl') (unit session-7.scope)... 
Jul 1 08:37:39.641212 systemd[1]: Reloading... Jul 1 08:37:39.763927 zram_generator::config[2315]: No configuration found. Jul 1 08:37:40.417421 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 1 08:37:40.540886 systemd[1]: Reloading finished in 899 ms. Jul 1 08:37:40.620817 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 1 08:37:40.620934 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 1 08:37:40.621299 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:37:40.621451 systemd[1]: kubelet.service: Consumed 167ms CPU time, 98.3M memory peak. Jul 1 08:37:40.623448 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 1 08:37:40.991839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 1 08:37:41.006021 (kubelet)[2363]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 1 08:37:41.040695 kubelet[2363]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 1 08:37:41.040695 kubelet[2363]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 1 08:37:41.040695 kubelet[2363]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 1 08:37:41.041145 kubelet[2363]: I0701 08:37:41.040766 2363 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 1 08:37:41.738126 kubelet[2363]: I0701 08:37:41.738060 2363 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 1 08:37:41.738126 kubelet[2363]: I0701 08:37:41.738100 2363 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 1 08:37:41.738452 kubelet[2363]: I0701 08:37:41.738421 2363 server.go:934] "Client rotation is on, will bootstrap in background" Jul 1 08:37:41.834279 kubelet[2363]: E0701 08:37:41.834211 2363 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.80:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Jul 1 08:37:41.837320 kubelet[2363]: I0701 08:37:41.837261 2363 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 1 08:37:41.844771 kubelet[2363]: I0701 08:37:41.844738 2363 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 1 08:37:41.854429 kubelet[2363]: I0701 08:37:41.854376 2363 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 1 08:37:41.854610 kubelet[2363]: I0701 08:37:41.854574 2363 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 1 08:37:41.854976 kubelet[2363]: I0701 08:37:41.854800 2363 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 1 08:37:41.855153 kubelet[2363]: I0701 08:37:41.854838 2363 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} Jul 1 08:37:41.855381 kubelet[2363]: I0701 08:37:41.855161 2363 topology_manager.go:138] "Creating topology manager with none policy" Jul 1 08:37:41.855381 kubelet[2363]: I0701 08:37:41.855175 2363 container_manager_linux.go:300] "Creating device plugin manager" Jul 1 08:37:41.855381 kubelet[2363]: I0701 08:37:41.855352 2363 state_mem.go:36] "Initialized new in-memory state store" Jul 1 08:37:41.857866 kubelet[2363]: I0701 08:37:41.857828 2363 kubelet.go:408] "Attempting to sync node with API server" Jul 1 08:37:41.857866 kubelet[2363]: I0701 08:37:41.857853 2363 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 1 08:37:41.857958 kubelet[2363]: I0701 08:37:41.857904 2363 kubelet.go:314] "Adding apiserver pod source" Jul 1 08:37:41.857958 kubelet[2363]: I0701 08:37:41.857952 2363 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 1 08:37:41.861992 kubelet[2363]: I0701 08:37:41.861954 2363 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 1 08:37:41.863975 kubelet[2363]: I0701 08:37:41.863944 2363 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 1 08:37:41.864143 kubelet[2363]: W0701 08:37:41.864096 2363 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 1 08:37:42.895329 kubelet[2363]: W0701 08:37:42.895231 2363 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Jul 1 08:37:42.895329 kubelet[2363]: W0701 08:37:42.895287 2363 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Jul 1 08:37:42.895329 kubelet[2363]: E0701 08:37:42.895323 2363 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Jul 1 08:37:42.895329 kubelet[2363]: E0701 08:37:42.895338 2363 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Jul 1 08:37:42.896397 kubelet[2363]: I0701 08:37:42.896188 2363 server.go:1274] "Started kubelet" Jul 1 08:37:42.896751 kubelet[2363]: I0701 08:37:42.896713 2363 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 1 08:37:42.897221 kubelet[2363]: I0701 08:37:42.897169 2363 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 1 08:37:42.897964 kubelet[2363]: I0701 08:37:42.897737 2363 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 1 
08:37:42.897964 kubelet[2363]: I0701 08:37:42.897808 2363 server.go:449] "Adding debug handlers to kubelet server" Jul 1 08:37:42.900176 kubelet[2363]: I0701 08:37:42.899855 2363 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 1 08:37:42.900176 kubelet[2363]: I0701 08:37:42.900068 2363 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 1 08:37:42.901612 kubelet[2363]: I0701 08:37:42.901591 2363 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 1 08:37:42.901842 kubelet[2363]: I0701 08:37:42.901822 2363 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 1 08:37:42.902042 kubelet[2363]: I0701 08:37:42.901985 2363 reconciler.go:26] "Reconciler: start to sync state" Jul 1 08:37:42.902391 kubelet[2363]: W0701 08:37:42.902345 2363 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Jul 1 08:37:42.902454 kubelet[2363]: E0701 08:37:42.902395 2363 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Jul 1 08:37:42.903497 kubelet[2363]: I0701 08:37:42.902541 2363 factory.go:221] Registration of the systemd container factory successfully Jul 1 08:37:42.903497 kubelet[2363]: I0701 08:37:42.902632 2363 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 1 08:37:42.903497 kubelet[2363]: E0701 08:37:42.903163 2363 kubelet.go:1478] "Image 
garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 1 08:37:42.903615 kubelet[2363]: E0701 08:37:42.903524 2363 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 1 08:37:42.903615 kubelet[2363]: E0701 08:37:42.903591 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="200ms" Jul 1 08:37:42.903809 kubelet[2363]: I0701 08:37:42.903662 2363 factory.go:221] Registration of the containerd container factory successfully Jul 1 08:37:42.904891 kubelet[2363]: E0701 08:37:42.903369 2363 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.80:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.80:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184e13c9e215d7a5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-01 08:37:42.896138149 +0000 UTC m=+1.886195813,LastTimestamp:2025-07-01 08:37:42.896138149 +0000 UTC m=+1.886195813,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 1 08:37:42.918976 kubelet[2363]: I0701 08:37:42.918922 2363 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 1 08:37:42.920610 kubelet[2363]: I0701 08:37:42.920583 2363 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 1 08:37:42.920668 kubelet[2363]: I0701 08:37:42.920620 2363 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 1 08:37:42.920668 kubelet[2363]: I0701 08:37:42.920644 2363 kubelet.go:2321] "Starting kubelet main sync loop" Jul 1 08:37:42.920780 kubelet[2363]: E0701 08:37:42.920707 2363 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 1 08:37:42.921246 kubelet[2363]: I0701 08:37:42.921218 2363 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 1 08:37:42.921246 kubelet[2363]: I0701 08:37:42.921240 2363 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 1 08:37:42.921363 kubelet[2363]: I0701 08:37:42.921260 2363 state_mem.go:36] "Initialized new in-memory state store" Jul 1 08:37:42.921482 kubelet[2363]: W0701 08:37:42.921426 2363 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.80:6443: connect: connection refused Jul 1 08:37:42.921534 kubelet[2363]: E0701 08:37:42.921492 2363 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.80:6443: connect: connection refused" logger="UnhandledError" Jul 1 08:37:42.927080 kubelet[2363]: I0701 08:37:42.927050 2363 policy_none.go:49] "None policy: Start" Jul 1 08:37:42.927995 kubelet[2363]: I0701 08:37:42.927971 2363 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 1 08:37:42.927995 kubelet[2363]: I0701 08:37:42.927995 2363 state_mem.go:35] "Initializing new in-memory state store" Jul 1 08:37:42.936823 systemd[1]: Created slice kubepods.slice - libcontainer container 
kubepods.slice. Jul 1 08:37:42.950896 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 1 08:37:42.954387 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 1 08:37:42.974750 kubelet[2363]: I0701 08:37:42.974716 2363 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 1 08:37:42.975048 kubelet[2363]: I0701 08:37:42.974967 2363 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 1 08:37:42.975048 kubelet[2363]: I0701 08:37:42.974982 2363 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 1 08:37:42.975347 kubelet[2363]: I0701 08:37:42.975309 2363 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 1 08:37:42.976804 kubelet[2363]: E0701 08:37:42.976772 2363 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 1 08:37:43.030979 systemd[1]: Created slice kubepods-burstable-pod0b05222695ddc5771c7ff7f344114108.slice - libcontainer container kubepods-burstable-pod0b05222695ddc5771c7ff7f344114108.slice. Jul 1 08:37:43.053505 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. Jul 1 08:37:43.058161 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. 
Jul 1 08:37:43.077233 kubelet[2363]: I0701 08:37:43.077176 2363 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 1 08:37:43.077634 kubelet[2363]: E0701 08:37:43.077604 2363 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost" Jul 1 08:37:43.103849 kubelet[2363]: I0701 08:37:43.103814 2363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b05222695ddc5771c7ff7f344114108-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0b05222695ddc5771c7ff7f344114108\") " pod="kube-system/kube-apiserver-localhost" Jul 1 08:37:43.104168 kubelet[2363]: E0701 08:37:43.104114 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="400ms" Jul 1 08:37:43.204888 kubelet[2363]: I0701 08:37:43.204706 2363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 1 08:37:43.204888 kubelet[2363]: I0701 08:37:43.204774 2363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 1 08:37:43.204888 kubelet[2363]: I0701 08:37:43.204805 2363 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 1 08:37:43.204888 kubelet[2363]: I0701 08:37:43.204824 2363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b05222695ddc5771c7ff7f344114108-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0b05222695ddc5771c7ff7f344114108\") " pod="kube-system/kube-apiserver-localhost"
Jul 1 08:37:43.204888 kubelet[2363]: I0701 08:37:43.204839 2363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b05222695ddc5771c7ff7f344114108-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0b05222695ddc5771c7ff7f344114108\") " pod="kube-system/kube-apiserver-localhost"
Jul 1 08:37:43.205200 kubelet[2363]: I0701 08:37:43.204856 2363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 1 08:37:43.205200 kubelet[2363]: I0701 08:37:43.204870 2363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 1 08:37:43.205200 kubelet[2363]: I0701 08:37:43.204887 2363 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost"
Jul 1 08:37:43.279395 kubelet[2363]: I0701 08:37:43.279325 2363 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 1 08:37:43.279880 kubelet[2363]: E0701 08:37:43.279835 2363 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.80:6443/api/v1/nodes\": dial tcp 10.0.0.80:6443: connect: connection refused" node="localhost"
Jul 1 08:37:43.351976 containerd[1595]: time="2025-07-01T08:37:43.351908798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0b05222695ddc5771c7ff7f344114108,Namespace:kube-system,Attempt:0,}"
Jul 1 08:37:43.356893 containerd[1595]: time="2025-07-01T08:37:43.356829218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}"
Jul 1 08:37:43.361606 containerd[1595]: time="2025-07-01T08:37:43.361551239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}"
Jul 1 08:37:43.391664 containerd[1595]: time="2025-07-01T08:37:43.391293639Z" level=info msg="connecting to shim adddca1bb45e8e25ef885aa95f34e1b84861bbb9de56d6bf596c4c9bfa3ec568" address="unix:///run/containerd/s/bcf3f2dd1611da365b74c1def4b03410483d6155bc596f58e4c20c7ea6d4653b" namespace=k8s.io protocol=ttrpc version=3
Jul 1 08:37:43.397523 containerd[1595]: time="2025-07-01T08:37:43.397454012Z" level=info msg="connecting to shim d013d8a1225b42c9123b41dab0666756450815b5d10bba43643449c27ba709d2" address="unix:///run/containerd/s/145f11b318a55047b729a14b0e07115c7cb1709ebf0bc61540823be967b5500f" namespace=k8s.io protocol=ttrpc version=3
Jul 1 08:37:43.416299 containerd[1595]: time="2025-07-01T08:37:43.415847603Z" level=info msg="connecting to shim 39f42a87da7f8d60c6c5c600ba0d9873f4cb75c5a22efac1f4d9740437fcd511" address="unix:///run/containerd/s/07cec0fc5e9f9b9218a1655c8fed5043cb0c7cba62585b54746d1aa0df279942" namespace=k8s.io protocol=ttrpc version=3
Jul 1 08:37:43.429914 systemd[1]: Started cri-containerd-d013d8a1225b42c9123b41dab0666756450815b5d10bba43643449c27ba709d2.scope - libcontainer container d013d8a1225b42c9123b41dab0666756450815b5d10bba43643449c27ba709d2.
Jul 1 08:37:43.434102 systemd[1]: Started cri-containerd-adddca1bb45e8e25ef885aa95f34e1b84861bbb9de56d6bf596c4c9bfa3ec568.scope - libcontainer container adddca1bb45e8e25ef885aa95f34e1b84861bbb9de56d6bf596c4c9bfa3ec568.
Jul 1 08:37:43.439841 systemd[1]: Started cri-containerd-39f42a87da7f8d60c6c5c600ba0d9873f4cb75c5a22efac1f4d9740437fcd511.scope - libcontainer container 39f42a87da7f8d60c6c5c600ba0d9873f4cb75c5a22efac1f4d9740437fcd511.
Jul 1 08:37:43.484193 containerd[1595]: time="2025-07-01T08:37:43.484037596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"d013d8a1225b42c9123b41dab0666756450815b5d10bba43643449c27ba709d2\""
Jul 1 08:37:43.489156 containerd[1595]: time="2025-07-01T08:37:43.489103907Z" level=info msg="CreateContainer within sandbox \"d013d8a1225b42c9123b41dab0666756450815b5d10bba43643449c27ba709d2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 1 08:37:43.490965 containerd[1595]: time="2025-07-01T08:37:43.490935425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0b05222695ddc5771c7ff7f344114108,Namespace:kube-system,Attempt:0,} returns sandbox id \"adddca1bb45e8e25ef885aa95f34e1b84861bbb9de56d6bf596c4c9bfa3ec568\""
Jul 1 08:37:43.494427 containerd[1595]: time="2025-07-01T08:37:43.494393507Z" level=info msg="CreateContainer within sandbox \"adddca1bb45e8e25ef885aa95f34e1b84861bbb9de56d6bf596c4c9bfa3ec568\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 1 08:37:43.499527 containerd[1595]: time="2025-07-01T08:37:43.499481381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"39f42a87da7f8d60c6c5c600ba0d9873f4cb75c5a22efac1f4d9740437fcd511\""
Jul 1 08:37:43.501406 containerd[1595]: time="2025-07-01T08:37:43.501294964Z" level=info msg="CreateContainer within sandbox \"39f42a87da7f8d60c6c5c600ba0d9873f4cb75c5a22efac1f4d9740437fcd511\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 1 08:37:43.503592 containerd[1595]: time="2025-07-01T08:37:43.503565375Z" level=info msg="Container d24be20af28c5f7c8572f28e2e50d8dcb42bee2c848a99d3a787d1930e32fb75: CDI devices from CRI Config.CDIDevices: []"
Jul 1 08:37:43.505525 kubelet[2363]: E0701 08:37:43.505452 2363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.80:6443: connect: connection refused" interval="800ms"
Jul 1 08:37:43.512590 containerd[1595]: time="2025-07-01T08:37:43.512373645Z" level=info msg="Container 9a7670b2183a71ebdb681407d192c7a12da6cfeba56fe4b1bb556fdfd43be7e9: CDI devices from CRI Config.CDIDevices: []"
Jul 1 08:37:43.516167 containerd[1595]: time="2025-07-01T08:37:43.516140891Z" level=info msg="CreateContainer within sandbox \"d013d8a1225b42c9123b41dab0666756450815b5d10bba43643449c27ba709d2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d24be20af28c5f7c8572f28e2e50d8dcb42bee2c848a99d3a787d1930e32fb75\""
Jul 1 08:37:43.516354 containerd[1595]: time="2025-07-01T08:37:43.516200726Z" level=info msg="Container 139328f8dddaa2e603850919b4d2d1bfb39627a432936732a578b794fe867443: CDI devices from CRI Config.CDIDevices: []"
Jul 1 08:37:43.516984 containerd[1595]: time="2025-07-01T08:37:43.516936790Z" level=info msg="StartContainer for \"d24be20af28c5f7c8572f28e2e50d8dcb42bee2c848a99d3a787d1930e32fb75\""
Jul 1 08:37:43.518331 containerd[1595]: time="2025-07-01T08:37:43.518288917Z" level=info msg="connecting to shim d24be20af28c5f7c8572f28e2e50d8dcb42bee2c848a99d3a787d1930e32fb75" address="unix:///run/containerd/s/145f11b318a55047b729a14b0e07115c7cb1709ebf0bc61540823be967b5500f" protocol=ttrpc version=3
Jul 1 08:37:43.523389 containerd[1595]: time="2025-07-01T08:37:43.523299782Z" level=info msg="CreateContainer within sandbox \"adddca1bb45e8e25ef885aa95f34e1b84861bbb9de56d6bf596c4c9bfa3ec568\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9a7670b2183a71ebdb681407d192c7a12da6cfeba56fe4b1bb556fdfd43be7e9\""
Jul 1 08:37:43.525069 containerd[1595]: time="2025-07-01T08:37:43.525007893Z" level=info msg="StartContainer for \"9a7670b2183a71ebdb681407d192c7a12da6cfeba56fe4b1bb556fdfd43be7e9\""
Jul 1 08:37:43.526439 containerd[1595]: time="2025-07-01T08:37:43.526382202Z" level=info msg="connecting to shim 9a7670b2183a71ebdb681407d192c7a12da6cfeba56fe4b1bb556fdfd43be7e9" address="unix:///run/containerd/s/bcf3f2dd1611da365b74c1def4b03410483d6155bc596f58e4c20c7ea6d4653b" protocol=ttrpc version=3
Jul 1 08:37:43.529443 containerd[1595]: time="2025-07-01T08:37:43.529053344Z" level=info msg="CreateContainer within sandbox \"39f42a87da7f8d60c6c5c600ba0d9873f4cb75c5a22efac1f4d9740437fcd511\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"139328f8dddaa2e603850919b4d2d1bfb39627a432936732a578b794fe867443\""
Jul 1 08:37:43.529786 containerd[1595]: time="2025-07-01T08:37:43.529765592Z" level=info msg="StartContainer for \"139328f8dddaa2e603850919b4d2d1bfb39627a432936732a578b794fe867443\""
Jul 1 08:37:43.531140 containerd[1595]: time="2025-07-01T08:37:43.531117639Z" level=info msg="connecting to shim 139328f8dddaa2e603850919b4d2d1bfb39627a432936732a578b794fe867443" address="unix:///run/containerd/s/07cec0fc5e9f9b9218a1655c8fed5043cb0c7cba62585b54746d1aa0df279942" protocol=ttrpc version=3
Jul 1 08:37:43.542868 systemd[1]: Started cri-containerd-d24be20af28c5f7c8572f28e2e50d8dcb42bee2c848a99d3a787d1930e32fb75.scope - libcontainer container d24be20af28c5f7c8572f28e2e50d8dcb42bee2c848a99d3a787d1930e32fb75.
Jul 1 08:37:43.546933 systemd[1]: Started cri-containerd-9a7670b2183a71ebdb681407d192c7a12da6cfeba56fe4b1bb556fdfd43be7e9.scope - libcontainer container 9a7670b2183a71ebdb681407d192c7a12da6cfeba56fe4b1bb556fdfd43be7e9.
Jul 1 08:37:43.561864 systemd[1]: Started cri-containerd-139328f8dddaa2e603850919b4d2d1bfb39627a432936732a578b794fe867443.scope - libcontainer container 139328f8dddaa2e603850919b4d2d1bfb39627a432936732a578b794fe867443.
Jul 1 08:37:43.637371 containerd[1595]: time="2025-07-01T08:37:43.637265702Z" level=info msg="StartContainer for \"139328f8dddaa2e603850919b4d2d1bfb39627a432936732a578b794fe867443\" returns successfully"
Jul 1 08:37:43.638883 containerd[1595]: time="2025-07-01T08:37:43.638686873Z" level=info msg="StartContainer for \"d24be20af28c5f7c8572f28e2e50d8dcb42bee2c848a99d3a787d1930e32fb75\" returns successfully"
Jul 1 08:37:43.640073 containerd[1595]: time="2025-07-01T08:37:43.640013281Z" level=info msg="StartContainer for \"9a7670b2183a71ebdb681407d192c7a12da6cfeba56fe4b1bb556fdfd43be7e9\" returns successfully"
Jul 1 08:37:43.682869 kubelet[2363]: I0701 08:37:43.682819 2363 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 1 08:37:45.574137 kubelet[2363]: E0701 08:37:45.572119 2363 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jul 1 08:37:45.787702 kubelet[2363]: I0701 08:37:45.786820 2363 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jul 1 08:37:45.896804 kubelet[2363]: I0701 08:37:45.896636 2363 apiserver.go:52] "Watching apiserver"
Jul 1 08:37:45.903058 kubelet[2363]: I0701 08:37:45.903020 2363 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 1 08:37:47.940665 systemd[1]: Reload requested from client PID 2636 ('systemctl') (unit session-7.scope)...
Jul 1 08:37:47.940697 systemd[1]: Reloading...
Jul 1 08:37:48.027720 zram_generator::config[2679]: No configuration found.
Jul 1 08:37:48.174135 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 1 08:37:48.327302 systemd[1]: Reloading finished in 386 ms.
Jul 1 08:37:48.360970 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 1 08:37:48.380471 systemd[1]: kubelet.service: Deactivated successfully.
Jul 1 08:37:48.380859 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 1 08:37:48.380920 systemd[1]: kubelet.service: Consumed 1.308s CPU time, 133.8M memory peak.
Jul 1 08:37:48.383018 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 1 08:37:48.628146 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 1 08:37:48.639346 (kubelet)[2724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 1 08:37:48.701416 kubelet[2724]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 1 08:37:48.701416 kubelet[2724]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 1 08:37:48.701416 kubelet[2724]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 1 08:37:48.701897 kubelet[2724]: I0701 08:37:48.701474 2724 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 1 08:37:48.707411 kubelet[2724]: I0701 08:37:48.707379 2724 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 1 08:37:48.707411 kubelet[2724]: I0701 08:37:48.707396 2724 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 1 08:37:48.707633 kubelet[2724]: I0701 08:37:48.707583 2724 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 1 08:37:48.708812 kubelet[2724]: I0701 08:37:48.708788 2724 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 1 08:37:48.710499 kubelet[2724]: I0701 08:37:48.710451 2724 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 1 08:37:48.715388 kubelet[2724]: I0701 08:37:48.715353 2724 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 1 08:37:48.720002 kubelet[2724]: I0701 08:37:48.719967 2724 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 1 08:37:48.720152 kubelet[2724]: I0701 08:37:48.720125 2724 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 1 08:37:48.720309 kubelet[2724]: I0701 08:37:48.720260 2724 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 1 08:37:48.720494 kubelet[2724]: I0701 08:37:48.720291 2724 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 1 08:37:48.720579 kubelet[2724]: I0701 08:37:48.720503 2724 topology_manager.go:138] "Creating topology manager with none policy"
Jul 1 08:37:48.720579 kubelet[2724]: I0701 08:37:48.720512 2724 container_manager_linux.go:300] "Creating device plugin manager"
Jul 1 08:37:48.720579 kubelet[2724]: I0701 08:37:48.720543 2724 state_mem.go:36] "Initialized new in-memory state store"
Jul 1 08:37:48.720711 kubelet[2724]: I0701 08:37:48.720689 2724 kubelet.go:408] "Attempting to sync node with API server"
Jul 1 08:37:48.720711 kubelet[2724]: I0701 08:37:48.720708 2724 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 1 08:37:48.720770 kubelet[2724]: I0701 08:37:48.720745 2724 kubelet.go:314] "Adding apiserver pod source"
Jul 1 08:37:48.720770 kubelet[2724]: I0701 08:37:48.720760 2724 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 1 08:37:48.722693 kubelet[2724]: I0701 08:37:48.721187 2724 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Jul 1 08:37:48.722693 kubelet[2724]: I0701 08:37:48.721553 2724 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 1 08:37:48.722693 kubelet[2724]: I0701 08:37:48.722175 2724 server.go:1274] "Started kubelet"
Jul 1 08:37:48.722805 kubelet[2724]: I0701 08:37:48.722732 2724 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 1 08:37:48.722997 kubelet[2724]: I0701 08:37:48.722967 2724 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 1 08:37:48.723413 kubelet[2724]: I0701 08:37:48.723379 2724 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 1 08:37:48.723615 kubelet[2724]: I0701 08:37:48.723588 2724 server.go:449] "Adding debug handlers to kubelet server"
Jul 1 08:37:48.726651 kubelet[2724]: I0701 08:37:48.726600 2724 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 1 08:37:48.727474 kubelet[2724]: I0701 08:37:48.727442 2724 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 1 08:37:48.731496 kubelet[2724]: I0701 08:37:48.731454 2724 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 1 08:37:48.731854 kubelet[2724]: E0701 08:37:48.731823 2724 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 1 08:37:48.732200 kubelet[2724]: I0701 08:37:48.732179 2724 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 1 08:37:48.732553 kubelet[2724]: I0701 08:37:48.732514 2724 reconciler.go:26] "Reconciler: start to sync state"
Jul 1 08:37:48.736003 kubelet[2724]: I0701 08:37:48.735964 2724 factory.go:221] Registration of the systemd container factory successfully
Jul 1 08:37:48.736172 kubelet[2724]: I0701 08:37:48.736057 2724 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 1 08:37:48.739257 kubelet[2724]: I0701 08:37:48.739218 2724 factory.go:221] Registration of the containerd container factory successfully
Jul 1 08:37:48.744237 kubelet[2724]: E0701 08:37:48.743777 2724 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 1 08:37:48.744365 kubelet[2724]: I0701 08:37:48.744339 2724 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 1 08:37:48.745667 kubelet[2724]: I0701 08:37:48.745644 2724 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 1 08:37:48.745667 kubelet[2724]: I0701 08:37:48.745665 2724 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 1 08:37:48.745667 kubelet[2724]: I0701 08:37:48.745696 2724 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 1 08:37:48.745820 kubelet[2724]: E0701 08:37:48.745736 2724 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 1 08:37:48.777076 kubelet[2724]: I0701 08:37:48.777049 2724 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 1 08:37:48.777267 kubelet[2724]: I0701 08:37:48.777226 2724 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 1 08:37:48.777267 kubelet[2724]: I0701 08:37:48.777251 2724 state_mem.go:36] "Initialized new in-memory state store"
Jul 1 08:37:48.777463 kubelet[2724]: I0701 08:37:48.777391 2724 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 1 08:37:48.777463 kubelet[2724]: I0701 08:37:48.777401 2724 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 1 08:37:48.777463 kubelet[2724]: I0701 08:37:48.777418 2724 policy_none.go:49] "None policy: Start"
Jul 1 08:37:48.778004 kubelet[2724]: I0701 08:37:48.777983 2724 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 1 08:37:48.778004 kubelet[2724]: I0701 08:37:48.778005 2724 state_mem.go:35] "Initializing new in-memory state store"
Jul 1 08:37:48.778151 kubelet[2724]: I0701 08:37:48.778135 2724 state_mem.go:75] "Updated machine memory state"
Jul 1 08:37:48.782582 kubelet[2724]: I0701 08:37:48.782550 2724 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 1 08:37:48.782818 kubelet[2724]: I0701 08:37:48.782769 2724 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 1 08:37:48.782818 kubelet[2724]: I0701 08:37:48.782787 2724 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 1 08:37:48.783023 kubelet[2724]: I0701 08:37:48.782980 2724 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 1 08:37:48.885487 kubelet[2724]: I0701 08:37:48.885358 2724 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 1 08:37:48.932925 kubelet[2724]: I0701 08:37:48.932876 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost"
Jul 1 08:37:48.933174 kubelet[2724]: I0701 08:37:48.932908 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b05222695ddc5771c7ff7f344114108-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0b05222695ddc5771c7ff7f344114108\") " pod="kube-system/kube-apiserver-localhost"
Jul 1 08:37:48.933174 kubelet[2724]: I0701 08:37:48.932975 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b05222695ddc5771c7ff7f344114108-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0b05222695ddc5771c7ff7f344114108\") " pod="kube-system/kube-apiserver-localhost"
Jul 1 08:37:48.933174 kubelet[2724]: I0701 08:37:48.932997 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 1 08:37:48.933174 kubelet[2724]: I0701 08:37:48.933014 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 1 08:37:48.933174 kubelet[2724]: I0701 08:37:48.933031 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 1 08:37:48.933355 kubelet[2724]: I0701 08:37:48.933069 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b05222695ddc5771c7ff7f344114108-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0b05222695ddc5771c7ff7f344114108\") " pod="kube-system/kube-apiserver-localhost"
Jul 1 08:37:48.933355 kubelet[2724]: I0701 08:37:48.933112 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 1 08:37:48.933355 kubelet[2724]: I0701 08:37:48.933129 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 1 08:37:49.006183 kubelet[2724]: I0701 08:37:49.006124 2724 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Jul 1 08:37:49.007440 kubelet[2724]: I0701 08:37:49.006252 2724 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jul 1 08:37:49.750235 kubelet[2724]: I0701 08:37:49.750178 2724 apiserver.go:52] "Watching apiserver"
Jul 1 08:37:49.764931 kubelet[2724]: E0701 08:37:49.764880 2724 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 1 08:37:49.781564 kubelet[2724]: I0701 08:37:49.781489 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.781439164 podStartE2EDuration="1.781439164s" podCreationTimestamp="2025-07-01 08:37:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:37:49.781250344 +0000 UTC m=+1.114481135" watchObservedRunningTime="2025-07-01 08:37:49.781439164 +0000 UTC m=+1.114669945"
Jul 1 08:37:49.788527 kubelet[2724]: I0701 08:37:49.788458 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.788438094 podStartE2EDuration="1.788438094s" podCreationTimestamp="2025-07-01 08:37:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:37:49.788253061 +0000 UTC m=+1.121483852" watchObservedRunningTime="2025-07-01 08:37:49.788438094 +0000 UTC m=+1.121668875"
Jul 1 08:37:49.796046 kubelet[2724]: I0701 08:37:49.795979 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.795956836 podStartE2EDuration="1.795956836s" podCreationTimestamp="2025-07-01 08:37:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:37:49.795809605 +0000 UTC m=+1.129040397" watchObservedRunningTime="2025-07-01 08:37:49.795956836 +0000 UTC m=+1.129187628"
Jul 1 08:37:49.832903 kubelet[2724]: I0701 08:37:49.832852 2724 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 1 08:37:54.775338 kubelet[2724]: I0701 08:37:54.775287 2724 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 1 08:37:54.776054 kubelet[2724]: I0701 08:37:54.775847 2724 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 1 08:37:54.776119 containerd[1595]: time="2025-07-01T08:37:54.775628301Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 1 08:37:55.017138 update_engine[1564]: I20250701 08:37:55.016988 1564 update_attempter.cc:509] Updating boot flags...
Jul 1 08:37:55.426980 systemd[1]: Created slice kubepods-besteffort-podbb3181f7_c887_4677_beb3_ff5456cb52a7.slice - libcontainer container kubepods-besteffort-podbb3181f7_c887_4677_beb3_ff5456cb52a7.slice.
Jul 1 08:37:55.484580 kubelet[2724]: I0701 08:37:55.484476 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gksdh\" (UniqueName: \"kubernetes.io/projected/bb3181f7-c887-4677-beb3-ff5456cb52a7-kube-api-access-gksdh\") pod \"kube-proxy-m9rs5\" (UID: \"bb3181f7-c887-4677-beb3-ff5456cb52a7\") " pod="kube-system/kube-proxy-m9rs5"
Jul 1 08:37:55.484580 kubelet[2724]: I0701 08:37:55.484552 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bb3181f7-c887-4677-beb3-ff5456cb52a7-kube-proxy\") pod \"kube-proxy-m9rs5\" (UID: \"bb3181f7-c887-4677-beb3-ff5456cb52a7\") " pod="kube-system/kube-proxy-m9rs5"
Jul 1 08:37:55.484580 kubelet[2724]: I0701 08:37:55.484580 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb3181f7-c887-4677-beb3-ff5456cb52a7-xtables-lock\") pod \"kube-proxy-m9rs5\" (UID: \"bb3181f7-c887-4677-beb3-ff5456cb52a7\") " pod="kube-system/kube-proxy-m9rs5"
Jul 1 08:37:55.484580 kubelet[2724]: I0701 08:37:55.484599 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb3181f7-c887-4677-beb3-ff5456cb52a7-lib-modules\") pod \"kube-proxy-m9rs5\" (UID: \"bb3181f7-c887-4677-beb3-ff5456cb52a7\") " pod="kube-system/kube-proxy-m9rs5"
Jul 1 08:37:56.038754 containerd[1595]: time="2025-07-01T08:37:56.038696434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m9rs5,Uid:bb3181f7-c887-4677-beb3-ff5456cb52a7,Namespace:kube-system,Attempt:0,}"
Jul 1 08:37:56.429362 kubelet[2724]: E0701 08:37:56.429204 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 1 08:37:56.494831 containerd[1595]: time="2025-07-01T08:37:56.494772292Z" level=info msg="connecting to shim 8af236e53fcf7270f3458fb1f570bf1af0503410e053cecdf9dc4b4754e9f813" address="unix:///run/containerd/s/b4d454abf148b740f6ae6bc6d2b42752dd7b30307e9cd19fe0c06b77280628c2" namespace=k8s.io protocol=ttrpc version=3
Jul 1 08:37:56.495274 systemd[1]: Created slice kubepods-besteffort-pod8354bba5_4202_4893_8eed_eab96efd791a.slice - libcontainer container kubepods-besteffort-pod8354bba5_4202_4893_8eed_eab96efd791a.slice.
Jul 1 08:37:56.538854 systemd[1]: Started cri-containerd-8af236e53fcf7270f3458fb1f570bf1af0503410e053cecdf9dc4b4754e9f813.scope - libcontainer container 8af236e53fcf7270f3458fb1f570bf1af0503410e053cecdf9dc4b4754e9f813.
Jul 1 08:37:56.573017 containerd[1595]: time="2025-07-01T08:37:56.572968178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m9rs5,Uid:bb3181f7-c887-4677-beb3-ff5456cb52a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8af236e53fcf7270f3458fb1f570bf1af0503410e053cecdf9dc4b4754e9f813\""
Jul 1 08:37:56.574170 kubelet[2724]: E0701 08:37:56.573970 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 1 08:37:56.575922 containerd[1595]: time="2025-07-01T08:37:56.575885085Z" level=info msg="CreateContainer within sandbox \"8af236e53fcf7270f3458fb1f570bf1af0503410e053cecdf9dc4b4754e9f813\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 1 08:37:56.589300 kubelet[2724]: I0701 08:37:56.589161 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8354bba5-4202-4893-8eed-eab96efd791a-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-gg7bc\" (UID: \"8354bba5-4202-4893-8eed-eab96efd791a\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-gg7bc"
Jul 1 08:37:56.589300 kubelet[2724]: I0701 08:37:56.589214 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtzzk\" (UniqueName: \"kubernetes.io/projected/8354bba5-4202-4893-8eed-eab96efd791a-kube-api-access-wtzzk\") pod \"tigera-operator-5bf8dfcb4-gg7bc\" (UID: \"8354bba5-4202-4893-8eed-eab96efd791a\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-gg7bc"
Jul 1 08:37:56.589869 containerd[1595]: time="2025-07-01T08:37:56.589808016Z" level=info msg="Container 7334d5c5ec5e3be92d2e1428ea06380e4e51b5f48a1fd185691e2fd35d569508: CDI devices from CRI Config.CDIDevices: []"
Jul 1 08:37:56.596483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount145980805.mount: Deactivated successfully.
Jul 1 08:37:56.604165 containerd[1595]: time="2025-07-01T08:37:56.604071032Z" level=info msg="CreateContainer within sandbox \"8af236e53fcf7270f3458fb1f570bf1af0503410e053cecdf9dc4b4754e9f813\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7334d5c5ec5e3be92d2e1428ea06380e4e51b5f48a1fd185691e2fd35d569508\""
Jul 1 08:37:56.604862 containerd[1595]: time="2025-07-01T08:37:56.604833687Z" level=info msg="StartContainer for \"7334d5c5ec5e3be92d2e1428ea06380e4e51b5f48a1fd185691e2fd35d569508\""
Jul 1 08:37:56.606802 containerd[1595]: time="2025-07-01T08:37:56.606754005Z" level=info msg="connecting to shim 7334d5c5ec5e3be92d2e1428ea06380e4e51b5f48a1fd185691e2fd35d569508" address="unix:///run/containerd/s/b4d454abf148b740f6ae6bc6d2b42752dd7b30307e9cd19fe0c06b77280628c2" protocol=ttrpc version=3
Jul 1 08:37:56.630974 systemd[1]: Started cri-containerd-7334d5c5ec5e3be92d2e1428ea06380e4e51b5f48a1fd185691e2fd35d569508.scope - libcontainer container 7334d5c5ec5e3be92d2e1428ea06380e4e51b5f48a1fd185691e2fd35d569508.
Jul 1 08:37:56.677788 containerd[1595]: time="2025-07-01T08:37:56.677741674Z" level=info msg="StartContainer for \"7334d5c5ec5e3be92d2e1428ea06380e4e51b5f48a1fd185691e2fd35d569508\" returns successfully" Jul 1 08:37:56.771279 kubelet[2724]: E0701 08:37:56.771131 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:37:56.771647 kubelet[2724]: E0701 08:37:56.771621 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:37:56.800778 containerd[1595]: time="2025-07-01T08:37:56.800660702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-gg7bc,Uid:8354bba5-4202-4893-8eed-eab96efd791a,Namespace:tigera-operator,Attempt:0,}" Jul 1 08:37:56.889625 containerd[1595]: time="2025-07-01T08:37:56.889561032Z" level=info msg="connecting to shim 2f4498a8e3ad732a37147ab04f7c3e6e7a462709730324fd39ab744e11deb76e" address="unix:///run/containerd/s/789a21b728e04037132328013e2d888c4b16dd377fe2140a667fd8d831b151fa" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:37:56.932092 systemd[1]: Started cri-containerd-2f4498a8e3ad732a37147ab04f7c3e6e7a462709730324fd39ab744e11deb76e.scope - libcontainer container 2f4498a8e3ad732a37147ab04f7c3e6e7a462709730324fd39ab744e11deb76e. 
Jul 1 08:37:56.991275 containerd[1595]: time="2025-07-01T08:37:56.991208335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-gg7bc,Uid:8354bba5-4202-4893-8eed-eab96efd791a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2f4498a8e3ad732a37147ab04f7c3e6e7a462709730324fd39ab744e11deb76e\"" Jul 1 08:37:56.993629 containerd[1595]: time="2025-07-01T08:37:56.993595898Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 1 08:37:58.131602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount405085844.mount: Deactivated successfully. Jul 1 08:37:58.757411 kubelet[2724]: I0701 08:37:58.757305 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m9rs5" podStartSLOduration=3.7572780420000003 podStartE2EDuration="3.757278042s" podCreationTimestamp="2025-07-01 08:37:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:37:56.791013323 +0000 UTC m=+8.124244114" watchObservedRunningTime="2025-07-01 08:37:58.757278042 +0000 UTC m=+10.090508823" Jul 1 08:37:58.831304 containerd[1595]: time="2025-07-01T08:37:58.831222606Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:37:58.832248 containerd[1595]: time="2025-07-01T08:37:58.832223300Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 1 08:37:58.833869 containerd[1595]: time="2025-07-01T08:37:58.833793962Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:37:58.837297 containerd[1595]: time="2025-07-01T08:37:58.837222761Z" level=info msg="ImageCreate event 
name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:37:58.838131 containerd[1595]: time="2025-07-01T08:37:58.838074703Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.844433539s" Jul 1 08:37:58.838131 containerd[1595]: time="2025-07-01T08:37:58.838116111Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 1 08:37:58.841027 containerd[1595]: time="2025-07-01T08:37:58.840958520Z" level=info msg="CreateContainer within sandbox \"2f4498a8e3ad732a37147ab04f7c3e6e7a462709730324fd39ab744e11deb76e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 1 08:37:58.853133 containerd[1595]: time="2025-07-01T08:37:58.853074178Z" level=info msg="Container 6698d728333590971ef4d7a7e59da3ce7242d807c9d62d9b65b9add804cfb3ad: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:37:58.860718 containerd[1595]: time="2025-07-01T08:37:58.860662759Z" level=info msg="CreateContainer within sandbox \"2f4498a8e3ad732a37147ab04f7c3e6e7a462709730324fd39ab744e11deb76e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6698d728333590971ef4d7a7e59da3ce7242d807c9d62d9b65b9add804cfb3ad\"" Jul 1 08:37:58.861302 containerd[1595]: time="2025-07-01T08:37:58.861263515Z" level=info msg="StartContainer for \"6698d728333590971ef4d7a7e59da3ce7242d807c9d62d9b65b9add804cfb3ad\"" Jul 1 08:37:58.862118 containerd[1595]: time="2025-07-01T08:37:58.862091873Z" level=info msg="connecting to shim 
6698d728333590971ef4d7a7e59da3ce7242d807c9d62d9b65b9add804cfb3ad" address="unix:///run/containerd/s/789a21b728e04037132328013e2d888c4b16dd377fe2140a667fd8d831b151fa" protocol=ttrpc version=3 Jul 1 08:37:58.914843 systemd[1]: Started cri-containerd-6698d728333590971ef4d7a7e59da3ce7242d807c9d62d9b65b9add804cfb3ad.scope - libcontainer container 6698d728333590971ef4d7a7e59da3ce7242d807c9d62d9b65b9add804cfb3ad. Jul 1 08:37:58.951729 containerd[1595]: time="2025-07-01T08:37:58.951647199Z" level=info msg="StartContainer for \"6698d728333590971ef4d7a7e59da3ce7242d807c9d62d9b65b9add804cfb3ad\" returns successfully" Jul 1 08:37:59.791338 kubelet[2724]: I0701 08:37:59.791126 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-gg7bc" podStartSLOduration=1.94492296 podStartE2EDuration="3.791106652s" podCreationTimestamp="2025-07-01 08:37:56 +0000 UTC" firstStartedPulling="2025-07-01 08:37:56.992946608 +0000 UTC m=+8.326177389" lastFinishedPulling="2025-07-01 08:37:58.8391303 +0000 UTC m=+10.172361081" observedRunningTime="2025-07-01 08:37:59.791011502 +0000 UTC m=+11.124242303" watchObservedRunningTime="2025-07-01 08:37:59.791106652 +0000 UTC m=+11.124337433" Jul 1 08:38:01.393618 kubelet[2724]: E0701 08:38:01.393559 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:38:06.025363 sudo[1798]: pam_unix(sudo:session): session closed for user root Jul 1 08:38:06.027330 sshd[1797]: Connection closed by 10.0.0.1 port 37730 Jul 1 08:38:06.028167 sshd-session[1794]: pam_unix(sshd:session): session closed for user core Jul 1 08:38:06.031834 systemd[1]: sshd@6-10.0.0.80:22-10.0.0.1:37730.service: Deactivated successfully. Jul 1 08:38:06.034872 systemd[1]: session-7.scope: Deactivated successfully. 
Jul 1 08:38:06.035100 systemd[1]: session-7.scope: Consumed 5.427s CPU time, 225.9M memory peak. Jul 1 08:38:06.038107 systemd-logind[1560]: Session 7 logged out. Waiting for processes to exit. Jul 1 08:38:06.039239 systemd-logind[1560]: Removed session 7. Jul 1 08:38:08.247773 systemd[1]: Created slice kubepods-besteffort-pod1036f5ac_6863_47ef_8ed7_d08cd7515b21.slice - libcontainer container kubepods-besteffort-pod1036f5ac_6863_47ef_8ed7_d08cd7515b21.slice. Jul 1 08:38:08.259991 kubelet[2724]: I0701 08:38:08.259911 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1036f5ac-6863-47ef-8ed7-d08cd7515b21-tigera-ca-bundle\") pod \"calico-typha-7c4f65cdd6-vvvkr\" (UID: \"1036f5ac-6863-47ef-8ed7-d08cd7515b21\") " pod="calico-system/calico-typha-7c4f65cdd6-vvvkr" Jul 1 08:38:08.259991 kubelet[2724]: I0701 08:38:08.259991 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1036f5ac-6863-47ef-8ed7-d08cd7515b21-typha-certs\") pod \"calico-typha-7c4f65cdd6-vvvkr\" (UID: \"1036f5ac-6863-47ef-8ed7-d08cd7515b21\") " pod="calico-system/calico-typha-7c4f65cdd6-vvvkr" Jul 1 08:38:08.260516 kubelet[2724]: I0701 08:38:08.260030 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lf7h\" (UniqueName: \"kubernetes.io/projected/1036f5ac-6863-47ef-8ed7-d08cd7515b21-kube-api-access-5lf7h\") pod \"calico-typha-7c4f65cdd6-vvvkr\" (UID: \"1036f5ac-6863-47ef-8ed7-d08cd7515b21\") " pod="calico-system/calico-typha-7c4f65cdd6-vvvkr" Jul 1 08:38:08.516905 systemd[1]: Created slice kubepods-besteffort-podb53fe880_16fb_41cd_9e4f_89d0a1ccd8fa.slice - libcontainer container kubepods-besteffort-podb53fe880_16fb_41cd_9e4f_89d0a1ccd8fa.slice. 
Jul 1 08:38:08.552151 kubelet[2724]: E0701 08:38:08.552108 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:38:08.552807 containerd[1595]: time="2025-07-01T08:38:08.552743687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c4f65cdd6-vvvkr,Uid:1036f5ac-6863-47ef-8ed7-d08cd7515b21,Namespace:calico-system,Attempt:0,}" Jul 1 08:38:08.561843 kubelet[2724]: I0701 08:38:08.561800 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b53fe880-16fb-41cd-9e4f-89d0a1ccd8fa-cni-log-dir\") pod \"calico-node-pdbw8\" (UID: \"b53fe880-16fb-41cd-9e4f-89d0a1ccd8fa\") " pod="calico-system/calico-node-pdbw8" Jul 1 08:38:08.561843 kubelet[2724]: I0701 08:38:08.561838 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b53fe880-16fb-41cd-9e4f-89d0a1ccd8fa-flexvol-driver-host\") pod \"calico-node-pdbw8\" (UID: \"b53fe880-16fb-41cd-9e4f-89d0a1ccd8fa\") " pod="calico-system/calico-node-pdbw8" Jul 1 08:38:08.561997 kubelet[2724]: I0701 08:38:08.561858 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b53fe880-16fb-41cd-9e4f-89d0a1ccd8fa-lib-modules\") pod \"calico-node-pdbw8\" (UID: \"b53fe880-16fb-41cd-9e4f-89d0a1ccd8fa\") " pod="calico-system/calico-node-pdbw8" Jul 1 08:38:08.561997 kubelet[2724]: I0701 08:38:08.561873 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlkgl\" (UniqueName: \"kubernetes.io/projected/b53fe880-16fb-41cd-9e4f-89d0a1ccd8fa-kube-api-access-mlkgl\") pod \"calico-node-pdbw8\" (UID: \"b53fe880-16fb-41cd-9e4f-89d0a1ccd8fa\") " 
pod="calico-system/calico-node-pdbw8" Jul 1 08:38:08.561997 kubelet[2724]: I0701 08:38:08.561890 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b53fe880-16fb-41cd-9e4f-89d0a1ccd8fa-node-certs\") pod \"calico-node-pdbw8\" (UID: \"b53fe880-16fb-41cd-9e4f-89d0a1ccd8fa\") " pod="calico-system/calico-node-pdbw8" Jul 1 08:38:08.561997 kubelet[2724]: I0701 08:38:08.561948 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b53fe880-16fb-41cd-9e4f-89d0a1ccd8fa-policysync\") pod \"calico-node-pdbw8\" (UID: \"b53fe880-16fb-41cd-9e4f-89d0a1ccd8fa\") " pod="calico-system/calico-node-pdbw8" Jul 1 08:38:08.561997 kubelet[2724]: I0701 08:38:08.561983 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b53fe880-16fb-41cd-9e4f-89d0a1ccd8fa-var-run-calico\") pod \"calico-node-pdbw8\" (UID: \"b53fe880-16fb-41cd-9e4f-89d0a1ccd8fa\") " pod="calico-system/calico-node-pdbw8" Jul 1 08:38:08.562133 kubelet[2724]: I0701 08:38:08.561999 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b53fe880-16fb-41cd-9e4f-89d0a1ccd8fa-xtables-lock\") pod \"calico-node-pdbw8\" (UID: \"b53fe880-16fb-41cd-9e4f-89d0a1ccd8fa\") " pod="calico-system/calico-node-pdbw8" Jul 1 08:38:08.562133 kubelet[2724]: I0701 08:38:08.562016 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b53fe880-16fb-41cd-9e4f-89d0a1ccd8fa-cni-net-dir\") pod \"calico-node-pdbw8\" (UID: \"b53fe880-16fb-41cd-9e4f-89d0a1ccd8fa\") " pod="calico-system/calico-node-pdbw8" Jul 1 08:38:08.562133 kubelet[2724]: I0701 08:38:08.562030 
2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b53fe880-16fb-41cd-9e4f-89d0a1ccd8fa-var-lib-calico\") pod \"calico-node-pdbw8\" (UID: \"b53fe880-16fb-41cd-9e4f-89d0a1ccd8fa\") " pod="calico-system/calico-node-pdbw8" Jul 1 08:38:08.562133 kubelet[2724]: I0701 08:38:08.562073 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b53fe880-16fb-41cd-9e4f-89d0a1ccd8fa-cni-bin-dir\") pod \"calico-node-pdbw8\" (UID: \"b53fe880-16fb-41cd-9e4f-89d0a1ccd8fa\") " pod="calico-system/calico-node-pdbw8" Jul 1 08:38:08.562133 kubelet[2724]: I0701 08:38:08.562105 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b53fe880-16fb-41cd-9e4f-89d0a1ccd8fa-tigera-ca-bundle\") pod \"calico-node-pdbw8\" (UID: \"b53fe880-16fb-41cd-9e4f-89d0a1ccd8fa\") " pod="calico-system/calico-node-pdbw8" Jul 1 08:38:08.822007 containerd[1595]: time="2025-07-01T08:38:08.821949546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pdbw8,Uid:b53fe880-16fb-41cd-9e4f-89d0a1ccd8fa,Namespace:calico-system,Attempt:0,}" Jul 1 08:38:08.864648 kubelet[2724]: E0701 08:38:08.864545 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prnpp" podUID="81a57d7c-7149-4271-9274-afe15b367e85" Jul 1 08:38:08.908889 containerd[1595]: time="2025-07-01T08:38:08.908830642Z" level=info msg="connecting to shim 2185efd5b2678eedb8b9fe2cf4529795a3c169066f1ffd4d3c5efefdc9b6db6c" address="unix:///run/containerd/s/049072ce9d12775398ae6bc896212427901bb755993cc3abbe415d0c8c609d3b" 
namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:38:08.912100 containerd[1595]: time="2025-07-01T08:38:08.912051591Z" level=info msg="connecting to shim 11d30d8c1ce88e546cc72d3926e159056ca8cdad9dd7a25ec7edfb5c1738d3d2" address="unix:///run/containerd/s/a5680ba6b10ad5ede9577a4827e41977f913102154e2f194dd7d14e91e6877fb" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:38:08.946854 systemd[1]: Started cri-containerd-11d30d8c1ce88e546cc72d3926e159056ca8cdad9dd7a25ec7edfb5c1738d3d2.scope - libcontainer container 11d30d8c1ce88e546cc72d3926e159056ca8cdad9dd7a25ec7edfb5c1738d3d2. Jul 1 08:38:08.949209 systemd[1]: Started cri-containerd-2185efd5b2678eedb8b9fe2cf4529795a3c169066f1ffd4d3c5efefdc9b6db6c.scope - libcontainer container 2185efd5b2678eedb8b9fe2cf4529795a3c169066f1ffd4d3c5efefdc9b6db6c. Jul 1 08:38:08.957398 kubelet[2724]: E0701 08:38:08.957364 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:08.957398 kubelet[2724]: W0701 08:38:08.957388 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:08.957569 kubelet[2724]: E0701 08:38:08.957424 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:38:08.968487 kubelet[2724]: I0701 08:38:08.968167 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/81a57d7c-7149-4271-9274-afe15b367e85-kubelet-dir\") pod \"csi-node-driver-prnpp\" (UID: \"81a57d7c-7149-4271-9274-afe15b367e85\") " pod="calico-system/csi-node-driver-prnpp" Jul 1 08:38:08.968487 kubelet[2724]: E0701 08:38:08.968383 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:08.968487 kubelet[2724]: W0701 08:38:08.968392 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:08.968487 kubelet[2724]: E0701 08:38:08.968414 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:38:08.968487 kubelet[2724]: I0701 08:38:08.968426 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/81a57d7c-7149-4271-9274-afe15b367e85-socket-dir\") pod \"csi-node-driver-prnpp\" (UID: \"81a57d7c-7149-4271-9274-afe15b367e85\") " pod="calico-system/csi-node-driver-prnpp" Jul 1 08:38:08.969004 kubelet[2724]: E0701 08:38:08.968708 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:08.969004 kubelet[2724]: W0701 08:38:08.968720 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:08.969004 kubelet[2724]: E0701 08:38:08.968732 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:38:08.969004 kubelet[2724]: I0701 08:38:08.968746 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k22s4\" (UniqueName: \"kubernetes.io/projected/81a57d7c-7149-4271-9274-afe15b367e85-kube-api-access-k22s4\") pod \"csi-node-driver-prnpp\" (UID: \"81a57d7c-7149-4271-9274-afe15b367e85\") " pod="calico-system/csi-node-driver-prnpp" Jul 1 08:38:08.969004 kubelet[2724]: E0701 08:38:08.968961 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:08.969004 kubelet[2724]: W0701 08:38:08.968970 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:08.969004 kubelet[2724]: E0701 08:38:08.968991 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:38:08.969004 kubelet[2724]: I0701 08:38:08.969005 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/81a57d7c-7149-4271-9274-afe15b367e85-varrun\") pod \"csi-node-driver-prnpp\" (UID: \"81a57d7c-7149-4271-9274-afe15b367e85\") " pod="calico-system/csi-node-driver-prnpp" Jul 1 08:38:08.969350 kubelet[2724]: E0701 08:38:08.969301 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:08.969350 kubelet[2724]: W0701 08:38:08.969316 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:08.969350 kubelet[2724]: E0701 08:38:08.969340 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:38:08.969464 kubelet[2724]: I0701 08:38:08.969377 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/81a57d7c-7149-4271-9274-afe15b367e85-registration-dir\") pod \"csi-node-driver-prnpp\" (UID: \"81a57d7c-7149-4271-9274-afe15b367e85\") " pod="calico-system/csi-node-driver-prnpp" Jul 1 08:38:08.970027 kubelet[2724]: E0701 08:38:08.969955 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:08.970027 kubelet[2724]: W0701 08:38:08.969972 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:08.970098 kubelet[2724]: E0701 08:38:08.970039 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:38:08.970214 kubelet[2724]: E0701 08:38:08.970183 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:08.970214 kubelet[2724]: W0701 08:38:08.970199 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:08.970272 kubelet[2724]: E0701 08:38:08.970226 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:38:08.971155 kubelet[2724]: E0701 08:38:08.971132 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:08.971155 kubelet[2724]: W0701 08:38:08.971150 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:08.971332 kubelet[2724]: E0701 08:38:08.971239 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:38:08.971482 kubelet[2724]: E0701 08:38:08.971462 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:08.971482 kubelet[2724]: W0701 08:38:08.971477 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:08.971588 kubelet[2724]: E0701 08:38:08.971563 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:38:08.971846 kubelet[2724]: E0701 08:38:08.971833 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:08.971901 kubelet[2724]: W0701 08:38:08.971890 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:08.972022 kubelet[2724]: E0701 08:38:08.971998 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:38:08.972313 kubelet[2724]: E0701 08:38:08.972279 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:08.972313 kubelet[2724]: W0701 08:38:08.972290 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:08.972313 kubelet[2724]: E0701 08:38:08.972300 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:38:08.972839 kubelet[2724]: E0701 08:38:08.972802 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:08.972839 kubelet[2724]: W0701 08:38:08.972815 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:08.972839 kubelet[2724]: E0701 08:38:08.972825 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:38:08.973210 kubelet[2724]: E0701 08:38:08.973176 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:08.973210 kubelet[2724]: W0701 08:38:08.973187 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:08.973210 kubelet[2724]: E0701 08:38:08.973197 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:38:08.973603 kubelet[2724]: E0701 08:38:08.973544 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:08.973603 kubelet[2724]: W0701 08:38:08.973556 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:08.973603 kubelet[2724]: E0701 08:38:08.973566 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:38:08.973992 kubelet[2724]: E0701 08:38:08.973947 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:08.974079 kubelet[2724]: W0701 08:38:08.974064 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:08.974132 kubelet[2724]: E0701 08:38:08.974121 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:38:09.024195 containerd[1595]: time="2025-07-01T08:38:09.024145126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pdbw8,Uid:b53fe880-16fb-41cd-9e4f-89d0a1ccd8fa,Namespace:calico-system,Attempt:0,} returns sandbox id \"2185efd5b2678eedb8b9fe2cf4529795a3c169066f1ffd4d3c5efefdc9b6db6c\"" Jul 1 08:38:09.025697 containerd[1595]: time="2025-07-01T08:38:09.025612020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 1 08:38:09.035121 containerd[1595]: time="2025-07-01T08:38:09.035086796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c4f65cdd6-vvvkr,Uid:1036f5ac-6863-47ef-8ed7-d08cd7515b21,Namespace:calico-system,Attempt:0,} returns sandbox id \"11d30d8c1ce88e546cc72d3926e159056ca8cdad9dd7a25ec7edfb5c1738d3d2\"" Jul 1 08:38:09.035765 kubelet[2724]: E0701 08:38:09.035716 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:38:09.070328 kubelet[2724]: E0701 08:38:09.070292 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:09.070328 kubelet[2724]: W0701 08:38:09.070324 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:09.070509 kubelet[2724]: E0701 08:38:09.070353 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:38:09.070733 kubelet[2724]: E0701 08:38:09.070691 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:09.070733 kubelet[2724]: W0701 08:38:09.070716 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:09.071088 kubelet[2724]: E0701 08:38:09.070750 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:38:09.071088 kubelet[2724]: E0701 08:38:09.071050 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:09.071088 kubelet[2724]: W0701 08:38:09.071058 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:09.071088 kubelet[2724]: E0701 08:38:09.071073 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:38:09.071438 kubelet[2724]: E0701 08:38:09.071417 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:09.071438 kubelet[2724]: W0701 08:38:09.071435 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:09.071529 kubelet[2724]: E0701 08:38:09.071455 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:38:09.071773 kubelet[2724]: E0701 08:38:09.071753 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:09.071773 kubelet[2724]: W0701 08:38:09.071768 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:09.071868 kubelet[2724]: E0701 08:38:09.071791 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:38:09.072118 kubelet[2724]: E0701 08:38:09.072032 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:09.072118 kubelet[2724]: W0701 08:38:09.072050 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:09.072118 kubelet[2724]: E0701 08:38:09.072071 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:38:09.072844 kubelet[2724]: E0701 08:38:09.072290 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:09.072844 kubelet[2724]: W0701 08:38:09.072347 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:09.072844 kubelet[2724]: E0701 08:38:09.072381 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:38:09.072844 kubelet[2724]: E0701 08:38:09.072535 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:09.072844 kubelet[2724]: W0701 08:38:09.072544 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:09.072844 kubelet[2724]: E0701 08:38:09.072789 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:09.072844 kubelet[2724]: W0701 08:38:09.072797 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:09.073046 kubelet[2724]: E0701 08:38:09.072890 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:38:09.073046 kubelet[2724]: E0701 08:38:09.072943 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:09.073046 kubelet[2724]: W0701 08:38:09.072950 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:09.073046 kubelet[2724]: E0701 08:38:09.072968 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:38:09.073046 kubelet[2724]: E0701 08:38:09.073032 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:38:09.073224 kubelet[2724]: E0701 08:38:09.073086 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:09.073224 kubelet[2724]: W0701 08:38:09.073093 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:09.073224 kubelet[2724]: E0701 08:38:09.073151 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:38:09.073293 kubelet[2724]: E0701 08:38:09.073279 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:09.073293 kubelet[2724]: W0701 08:38:09.073288 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:09.073342 kubelet[2724]: E0701 08:38:09.073304 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:38:09.073553 kubelet[2724]: E0701 08:38:09.073537 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:09.073701 kubelet[2724]: W0701 08:38:09.073617 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:09.073701 kubelet[2724]: E0701 08:38:09.073654 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:38:09.074041 kubelet[2724]: E0701 08:38:09.074020 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:09.074041 kubelet[2724]: W0701 08:38:09.074035 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:09.074178 kubelet[2724]: E0701 08:38:09.074139 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:38:09.075106 kubelet[2724]: E0701 08:38:09.075084 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:09.075106 kubelet[2724]: W0701 08:38:09.075099 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:09.075432 kubelet[2724]: E0701 08:38:09.075411 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:38:09.075729 kubelet[2724]: E0701 08:38:09.075702 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:09.075829 kubelet[2724]: W0701 08:38:09.075809 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:09.075975 kubelet[2724]: E0701 08:38:09.075939 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:38:09.076041 kubelet[2724]: E0701 08:38:09.076007 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:09.076041 kubelet[2724]: W0701 08:38:09.076015 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:09.076132 kubelet[2724]: E0701 08:38:09.076101 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:38:09.076300 kubelet[2724]: E0701 08:38:09.076220 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:09.076300 kubelet[2724]: W0701 08:38:09.076229 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:09.076300 kubelet[2724]: E0701 08:38:09.076257 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:38:09.076641 kubelet[2724]: E0701 08:38:09.076624 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:09.076641 kubelet[2724]: W0701 08:38:09.076635 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:09.076768 kubelet[2724]: E0701 08:38:09.076665 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:38:09.076857 kubelet[2724]: E0701 08:38:09.076840 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:09.076857 kubelet[2724]: W0701 08:38:09.076851 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:09.076923 kubelet[2724]: E0701 08:38:09.076862 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:38:09.077191 kubelet[2724]: E0701 08:38:09.077172 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:09.077191 kubelet[2724]: W0701 08:38:09.077188 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:09.077252 kubelet[2724]: E0701 08:38:09.077204 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:38:09.077435 kubelet[2724]: E0701 08:38:09.077415 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:09.077435 kubelet[2724]: W0701 08:38:09.077426 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:09.077526 kubelet[2724]: E0701 08:38:09.077441 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:38:09.077667 kubelet[2724]: E0701 08:38:09.077648 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:09.077667 kubelet[2724]: W0701 08:38:09.077665 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:09.077924 kubelet[2724]: E0701 08:38:09.077707 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:38:09.077997 kubelet[2724]: E0701 08:38:09.077978 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:09.077997 kubelet[2724]: W0701 08:38:09.077990 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:09.078070 kubelet[2724]: E0701 08:38:09.078003 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 1 08:38:09.078220 kubelet[2724]: E0701 08:38:09.078199 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:09.078220 kubelet[2724]: W0701 08:38:09.078213 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:09.078295 kubelet[2724]: E0701 08:38:09.078224 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:38:09.087004 kubelet[2724]: E0701 08:38:09.086976 2724 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 1 08:38:09.087004 kubelet[2724]: W0701 08:38:09.086996 2724 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 1 08:38:09.087135 kubelet[2724]: E0701 08:38:09.087021 2724 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 1 08:38:10.674175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3780747967.mount: Deactivated successfully. 
Jul 1 08:38:10.747014 kubelet[2724]: E0701 08:38:10.746924 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prnpp" podUID="81a57d7c-7149-4271-9274-afe15b367e85" Jul 1 08:38:10.830508 containerd[1595]: time="2025-07-01T08:38:10.830437065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:38:10.831171 containerd[1595]: time="2025-07-01T08:38:10.831130441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5939797" Jul 1 08:38:10.832258 containerd[1595]: time="2025-07-01T08:38:10.832216406Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:38:10.834093 containerd[1595]: time="2025-07-01T08:38:10.834046652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:38:10.834844 containerd[1595]: time="2025-07-01T08:38:10.834790793Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.8091395s" Jul 1 08:38:10.834844 containerd[1595]: time="2025-07-01T08:38:10.834836049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" 
returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 1 08:38:10.835752 containerd[1595]: time="2025-07-01T08:38:10.835725564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 1 08:38:10.837062 containerd[1595]: time="2025-07-01T08:38:10.837030300Z" level=info msg="CreateContainer within sandbox \"2185efd5b2678eedb8b9fe2cf4529795a3c169066f1ffd4d3c5efefdc9b6db6c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 1 08:38:10.849456 containerd[1595]: time="2025-07-01T08:38:10.849400755Z" level=info msg="Container 3ae51e6621ab449747bbdc8b9365622065c7b9f131b366de81650ba823a67fc7: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:38:10.859165 containerd[1595]: time="2025-07-01T08:38:10.859090090Z" level=info msg="CreateContainer within sandbox \"2185efd5b2678eedb8b9fe2cf4529795a3c169066f1ffd4d3c5efefdc9b6db6c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3ae51e6621ab449747bbdc8b9365622065c7b9f131b366de81650ba823a67fc7\"" Jul 1 08:38:10.860542 containerd[1595]: time="2025-07-01T08:38:10.859538784Z" level=info msg="StartContainer for \"3ae51e6621ab449747bbdc8b9365622065c7b9f131b366de81650ba823a67fc7\"" Jul 1 08:38:10.861255 containerd[1595]: time="2025-07-01T08:38:10.861220652Z" level=info msg="connecting to shim 3ae51e6621ab449747bbdc8b9365622065c7b9f131b366de81650ba823a67fc7" address="unix:///run/containerd/s/049072ce9d12775398ae6bc896212427901bb755993cc3abbe415d0c8c609d3b" protocol=ttrpc version=3 Jul 1 08:38:10.887857 systemd[1]: Started cri-containerd-3ae51e6621ab449747bbdc8b9365622065c7b9f131b366de81650ba823a67fc7.scope - libcontainer container 3ae51e6621ab449747bbdc8b9365622065c7b9f131b366de81650ba823a67fc7. 
Jul 1 08:38:10.946053 containerd[1595]: time="2025-07-01T08:38:10.945946052Z" level=info msg="StartContainer for \"3ae51e6621ab449747bbdc8b9365622065c7b9f131b366de81650ba823a67fc7\" returns successfully" Jul 1 08:38:10.959163 systemd[1]: cri-containerd-3ae51e6621ab449747bbdc8b9365622065c7b9f131b366de81650ba823a67fc7.scope: Deactivated successfully. Jul 1 08:38:10.962937 containerd[1595]: time="2025-07-01T08:38:10.962790160Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ae51e6621ab449747bbdc8b9365622065c7b9f131b366de81650ba823a67fc7\" id:\"3ae51e6621ab449747bbdc8b9365622065c7b9f131b366de81650ba823a67fc7\" pid:3359 exited_at:{seconds:1751359090 nanos:962258117}" Jul 1 08:38:10.962937 containerd[1595]: time="2025-07-01T08:38:10.962857296Z" level=info msg="received exit event container_id:\"3ae51e6621ab449747bbdc8b9365622065c7b9f131b366de81650ba823a67fc7\" id:\"3ae51e6621ab449747bbdc8b9365622065c7b9f131b366de81650ba823a67fc7\" pid:3359 exited_at:{seconds:1751359090 nanos:962258117}" Jul 1 08:38:11.645369 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ae51e6621ab449747bbdc8b9365622065c7b9f131b366de81650ba823a67fc7-rootfs.mount: Deactivated successfully. 
Jul 1 08:38:12.746321 kubelet[2724]: E0701 08:38:12.746259 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prnpp" podUID="81a57d7c-7149-4271-9274-afe15b367e85" Jul 1 08:38:14.054298 containerd[1595]: time="2025-07-01T08:38:14.054206245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:38:14.055355 containerd[1595]: time="2025-07-01T08:38:14.055324178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33740523" Jul 1 08:38:14.056497 containerd[1595]: time="2025-07-01T08:38:14.056453082Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:38:14.058632 containerd[1595]: time="2025-07-01T08:38:14.058543675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:38:14.059116 containerd[1595]: time="2025-07-01T08:38:14.059070756Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 3.223318562s" Jul 1 08:38:14.059116 containerd[1595]: time="2025-07-01T08:38:14.059103247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference 
\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 1 08:38:14.060332 containerd[1595]: time="2025-07-01T08:38:14.060299908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 1 08:38:14.075412 containerd[1595]: time="2025-07-01T08:38:14.072630545Z" level=info msg="CreateContainer within sandbox \"11d30d8c1ce88e546cc72d3926e159056ca8cdad9dd7a25ec7edfb5c1738d3d2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 1 08:38:14.085330 containerd[1595]: time="2025-07-01T08:38:14.085258871Z" level=info msg="Container 0fdc7b37b56599d6b7b673c6b8b7731e9745546769aaec3e7dade2290b114e12: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:38:14.095629 containerd[1595]: time="2025-07-01T08:38:14.095571381Z" level=info msg="CreateContainer within sandbox \"11d30d8c1ce88e546cc72d3926e159056ca8cdad9dd7a25ec7edfb5c1738d3d2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0fdc7b37b56599d6b7b673c6b8b7731e9745546769aaec3e7dade2290b114e12\"" Jul 1 08:38:14.096427 containerd[1595]: time="2025-07-01T08:38:14.096391644Z" level=info msg="StartContainer for \"0fdc7b37b56599d6b7b673c6b8b7731e9745546769aaec3e7dade2290b114e12\"" Jul 1 08:38:14.097660 containerd[1595]: time="2025-07-01T08:38:14.097578146Z" level=info msg="connecting to shim 0fdc7b37b56599d6b7b673c6b8b7731e9745546769aaec3e7dade2290b114e12" address="unix:///run/containerd/s/a5680ba6b10ad5ede9577a4827e41977f913102154e2f194dd7d14e91e6877fb" protocol=ttrpc version=3 Jul 1 08:38:14.124932 systemd[1]: Started cri-containerd-0fdc7b37b56599d6b7b673c6b8b7731e9745546769aaec3e7dade2290b114e12.scope - libcontainer container 0fdc7b37b56599d6b7b673c6b8b7731e9745546769aaec3e7dade2290b114e12. 
Jul 1 08:38:14.186714 containerd[1595]: time="2025-07-01T08:38:14.186599563Z" level=info msg="StartContainer for \"0fdc7b37b56599d6b7b673c6b8b7731e9745546769aaec3e7dade2290b114e12\" returns successfully" Jul 1 08:38:14.748306 kubelet[2724]: E0701 08:38:14.748182 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prnpp" podUID="81a57d7c-7149-4271-9274-afe15b367e85" Jul 1 08:38:14.824111 kubelet[2724]: E0701 08:38:14.824023 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:38:15.826098 kubelet[2724]: I0701 08:38:15.826052 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 1 08:38:15.826731 kubelet[2724]: E0701 08:38:15.826509 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:38:16.746913 kubelet[2724]: E0701 08:38:16.746797 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prnpp" podUID="81a57d7c-7149-4271-9274-afe15b367e85" Jul 1 08:38:18.747166 kubelet[2724]: E0701 08:38:18.747047 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prnpp" podUID="81a57d7c-7149-4271-9274-afe15b367e85" Jul 1 08:38:20.747206 kubelet[2724]: E0701 
08:38:20.747130 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prnpp" podUID="81a57d7c-7149-4271-9274-afe15b367e85" Jul 1 08:38:21.782760 containerd[1595]: time="2025-07-01T08:38:21.782614177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:38:21.783879 containerd[1595]: time="2025-07-01T08:38:21.783833639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 1 08:38:21.786791 containerd[1595]: time="2025-07-01T08:38:21.786650752Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:38:21.791721 containerd[1595]: time="2025-07-01T08:38:21.791593429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:38:21.792452 containerd[1595]: time="2025-07-01T08:38:21.792284096Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 7.731949202s" Jul 1 08:38:21.792452 containerd[1595]: time="2025-07-01T08:38:21.792339301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 1 08:38:21.795237 containerd[1595]: 
time="2025-07-01T08:38:21.795170711Z" level=info msg="CreateContainer within sandbox \"2185efd5b2678eedb8b9fe2cf4529795a3c169066f1ffd4d3c5efefdc9b6db6c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 1 08:38:22.069311 containerd[1595]: time="2025-07-01T08:38:22.069117876Z" level=info msg="Container b155cc1d75bad3126338af1ed2d66322917ab25899d5bf34cbd91de8b391a2fb: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:38:22.161257 containerd[1595]: time="2025-07-01T08:38:22.161180724Z" level=info msg="CreateContainer within sandbox \"2185efd5b2678eedb8b9fe2cf4529795a3c169066f1ffd4d3c5efefdc9b6db6c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b155cc1d75bad3126338af1ed2d66322917ab25899d5bf34cbd91de8b391a2fb\"" Jul 1 08:38:22.161965 containerd[1595]: time="2025-07-01T08:38:22.161912729Z" level=info msg="StartContainer for \"b155cc1d75bad3126338af1ed2d66322917ab25899d5bf34cbd91de8b391a2fb\"" Jul 1 08:38:22.164463 containerd[1595]: time="2025-07-01T08:38:22.164277242Z" level=info msg="connecting to shim b155cc1d75bad3126338af1ed2d66322917ab25899d5bf34cbd91de8b391a2fb" address="unix:///run/containerd/s/049072ce9d12775398ae6bc896212427901bb755993cc3abbe415d0c8c609d3b" protocol=ttrpc version=3 Jul 1 08:38:22.187952 systemd[1]: Started cri-containerd-b155cc1d75bad3126338af1ed2d66322917ab25899d5bf34cbd91de8b391a2fb.scope - libcontainer container b155cc1d75bad3126338af1ed2d66322917ab25899d5bf34cbd91de8b391a2fb. 
Jul 1 08:38:22.683974 containerd[1595]: time="2025-07-01T08:38:22.683911298Z" level=info msg="StartContainer for \"b155cc1d75bad3126338af1ed2d66322917ab25899d5bf34cbd91de8b391a2fb\" returns successfully" Jul 1 08:38:22.746888 kubelet[2724]: E0701 08:38:22.746734 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-prnpp" podUID="81a57d7c-7149-4271-9274-afe15b367e85" Jul 1 08:38:23.059348 kubelet[2724]: I0701 08:38:23.059264 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c4f65cdd6-vvvkr" podStartSLOduration=10.035288468 podStartE2EDuration="15.05924274s" podCreationTimestamp="2025-07-01 08:38:08 +0000 UTC" firstStartedPulling="2025-07-01 08:38:09.036048428 +0000 UTC m=+20.369279209" lastFinishedPulling="2025-07-01 08:38:14.06000269 +0000 UTC m=+25.393233481" observedRunningTime="2025-07-01 08:38:14.837945996 +0000 UTC m=+26.171176777" watchObservedRunningTime="2025-07-01 08:38:23.05924274 +0000 UTC m=+34.392473511" Jul 1 08:38:23.844540 containerd[1595]: time="2025-07-01T08:38:23.844453463Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 1 08:38:23.847944 systemd[1]: cri-containerd-b155cc1d75bad3126338af1ed2d66322917ab25899d5bf34cbd91de8b391a2fb.scope: Deactivated successfully. Jul 1 08:38:23.848502 systemd[1]: cri-containerd-b155cc1d75bad3126338af1ed2d66322917ab25899d5bf34cbd91de8b391a2fb.scope: Consumed 681ms CPU time, 180.3M memory peak, 1.1M read from disk, 171.2M written to disk. 
Jul 1 08:38:23.850178 containerd[1595]: time="2025-07-01T08:38:23.850120158Z" level=info msg="received exit event container_id:\"b155cc1d75bad3126338af1ed2d66322917ab25899d5bf34cbd91de8b391a2fb\" id:\"b155cc1d75bad3126338af1ed2d66322917ab25899d5bf34cbd91de8b391a2fb\" pid:3463 exited_at:{seconds:1751359103 nanos:849728282}" Jul 1 08:38:23.850310 kubelet[2724]: I0701 08:38:23.850288 2724 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 1 08:38:23.852534 containerd[1595]: time="2025-07-01T08:38:23.851900022Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b155cc1d75bad3126338af1ed2d66322917ab25899d5bf34cbd91de8b391a2fb\" id:\"b155cc1d75bad3126338af1ed2d66322917ab25899d5bf34cbd91de8b391a2fb\" pid:3463 exited_at:{seconds:1751359103 nanos:849728282}" Jul 1 08:38:23.892907 systemd[1]: Created slice kubepods-burstable-pod68c217b9_4f4d_48d1_bb9a_d276adb2fb78.slice - libcontainer container kubepods-burstable-pod68c217b9_4f4d_48d1_bb9a_d276adb2fb78.slice. Jul 1 08:38:23.902170 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b155cc1d75bad3126338af1ed2d66322917ab25899d5bf34cbd91de8b391a2fb-rootfs.mount: Deactivated successfully. Jul 1 08:38:23.919820 systemd[1]: Created slice kubepods-besteffort-pod3a670ab3_ff29_4888_96ee_f1733e954198.slice - libcontainer container kubepods-besteffort-pod3a670ab3_ff29_4888_96ee_f1733e954198.slice. Jul 1 08:38:23.930426 systemd[1]: Created slice kubepods-besteffort-pod032f515b_c70e_4420_9aba_ae73ba857da9.slice - libcontainer container kubepods-besteffort-pod032f515b_c70e_4420_9aba_ae73ba857da9.slice. Jul 1 08:38:23.937591 systemd[1]: Created slice kubepods-besteffort-pod9c8054ba_10de_47da_9909_9fedeb482d2a.slice - libcontainer container kubepods-besteffort-pod9c8054ba_10de_47da_9909_9fedeb482d2a.slice. 
Jul 1 08:38:23.943091 systemd[1]: Created slice kubepods-burstable-pod96d984ee_a0f3_4d6a_a438_b9f5756b5666.slice - libcontainer container kubepods-burstable-pod96d984ee_a0f3_4d6a_a438_b9f5756b5666.slice. Jul 1 08:38:23.949505 systemd[1]: Created slice kubepods-besteffort-pod041baa15_5621_4055_a53c_77c22a6b659e.slice - libcontainer container kubepods-besteffort-pod041baa15_5621_4055_a53c_77c22a6b659e.slice. Jul 1 08:38:23.956314 systemd[1]: Created slice kubepods-besteffort-pod29de7199_a916_4327_9fb3_e361bbc61a28.slice - libcontainer container kubepods-besteffort-pod29de7199_a916_4327_9fb3_e361bbc61a28.slice. Jul 1 08:38:23.961514 systemd[1]: Created slice kubepods-besteffort-pod956b37b9_1ba9_40e9_be7f_b28196b02c8c.slice - libcontainer container kubepods-besteffort-pod956b37b9_1ba9_40e9_be7f_b28196b02c8c.slice. Jul 1 08:38:23.991847 kubelet[2724]: I0701 08:38:23.991772 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqxd4\" (UniqueName: \"kubernetes.io/projected/68c217b9-4f4d-48d1-bb9a-d276adb2fb78-kube-api-access-xqxd4\") pod \"coredns-7c65d6cfc9-wtpdv\" (UID: \"68c217b9-4f4d-48d1-bb9a-d276adb2fb78\") " pod="kube-system/coredns-7c65d6cfc9-wtpdv" Jul 1 08:38:23.992102 kubelet[2724]: I0701 08:38:23.991908 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsnwm\" (UniqueName: \"kubernetes.io/projected/3a670ab3-ff29-4888-96ee-f1733e954198-kube-api-access-nsnwm\") pod \"calico-kube-controllers-6cc74d4c7f-4blk4\" (UID: \"3a670ab3-ff29-4888-96ee-f1733e954198\") " pod="calico-system/calico-kube-controllers-6cc74d4c7f-4blk4" Jul 1 08:38:23.992102 kubelet[2724]: I0701 08:38:23.991950 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/032f515b-c70e-4420-9aba-ae73ba857da9-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-fg4hg\" (UID: 
\"032f515b-c70e-4420-9aba-ae73ba857da9\") " pod="calico-system/goldmane-58fd7646b9-fg4hg" Jul 1 08:38:23.992102 kubelet[2724]: I0701 08:38:23.991976 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a670ab3-ff29-4888-96ee-f1733e954198-tigera-ca-bundle\") pod \"calico-kube-controllers-6cc74d4c7f-4blk4\" (UID: \"3a670ab3-ff29-4888-96ee-f1733e954198\") " pod="calico-system/calico-kube-controllers-6cc74d4c7f-4blk4" Jul 1 08:38:23.992102 kubelet[2724]: I0701 08:38:23.992002 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29de7199-a916-4327-9fb3-e361bbc61a28-whisker-ca-bundle\") pod \"whisker-5bf6fcc594-hj7vt\" (UID: \"29de7199-a916-4327-9fb3-e361bbc61a28\") " pod="calico-system/whisker-5bf6fcc594-hj7vt" Jul 1 08:38:23.992102 kubelet[2724]: I0701 08:38:23.992025 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/041baa15-5621-4055-a53c-77c22a6b659e-calico-apiserver-certs\") pod \"calico-apiserver-56f667675c-mzg28\" (UID: \"041baa15-5621-4055-a53c-77c22a6b659e\") " pod="calico-apiserver/calico-apiserver-56f667675c-mzg28" Jul 1 08:38:23.992294 kubelet[2724]: I0701 08:38:23.992061 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms2lk\" (UniqueName: \"kubernetes.io/projected/29de7199-a916-4327-9fb3-e361bbc61a28-kube-api-access-ms2lk\") pod \"whisker-5bf6fcc594-hj7vt\" (UID: \"29de7199-a916-4327-9fb3-e361bbc61a28\") " pod="calico-system/whisker-5bf6fcc594-hj7vt" Jul 1 08:38:23.992294 kubelet[2724]: I0701 08:38:23.992099 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/032f515b-c70e-4420-9aba-ae73ba857da9-config\") pod \"goldmane-58fd7646b9-fg4hg\" (UID: \"032f515b-c70e-4420-9aba-ae73ba857da9\") " pod="calico-system/goldmane-58fd7646b9-fg4hg" Jul 1 08:38:23.992294 kubelet[2724]: I0701 08:38:23.992148 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/29de7199-a916-4327-9fb3-e361bbc61a28-whisker-backend-key-pair\") pod \"whisker-5bf6fcc594-hj7vt\" (UID: \"29de7199-a916-4327-9fb3-e361bbc61a28\") " pod="calico-system/whisker-5bf6fcc594-hj7vt" Jul 1 08:38:23.992294 kubelet[2724]: I0701 08:38:23.992178 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hq6h\" (UniqueName: \"kubernetes.io/projected/96d984ee-a0f3-4d6a-a438-b9f5756b5666-kube-api-access-5hq6h\") pod \"coredns-7c65d6cfc9-r2l9j\" (UID: \"96d984ee-a0f3-4d6a-a438-b9f5756b5666\") " pod="kube-system/coredns-7c65d6cfc9-r2l9j" Jul 1 08:38:23.992294 kubelet[2724]: I0701 08:38:23.992204 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g486f\" (UniqueName: \"kubernetes.io/projected/9c8054ba-10de-47da-9909-9fedeb482d2a-kube-api-access-g486f\") pod \"calico-apiserver-785dd9b466-97bdw\" (UID: \"9c8054ba-10de-47da-9909-9fedeb482d2a\") " pod="calico-apiserver/calico-apiserver-785dd9b466-97bdw" Jul 1 08:38:23.992867 kubelet[2724]: I0701 08:38:23.992813 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68c217b9-4f4d-48d1-bb9a-d276adb2fb78-config-volume\") pod \"coredns-7c65d6cfc9-wtpdv\" (UID: \"68c217b9-4f4d-48d1-bb9a-d276adb2fb78\") " pod="kube-system/coredns-7c65d6cfc9-wtpdv" Jul 1 08:38:23.992957 kubelet[2724]: I0701 08:38:23.992926 2724 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9crpm\" (UniqueName: \"kubernetes.io/projected/956b37b9-1ba9-40e9-be7f-b28196b02c8c-kube-api-access-9crpm\") pod \"calico-apiserver-785dd9b466-gfqj5\" (UID: \"956b37b9-1ba9-40e9-be7f-b28196b02c8c\") " pod="calico-apiserver/calico-apiserver-785dd9b466-gfqj5" Jul 1 08:38:23.993007 kubelet[2724]: I0701 08:38:23.992974 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/956b37b9-1ba9-40e9-be7f-b28196b02c8c-calico-apiserver-certs\") pod \"calico-apiserver-785dd9b466-gfqj5\" (UID: \"956b37b9-1ba9-40e9-be7f-b28196b02c8c\") " pod="calico-apiserver/calico-apiserver-785dd9b466-gfqj5" Jul 1 08:38:23.993695 kubelet[2724]: I0701 08:38:23.993228 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/032f515b-c70e-4420-9aba-ae73ba857da9-goldmane-key-pair\") pod \"goldmane-58fd7646b9-fg4hg\" (UID: \"032f515b-c70e-4420-9aba-ae73ba857da9\") " pod="calico-system/goldmane-58fd7646b9-fg4hg" Jul 1 08:38:23.993695 kubelet[2724]: I0701 08:38:23.993396 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fpmt\" (UniqueName: \"kubernetes.io/projected/041baa15-5621-4055-a53c-77c22a6b659e-kube-api-access-7fpmt\") pod \"calico-apiserver-56f667675c-mzg28\" (UID: \"041baa15-5621-4055-a53c-77c22a6b659e\") " pod="calico-apiserver/calico-apiserver-56f667675c-mzg28" Jul 1 08:38:23.993695 kubelet[2724]: I0701 08:38:23.993430 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q47gc\" (UniqueName: \"kubernetes.io/projected/032f515b-c70e-4420-9aba-ae73ba857da9-kube-api-access-q47gc\") pod \"goldmane-58fd7646b9-fg4hg\" (UID: \"032f515b-c70e-4420-9aba-ae73ba857da9\") " 
pod="calico-system/goldmane-58fd7646b9-fg4hg" Jul 1 08:38:23.993695 kubelet[2724]: I0701 08:38:23.993533 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96d984ee-a0f3-4d6a-a438-b9f5756b5666-config-volume\") pod \"coredns-7c65d6cfc9-r2l9j\" (UID: \"96d984ee-a0f3-4d6a-a438-b9f5756b5666\") " pod="kube-system/coredns-7c65d6cfc9-r2l9j" Jul 1 08:38:23.994019 kubelet[2724]: I0701 08:38:23.993940 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9c8054ba-10de-47da-9909-9fedeb482d2a-calico-apiserver-certs\") pod \"calico-apiserver-785dd9b466-97bdw\" (UID: \"9c8054ba-10de-47da-9909-9fedeb482d2a\") " pod="calico-apiserver/calico-apiserver-785dd9b466-97bdw" Jul 1 08:38:24.511549 kubelet[2724]: E0701 08:38:24.511458 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:38:24.512348 containerd[1595]: time="2025-07-01T08:38:24.512296575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wtpdv,Uid:68c217b9-4f4d-48d1-bb9a-d276adb2fb78,Namespace:kube-system,Attempt:0,}" Jul 1 08:38:24.528734 containerd[1595]: time="2025-07-01T08:38:24.528642066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cc74d4c7f-4blk4,Uid:3a670ab3-ff29-4888-96ee-f1733e954198,Namespace:calico-system,Attempt:0,}" Jul 1 08:38:24.537151 containerd[1595]: time="2025-07-01T08:38:24.536529731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-fg4hg,Uid:032f515b-c70e-4420-9aba-ae73ba857da9,Namespace:calico-system,Attempt:0,}" Jul 1 08:38:24.542159 containerd[1595]: time="2025-07-01T08:38:24.542051793Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-785dd9b466-97bdw,Uid:9c8054ba-10de-47da-9909-9fedeb482d2a,Namespace:calico-apiserver,Attempt:0,}" Jul 1 08:38:24.546376 kubelet[2724]: E0701 08:38:24.546009 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:38:24.547076 containerd[1595]: time="2025-07-01T08:38:24.547026466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r2l9j,Uid:96d984ee-a0f3-4d6a-a438-b9f5756b5666,Namespace:kube-system,Attempt:0,}" Jul 1 08:38:24.553700 containerd[1595]: time="2025-07-01T08:38:24.553617686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56f667675c-mzg28,Uid:041baa15-5621-4055-a53c-77c22a6b659e,Namespace:calico-apiserver,Attempt:0,}" Jul 1 08:38:24.560247 containerd[1595]: time="2025-07-01T08:38:24.560180483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bf6fcc594-hj7vt,Uid:29de7199-a916-4327-9fb3-e361bbc61a28,Namespace:calico-system,Attempt:0,}" Jul 1 08:38:24.565856 containerd[1595]: time="2025-07-01T08:38:24.565797182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785dd9b466-gfqj5,Uid:956b37b9-1ba9-40e9-be7f-b28196b02c8c,Namespace:calico-apiserver,Attempt:0,}" Jul 1 08:38:24.697072 containerd[1595]: time="2025-07-01T08:38:24.696849992Z" level=error msg="Failed to destroy network for sandbox \"0f64663d0b12776e674e19ec43d1e4b92ef584c94ba81b5e09a8aa0a582967ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:24.707074 containerd[1595]: time="2025-07-01T08:38:24.706980239Z" level=error msg="Failed to destroy network for sandbox \"588d0c1dcc7cea73e7fb3462c36b76d22f9e78e2296f4597d1e8b491460ee50f\"" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:24.707264 containerd[1595]: time="2025-07-01T08:38:24.706981001Z" level=error msg="Failed to destroy network for sandbox \"44ae36669ae8c1e6a4540e3dbf940366345c8d812e491685cca44c4a6609d9cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:24.715017 containerd[1595]: time="2025-07-01T08:38:24.714924711Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785dd9b466-gfqj5,Uid:956b37b9-1ba9-40e9-be7f-b28196b02c8c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f64663d0b12776e674e19ec43d1e4b92ef584c94ba81b5e09a8aa0a582967ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:24.716383 containerd[1595]: time="2025-07-01T08:38:24.716209805Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wtpdv,Uid:68c217b9-4f4d-48d1-bb9a-d276adb2fb78,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"44ae36669ae8c1e6a4540e3dbf940366345c8d812e491685cca44c4a6609d9cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:24.716383 containerd[1595]: time="2025-07-01T08:38:24.716320042Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-fg4hg,Uid:032f515b-c70e-4420-9aba-ae73ba857da9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"588d0c1dcc7cea73e7fb3462c36b76d22f9e78e2296f4597d1e8b491460ee50f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:24.728505 kubelet[2724]: E0701 08:38:24.728435 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44ae36669ae8c1e6a4540e3dbf940366345c8d812e491685cca44c4a6609d9cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:24.728756 kubelet[2724]: E0701 08:38:24.728568 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"588d0c1dcc7cea73e7fb3462c36b76d22f9e78e2296f4597d1e8b491460ee50f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:24.728756 kubelet[2724]: E0701 08:38:24.728612 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"588d0c1dcc7cea73e7fb3462c36b76d22f9e78e2296f4597d1e8b491460ee50f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-fg4hg" Jul 1 08:38:24.728756 kubelet[2724]: E0701 08:38:24.728633 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"588d0c1dcc7cea73e7fb3462c36b76d22f9e78e2296f4597d1e8b491460ee50f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-fg4hg" Jul 1 08:38:24.728756 kubelet[2724]: E0701 08:38:24.728714 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44ae36669ae8c1e6a4540e3dbf940366345c8d812e491685cca44c4a6609d9cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wtpdv" Jul 1 08:38:24.729010 kubelet[2724]: E0701 08:38:24.728732 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44ae36669ae8c1e6a4540e3dbf940366345c8d812e491685cca44c4a6609d9cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wtpdv" Jul 1 08:38:24.729010 kubelet[2724]: E0701 08:38:24.728756 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-wtpdv_kube-system(68c217b9-4f4d-48d1-bb9a-d276adb2fb78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-wtpdv_kube-system(68c217b9-4f4d-48d1-bb9a-d276adb2fb78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44ae36669ae8c1e6a4540e3dbf940366345c8d812e491685cca44c4a6609d9cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-wtpdv" podUID="68c217b9-4f4d-48d1-bb9a-d276adb2fb78" Jul 1 08:38:24.729010 kubelet[2724]: E0701 08:38:24.728695 2724 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-fg4hg_calico-system(032f515b-c70e-4420-9aba-ae73ba857da9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-fg4hg_calico-system(032f515b-c70e-4420-9aba-ae73ba857da9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"588d0c1dcc7cea73e7fb3462c36b76d22f9e78e2296f4597d1e8b491460ee50f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-fg4hg" podUID="032f515b-c70e-4420-9aba-ae73ba857da9" Jul 1 08:38:24.729299 kubelet[2724]: E0701 08:38:24.728725 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f64663d0b12776e674e19ec43d1e4b92ef584c94ba81b5e09a8aa0a582967ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:24.729299 kubelet[2724]: E0701 08:38:24.729055 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f64663d0b12776e674e19ec43d1e4b92ef584c94ba81b5e09a8aa0a582967ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-785dd9b466-gfqj5" Jul 1 08:38:24.729299 kubelet[2724]: E0701 08:38:24.729091 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f64663d0b12776e674e19ec43d1e4b92ef584c94ba81b5e09a8aa0a582967ad\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-785dd9b466-gfqj5" Jul 1 08:38:24.729420 kubelet[2724]: E0701 08:38:24.729151 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-785dd9b466-gfqj5_calico-apiserver(956b37b9-1ba9-40e9-be7f-b28196b02c8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-785dd9b466-gfqj5_calico-apiserver(956b37b9-1ba9-40e9-be7f-b28196b02c8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f64663d0b12776e674e19ec43d1e4b92ef584c94ba81b5e09a8aa0a582967ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-785dd9b466-gfqj5" podUID="956b37b9-1ba9-40e9-be7f-b28196b02c8c" Jul 1 08:38:24.732660 containerd[1595]: time="2025-07-01T08:38:24.732609006Z" level=error msg="Failed to destroy network for sandbox \"522113bce84f9a9282d6525af31e6097e4e037c81dcecba136974f8658830223\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:24.734783 containerd[1595]: time="2025-07-01T08:38:24.734717657Z" level=error msg="Failed to destroy network for sandbox \"373b9037728374d6235cb97c684e31416c7d2c2966ed3c6d53f9842ba4ce5176\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:24.735965 containerd[1595]: time="2025-07-01T08:38:24.735666158Z" level=error msg="Failed to destroy network for sandbox 
\"1b433aca71388856de293a408700283dc9c6ec44ef0bb6f3349b8bf349e1199f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:24.736411 containerd[1595]: time="2025-07-01T08:38:24.736353240Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r2l9j,Uid:96d984ee-a0f3-4d6a-a438-b9f5756b5666,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"522113bce84f9a9282d6525af31e6097e4e037c81dcecba136974f8658830223\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:24.736821 kubelet[2724]: E0701 08:38:24.736661 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"522113bce84f9a9282d6525af31e6097e4e037c81dcecba136974f8658830223\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:24.736913 kubelet[2724]: E0701 08:38:24.736848 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"522113bce84f9a9282d6525af31e6097e4e037c81dcecba136974f8658830223\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-r2l9j" Jul 1 08:38:24.736913 kubelet[2724]: E0701 08:38:24.736874 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"522113bce84f9a9282d6525af31e6097e4e037c81dcecba136974f8658830223\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-r2l9j" Jul 1 08:38:24.736986 containerd[1595]: time="2025-07-01T08:38:24.736645509Z" level=error msg="Failed to destroy network for sandbox \"2fb739ec91136fa96470146c9dc89265fbb6d8165a05b6c5af727f8a443ec877\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:24.737021 kubelet[2724]: E0701 08:38:24.736932 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-r2l9j_kube-system(96d984ee-a0f3-4d6a-a438-b9f5756b5666)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-r2l9j_kube-system(96d984ee-a0f3-4d6a-a438-b9f5756b5666)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"522113bce84f9a9282d6525af31e6097e4e037c81dcecba136974f8658830223\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-r2l9j" podUID="96d984ee-a0f3-4d6a-a438-b9f5756b5666" Jul 1 08:38:24.738143 containerd[1595]: time="2025-07-01T08:38:24.738016864Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785dd9b466-97bdw,Uid:9c8054ba-10de-47da-9909-9fedeb482d2a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"373b9037728374d6235cb97c684e31416c7d2c2966ed3c6d53f9842ba4ce5176\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:24.738424 kubelet[2724]: E0701 08:38:24.738381 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"373b9037728374d6235cb97c684e31416c7d2c2966ed3c6d53f9842ba4ce5176\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:24.738496 kubelet[2724]: E0701 08:38:24.738455 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"373b9037728374d6235cb97c684e31416c7d2c2966ed3c6d53f9842ba4ce5176\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-785dd9b466-97bdw" Jul 1 08:38:24.738496 kubelet[2724]: E0701 08:38:24.738482 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"373b9037728374d6235cb97c684e31416c7d2c2966ed3c6d53f9842ba4ce5176\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-785dd9b466-97bdw" Jul 1 08:38:24.738596 kubelet[2724]: E0701 08:38:24.738548 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-785dd9b466-97bdw_calico-apiserver(9c8054ba-10de-47da-9909-9fedeb482d2a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-785dd9b466-97bdw_calico-apiserver(9c8054ba-10de-47da-9909-9fedeb482d2a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"373b9037728374d6235cb97c684e31416c7d2c2966ed3c6d53f9842ba4ce5176\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-785dd9b466-97bdw" podUID="9c8054ba-10de-47da-9909-9fedeb482d2a" Jul 1 08:38:24.739866 containerd[1595]: time="2025-07-01T08:38:24.739828968Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56f667675c-mzg28,Uid:041baa15-5621-4055-a53c-77c22a6b659e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b433aca71388856de293a408700283dc9c6ec44ef0bb6f3349b8bf349e1199f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:24.740131 kubelet[2724]: E0701 08:38:24.740085 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b433aca71388856de293a408700283dc9c6ec44ef0bb6f3349b8bf349e1199f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:24.740131 kubelet[2724]: E0701 08:38:24.740128 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b433aca71388856de293a408700283dc9c6ec44ef0bb6f3349b8bf349e1199f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56f667675c-mzg28" Jul 1 08:38:24.740131 kubelet[2724]: E0701 08:38:24.740148 2724 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b433aca71388856de293a408700283dc9c6ec44ef0bb6f3349b8bf349e1199f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56f667675c-mzg28" Jul 1 08:38:24.740470 kubelet[2724]: E0701 08:38:24.740183 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56f667675c-mzg28_calico-apiserver(041baa15-5621-4055-a53c-77c22a6b659e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56f667675c-mzg28_calico-apiserver(041baa15-5621-4055-a53c-77c22a6b659e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b433aca71388856de293a408700283dc9c6ec44ef0bb6f3349b8bf349e1199f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56f667675c-mzg28" podUID="041baa15-5621-4055-a53c-77c22a6b659e" Jul 1 08:38:24.741781 containerd[1595]: time="2025-07-01T08:38:24.741729938Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cc74d4c7f-4blk4,Uid:3a670ab3-ff29-4888-96ee-f1733e954198,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fb739ec91136fa96470146c9dc89265fbb6d8165a05b6c5af727f8a443ec877\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:24.741921 kubelet[2724]: E0701 08:38:24.741886 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"2fb739ec91136fa96470146c9dc89265fbb6d8165a05b6c5af727f8a443ec877\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:24.741963 kubelet[2724]: E0701 08:38:24.741930 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fb739ec91136fa96470146c9dc89265fbb6d8165a05b6c5af727f8a443ec877\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cc74d4c7f-4blk4" Jul 1 08:38:24.741963 kubelet[2724]: E0701 08:38:24.741954 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fb739ec91136fa96470146c9dc89265fbb6d8165a05b6c5af727f8a443ec877\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cc74d4c7f-4blk4" Jul 1 08:38:24.742021 kubelet[2724]: E0701 08:38:24.741993 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6cc74d4c7f-4blk4_calico-system(3a670ab3-ff29-4888-96ee-f1733e954198)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6cc74d4c7f-4blk4_calico-system(3a670ab3-ff29-4888-96ee-f1733e954198)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2fb739ec91136fa96470146c9dc89265fbb6d8165a05b6c5af727f8a443ec877\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cc74d4c7f-4blk4" podUID="3a670ab3-ff29-4888-96ee-f1733e954198" Jul 1 08:38:24.757245 systemd[1]: Created slice kubepods-besteffort-pod81a57d7c_7149_4271_9274_afe15b367e85.slice - libcontainer container kubepods-besteffort-pod81a57d7c_7149_4271_9274_afe15b367e85.slice. Jul 1 08:38:24.761913 containerd[1595]: time="2025-07-01T08:38:24.761648610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-prnpp,Uid:81a57d7c-7149-4271-9274-afe15b367e85,Namespace:calico-system,Attempt:0,}" Jul 1 08:38:24.773796 containerd[1595]: time="2025-07-01T08:38:24.773719502Z" level=error msg="Failed to destroy network for sandbox \"00429876df425ee508e414f61902c5d402799e0717822e0c8bc0920aaac97c18\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:24.777603 containerd[1595]: time="2025-07-01T08:38:24.777510844Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bf6fcc594-hj7vt,Uid:29de7199-a916-4327-9fb3-e361bbc61a28,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"00429876df425ee508e414f61902c5d402799e0717822e0c8bc0920aaac97c18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:24.778130 kubelet[2724]: E0701 08:38:24.778070 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00429876df425ee508e414f61902c5d402799e0717822e0c8bc0920aaac97c18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 1 08:38:24.778194 kubelet[2724]: E0701 08:38:24.778156 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00429876df425ee508e414f61902c5d402799e0717822e0c8bc0920aaac97c18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5bf6fcc594-hj7vt" Jul 1 08:38:24.778194 kubelet[2724]: E0701 08:38:24.778182 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00429876df425ee508e414f61902c5d402799e0717822e0c8bc0920aaac97c18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5bf6fcc594-hj7vt" Jul 1 08:38:24.778284 kubelet[2724]: E0701 08:38:24.778250 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5bf6fcc594-hj7vt_calico-system(29de7199-a916-4327-9fb3-e361bbc61a28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5bf6fcc594-hj7vt_calico-system(29de7199-a916-4327-9fb3-e361bbc61a28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00429876df425ee508e414f61902c5d402799e0717822e0c8bc0920aaac97c18\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5bf6fcc594-hj7vt" podUID="29de7199-a916-4327-9fb3-e361bbc61a28" Jul 1 08:38:24.837663 containerd[1595]: time="2025-07-01T08:38:24.837578555Z" level=error msg="Failed to destroy network for sandbox 
\"70e1953e3fd59abf421476e84d48dd39b739091701d91df79b5b82fadb692127\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:24.839448 containerd[1595]: time="2025-07-01T08:38:24.839405577Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-prnpp,Uid:81a57d7c-7149-4271-9274-afe15b367e85,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"70e1953e3fd59abf421476e84d48dd39b739091701d91df79b5b82fadb692127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:24.839909 kubelet[2724]: E0701 08:38:24.839857 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70e1953e3fd59abf421476e84d48dd39b739091701d91df79b5b82fadb692127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:24.839986 kubelet[2724]: E0701 08:38:24.839938 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70e1953e3fd59abf421476e84d48dd39b739091701d91df79b5b82fadb692127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-prnpp" Jul 1 08:38:24.839986 kubelet[2724]: E0701 08:38:24.839973 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"70e1953e3fd59abf421476e84d48dd39b739091701d91df79b5b82fadb692127\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-prnpp" Jul 1 08:38:24.840093 kubelet[2724]: E0701 08:38:24.840028 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-prnpp_calico-system(81a57d7c-7149-4271-9274-afe15b367e85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-prnpp_calico-system(81a57d7c-7149-4271-9274-afe15b367e85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70e1953e3fd59abf421476e84d48dd39b739091701d91df79b5b82fadb692127\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-prnpp" podUID="81a57d7c-7149-4271-9274-afe15b367e85" Jul 1 08:38:24.856849 containerd[1595]: time="2025-07-01T08:38:24.856607507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 1 08:38:31.094709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3779977283.mount: Deactivated successfully. 
Jul 1 08:38:34.146850 kubelet[2724]: E0701 08:38:34.146799 2724 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.401s" Jul 1 08:38:35.593146 containerd[1595]: time="2025-07-01T08:38:35.593065858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:38:35.615821 containerd[1595]: time="2025-07-01T08:38:35.615658265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 1 08:38:35.645241 containerd[1595]: time="2025-07-01T08:38:35.645153837Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:38:35.699845 containerd[1595]: time="2025-07-01T08:38:35.699723071Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:38:35.700815 containerd[1595]: time="2025-07-01T08:38:35.700630513Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 10.843954277s" Jul 1 08:38:35.700815 containerd[1595]: time="2025-07-01T08:38:35.700804491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 1 08:38:35.710860 containerd[1595]: time="2025-07-01T08:38:35.710788296Z" level=info msg="CreateContainer within sandbox 
\"2185efd5b2678eedb8b9fe2cf4529795a3c169066f1ffd4d3c5efefdc9b6db6c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 1 08:38:35.747564 kubelet[2724]: E0701 08:38:35.747463 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:38:35.748031 containerd[1595]: time="2025-07-01T08:38:35.747526769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cc74d4c7f-4blk4,Uid:3a670ab3-ff29-4888-96ee-f1733e954198,Namespace:calico-system,Attempt:0,}" Jul 1 08:38:35.748031 containerd[1595]: time="2025-07-01T08:38:35.747991681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r2l9j,Uid:96d984ee-a0f3-4d6a-a438-b9f5756b5666,Namespace:kube-system,Attempt:0,}" Jul 1 08:38:35.748229 kubelet[2724]: E0701 08:38:35.748169 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:38:35.748452 containerd[1595]: time="2025-07-01T08:38:35.748416449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wtpdv,Uid:68c217b9-4f4d-48d1-bb9a-d276adb2fb78,Namespace:kube-system,Attempt:0,}" Jul 1 08:38:36.140316 containerd[1595]: time="2025-07-01T08:38:36.140234350Z" level=error msg="Failed to destroy network for sandbox \"d107623a5d002abca3c6fe72c59df013a2d316364ec95d53e76975504ab66e79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:36.249090 containerd[1595]: time="2025-07-01T08:38:36.249013173Z" level=error msg="Failed to destroy network for sandbox \"fae4fd81bd3ee178cdfef6e75b88d5bc6988c2291856281b27d805c2591ea7b9\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:36.393410 kubelet[2724]: I0701 08:38:36.393050 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 1 08:38:36.399139 kubelet[2724]: E0701 08:38:36.399086 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:38:36.418813 containerd[1595]: time="2025-07-01T08:38:36.418722451Z" level=error msg="Failed to destroy network for sandbox \"39ce82ad53f3c8be8ddd878d227929dbf1ec3c226f328f20d3e1ff494faaf9e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:36.446529 containerd[1595]: time="2025-07-01T08:38:36.446436497Z" level=info msg="Container 8df2b53a63425f1e08b4d286b920cdd64c0dfd384ff7cd82453a0eac09d9f07f: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:38:36.481107 containerd[1595]: time="2025-07-01T08:38:36.481025003Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cc74d4c7f-4blk4,Uid:3a670ab3-ff29-4888-96ee-f1733e954198,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d107623a5d002abca3c6fe72c59df013a2d316364ec95d53e76975504ab66e79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:36.481466 kubelet[2724]: E0701 08:38:36.481398 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d107623a5d002abca3c6fe72c59df013a2d316364ec95d53e76975504ab66e79\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:36.481540 kubelet[2724]: E0701 08:38:36.481489 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d107623a5d002abca3c6fe72c59df013a2d316364ec95d53e76975504ab66e79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cc74d4c7f-4blk4" Jul 1 08:38:36.481540 kubelet[2724]: E0701 08:38:36.481518 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d107623a5d002abca3c6fe72c59df013a2d316364ec95d53e76975504ab66e79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cc74d4c7f-4blk4" Jul 1 08:38:36.481625 kubelet[2724]: E0701 08:38:36.481575 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6cc74d4c7f-4blk4_calico-system(3a670ab3-ff29-4888-96ee-f1733e954198)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6cc74d4c7f-4blk4_calico-system(3a670ab3-ff29-4888-96ee-f1733e954198)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d107623a5d002abca3c6fe72c59df013a2d316364ec95d53e76975504ab66e79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cc74d4c7f-4blk4" 
podUID="3a670ab3-ff29-4888-96ee-f1733e954198" Jul 1 08:38:36.523046 containerd[1595]: time="2025-07-01T08:38:36.522938443Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r2l9j,Uid:96d984ee-a0f3-4d6a-a438-b9f5756b5666,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fae4fd81bd3ee178cdfef6e75b88d5bc6988c2291856281b27d805c2591ea7b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:36.523377 kubelet[2724]: E0701 08:38:36.523317 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fae4fd81bd3ee178cdfef6e75b88d5bc6988c2291856281b27d805c2591ea7b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:36.523442 kubelet[2724]: E0701 08:38:36.523396 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fae4fd81bd3ee178cdfef6e75b88d5bc6988c2291856281b27d805c2591ea7b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-r2l9j" Jul 1 08:38:36.523442 kubelet[2724]: E0701 08:38:36.523424 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fae4fd81bd3ee178cdfef6e75b88d5bc6988c2291856281b27d805c2591ea7b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-r2l9j" Jul 1 08:38:36.523522 kubelet[2724]: E0701 08:38:36.523485 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-r2l9j_kube-system(96d984ee-a0f3-4d6a-a438-b9f5756b5666)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-r2l9j_kube-system(96d984ee-a0f3-4d6a-a438-b9f5756b5666)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fae4fd81bd3ee178cdfef6e75b88d5bc6988c2291856281b27d805c2591ea7b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-r2l9j" podUID="96d984ee-a0f3-4d6a-a438-b9f5756b5666" Jul 1 08:38:36.549831 containerd[1595]: time="2025-07-01T08:38:36.549734836Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wtpdv,Uid:68c217b9-4f4d-48d1-bb9a-d276adb2fb78,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"39ce82ad53f3c8be8ddd878d227929dbf1ec3c226f328f20d3e1ff494faaf9e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:36.550127 kubelet[2724]: E0701 08:38:36.550065 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39ce82ad53f3c8be8ddd878d227929dbf1ec3c226f328f20d3e1ff494faaf9e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:36.550272 kubelet[2724]: E0701 08:38:36.550144 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"39ce82ad53f3c8be8ddd878d227929dbf1ec3c226f328f20d3e1ff494faaf9e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wtpdv" Jul 1 08:38:36.550272 kubelet[2724]: E0701 08:38:36.550169 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39ce82ad53f3c8be8ddd878d227929dbf1ec3c226f328f20d3e1ff494faaf9e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wtpdv" Jul 1 08:38:36.550272 kubelet[2724]: E0701 08:38:36.550217 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-wtpdv_kube-system(68c217b9-4f4d-48d1-bb9a-d276adb2fb78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-wtpdv_kube-system(68c217b9-4f4d-48d1-bb9a-d276adb2fb78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39ce82ad53f3c8be8ddd878d227929dbf1ec3c226f328f20d3e1ff494faaf9e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-wtpdv" podUID="68c217b9-4f4d-48d1-bb9a-d276adb2fb78" Jul 1 08:38:36.707815 systemd[1]: run-netns-cni\x2d23f27be6\x2db28e\x2dbd79\x2df88b\x2d1113feb72eba.mount: Deactivated successfully. Jul 1 08:38:36.707946 systemd[1]: run-netns-cni\x2ddcc8fedb\x2d6bba\x2d5414\x2de922\x2d3fed64af9c70.mount: Deactivated successfully. 
Jul 1 08:38:36.708018 systemd[1]: run-netns-cni\x2d996008a8\x2d8a07\x2d2985\x2d5192\x2d5082e88d9c27.mount: Deactivated successfully. Jul 1 08:38:36.751327 containerd[1595]: time="2025-07-01T08:38:36.751276900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-fg4hg,Uid:032f515b-c70e-4420-9aba-ae73ba857da9,Namespace:calico-system,Attempt:0,}" Jul 1 08:38:36.752541 containerd[1595]: time="2025-07-01T08:38:36.752440464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785dd9b466-gfqj5,Uid:956b37b9-1ba9-40e9-be7f-b28196b02c8c,Namespace:calico-apiserver,Attempt:0,}" Jul 1 08:38:36.929903 kubelet[2724]: E0701 08:38:36.929868 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:38:37.106120 containerd[1595]: time="2025-07-01T08:38:37.106017651Z" level=info msg="CreateContainer within sandbox \"2185efd5b2678eedb8b9fe2cf4529795a3c169066f1ffd4d3c5efefdc9b6db6c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8df2b53a63425f1e08b4d286b920cdd64c0dfd384ff7cd82453a0eac09d9f07f\"" Jul 1 08:38:37.109606 containerd[1595]: time="2025-07-01T08:38:37.109566961Z" level=info msg="StartContainer for \"8df2b53a63425f1e08b4d286b920cdd64c0dfd384ff7cd82453a0eac09d9f07f\"" Jul 1 08:38:37.111261 containerd[1595]: time="2025-07-01T08:38:37.111224521Z" level=info msg="connecting to shim 8df2b53a63425f1e08b4d286b920cdd64c0dfd384ff7cd82453a0eac09d9f07f" address="unix:///run/containerd/s/049072ce9d12775398ae6bc896212427901bb755993cc3abbe415d0c8c609d3b" protocol=ttrpc version=3 Jul 1 08:38:37.141969 systemd[1]: Started cri-containerd-8df2b53a63425f1e08b4d286b920cdd64c0dfd384ff7cd82453a0eac09d9f07f.scope - libcontainer container 8df2b53a63425f1e08b4d286b920cdd64c0dfd384ff7cd82453a0eac09d9f07f. 
Jul 1 08:38:37.253740 containerd[1595]: time="2025-07-01T08:38:37.252706077Z" level=error msg="Failed to destroy network for sandbox \"394396f3b9b018db6baaad72cf05d6399de86876411ec3bbe2f7e3c686c838bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:37.256341 systemd[1]: run-netns-cni\x2d812674e1\x2d4728\x2dad9a\x2d146e\x2d5fababfe5c65.mount: Deactivated successfully. Jul 1 08:38:37.294876 containerd[1595]: time="2025-07-01T08:38:37.294828597Z" level=info msg="StartContainer for \"8df2b53a63425f1e08b4d286b920cdd64c0dfd384ff7cd82453a0eac09d9f07f\" returns successfully" Jul 1 08:38:37.310952 containerd[1595]: time="2025-07-01T08:38:37.310856110Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-fg4hg,Uid:032f515b-c70e-4420-9aba-ae73ba857da9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"394396f3b9b018db6baaad72cf05d6399de86876411ec3bbe2f7e3c686c838bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:37.311645 kubelet[2724]: E0701 08:38:37.311587 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"394396f3b9b018db6baaad72cf05d6399de86876411ec3bbe2f7e3c686c838bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:37.311900 kubelet[2724]: E0701 08:38:37.311867 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"394396f3b9b018db6baaad72cf05d6399de86876411ec3bbe2f7e3c686c838bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-fg4hg" Jul 1 08:38:37.311990 kubelet[2724]: E0701 08:38:37.311972 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"394396f3b9b018db6baaad72cf05d6399de86876411ec3bbe2f7e3c686c838bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-fg4hg" Jul 1 08:38:37.312121 kubelet[2724]: E0701 08:38:37.312096 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-fg4hg_calico-system(032f515b-c70e-4420-9aba-ae73ba857da9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-fg4hg_calico-system(032f515b-c70e-4420-9aba-ae73ba857da9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"394396f3b9b018db6baaad72cf05d6399de86876411ec3bbe2f7e3c686c838bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-fg4hg" podUID="032f515b-c70e-4420-9aba-ae73ba857da9" Jul 1 08:38:37.364620 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 1 08:38:37.366168 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 1 08:38:37.401411 containerd[1595]: time="2025-07-01T08:38:37.401318976Z" level=error msg="Failed to destroy network for sandbox \"a711d8fd43dbc3cb5f24c1e172d63ec092005b7905085df7e269264ca8520b51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:37.708610 systemd[1]: run-netns-cni\x2d149ec0ac\x2d308f\x2d9907\x2db9dc\x2d3751a43b2e39.mount: Deactivated successfully. Jul 1 08:38:37.746875 containerd[1595]: time="2025-07-01T08:38:37.746715650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785dd9b466-97bdw,Uid:9c8054ba-10de-47da-9909-9fedeb482d2a,Namespace:calico-apiserver,Attempt:0,}" Jul 1 08:38:37.747071 containerd[1595]: time="2025-07-01T08:38:37.746916267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bf6fcc594-hj7vt,Uid:29de7199-a916-4327-9fb3-e361bbc61a28,Namespace:calico-system,Attempt:0,}" Jul 1 08:38:38.033653 kubelet[2724]: I0701 08:38:38.033494 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pdbw8" podStartSLOduration=3.356982661 podStartE2EDuration="30.033473842s" podCreationTimestamp="2025-07-01 08:38:08 +0000 UTC" firstStartedPulling="2025-07-01 08:38:09.025172892 +0000 UTC m=+20.358403673" lastFinishedPulling="2025-07-01 08:38:35.701664073 +0000 UTC m=+47.034894854" observedRunningTime="2025-07-01 08:38:38.033224865 +0000 UTC m=+49.366455646" watchObservedRunningTime="2025-07-01 08:38:38.033473842 +0000 UTC m=+49.366704623" Jul 1 08:38:38.037328 containerd[1595]: time="2025-07-01T08:38:38.037261508Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785dd9b466-gfqj5,Uid:956b37b9-1ba9-40e9-be7f-b28196b02c8c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a711d8fd43dbc3cb5f24c1e172d63ec092005b7905085df7e269264ca8520b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:38.037809 kubelet[2724]: E0701 08:38:38.037753 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a711d8fd43dbc3cb5f24c1e172d63ec092005b7905085df7e269264ca8520b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:38.037893 kubelet[2724]: E0701 08:38:38.037846 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a711d8fd43dbc3cb5f24c1e172d63ec092005b7905085df7e269264ca8520b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-785dd9b466-gfqj5" Jul 1 08:38:38.037893 kubelet[2724]: E0701 08:38:38.037874 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a711d8fd43dbc3cb5f24c1e172d63ec092005b7905085df7e269264ca8520b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-785dd9b466-gfqj5" Jul 1 08:38:38.038020 kubelet[2724]: E0701 08:38:38.037944 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-785dd9b466-gfqj5_calico-apiserver(956b37b9-1ba9-40e9-be7f-b28196b02c8c)\" with CreatePodSandboxError: \"Failed to create sandbox 
for pod \\\"calico-apiserver-785dd9b466-gfqj5_calico-apiserver(956b37b9-1ba9-40e9-be7f-b28196b02c8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a711d8fd43dbc3cb5f24c1e172d63ec092005b7905085df7e269264ca8520b51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-785dd9b466-gfqj5" podUID="956b37b9-1ba9-40e9-be7f-b28196b02c8c" Jul 1 08:38:38.169273 containerd[1595]: time="2025-07-01T08:38:38.169197141Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8df2b53a63425f1e08b4d286b920cdd64c0dfd384ff7cd82453a0eac09d9f07f\" id:\"98dbcb126e10a67824a97e154eb52ed421072c5adb14553bc80e7ca66b855baa\" pid:4025 exit_status:1 exited_at:{seconds:1751359118 nanos:168310277}" Jul 1 08:38:38.342842 containerd[1595]: time="2025-07-01T08:38:38.342708874Z" level=error msg="Failed to destroy network for sandbox \"f4e8dc2ddd701812ba44e699c254e217936e5995edc1096156842d8cd360ccad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:38.345378 systemd[1]: run-netns-cni\x2db8d90510\x2d4c6d\x2d78f6\x2d4a97\x2de66d9030e72d.mount: Deactivated successfully. 
Jul 1 08:38:38.371329 containerd[1595]: time="2025-07-01T08:38:38.371234738Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bf6fcc594-hj7vt,Uid:29de7199-a916-4327-9fb3-e361bbc61a28,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4e8dc2ddd701812ba44e699c254e217936e5995edc1096156842d8cd360ccad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:38.371652 kubelet[2724]: E0701 08:38:38.371576 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4e8dc2ddd701812ba44e699c254e217936e5995edc1096156842d8cd360ccad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:38.373734 kubelet[2724]: E0701 08:38:38.373076 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4e8dc2ddd701812ba44e699c254e217936e5995edc1096156842d8cd360ccad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5bf6fcc594-hj7vt" Jul 1 08:38:38.373734 kubelet[2724]: E0701 08:38:38.373120 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4e8dc2ddd701812ba44e699c254e217936e5995edc1096156842d8cd360ccad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-5bf6fcc594-hj7vt" Jul 1 08:38:38.373734 kubelet[2724]: E0701 08:38:38.373192 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5bf6fcc594-hj7vt_calico-system(29de7199-a916-4327-9fb3-e361bbc61a28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5bf6fcc594-hj7vt_calico-system(29de7199-a916-4327-9fb3-e361bbc61a28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4e8dc2ddd701812ba44e699c254e217936e5995edc1096156842d8cd360ccad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5bf6fcc594-hj7vt" podUID="29de7199-a916-4327-9fb3-e361bbc61a28" Jul 1 08:38:38.395011 containerd[1595]: time="2025-07-01T08:38:38.394946278Z" level=error msg="Failed to destroy network for sandbox \"f0f0a166767238db8a5300b0bfad266ff4658b2159a993ac741f526d87ec04b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:38.398137 systemd[1]: run-netns-cni\x2d1da45521\x2dc74d\x2d9c47\x2d1cf9\x2dc51fc4011227.mount: Deactivated successfully. 
Jul 1 08:38:38.412884 containerd[1595]: time="2025-07-01T08:38:38.412797774Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785dd9b466-97bdw,Uid:9c8054ba-10de-47da-9909-9fedeb482d2a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0f0a166767238db8a5300b0bfad266ff4658b2159a993ac741f526d87ec04b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:38.413148 kubelet[2724]: E0701 08:38:38.413105 2724 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0f0a166767238db8a5300b0bfad266ff4658b2159a993ac741f526d87ec04b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 1 08:38:38.413254 kubelet[2724]: E0701 08:38:38.413182 2724 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0f0a166767238db8a5300b0bfad266ff4658b2159a993ac741f526d87ec04b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-785dd9b466-97bdw" Jul 1 08:38:38.413254 kubelet[2724]: E0701 08:38:38.413231 2724 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0f0a166767238db8a5300b0bfad266ff4658b2159a993ac741f526d87ec04b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-785dd9b466-97bdw" Jul 1 08:38:38.413333 kubelet[2724]: E0701 08:38:38.413294 2724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-785dd9b466-97bdw_calico-apiserver(9c8054ba-10de-47da-9909-9fedeb482d2a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-785dd9b466-97bdw_calico-apiserver(9c8054ba-10de-47da-9909-9fedeb482d2a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0f0a166767238db8a5300b0bfad266ff4658b2159a993ac741f526d87ec04b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-785dd9b466-97bdw" podUID="9c8054ba-10de-47da-9909-9fedeb482d2a" Jul 1 08:38:38.748350 containerd[1595]: time="2025-07-01T08:38:38.748094176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56f667675c-mzg28,Uid:041baa15-5621-4055-a53c-77c22a6b659e,Namespace:calico-apiserver,Attempt:0,}" Jul 1 08:38:39.032423 containerd[1595]: time="2025-07-01T08:38:39.032366338Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8df2b53a63425f1e08b4d286b920cdd64c0dfd384ff7cd82453a0eac09d9f07f\" id:\"bb0ce0f4849438535139af6103de32ca6d59fef31def2249ede75f01183253fe\" pid:4140 exit_status:1 exited_at:{seconds:1751359119 nanos:31984561}" Jul 1 08:38:39.303968 kubelet[2724]: I0701 08:38:39.303812 2724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29de7199-a916-4327-9fb3-e361bbc61a28-whisker-ca-bundle\") pod \"29de7199-a916-4327-9fb3-e361bbc61a28\" (UID: \"29de7199-a916-4327-9fb3-e361bbc61a28\") " Jul 1 08:38:39.303968 kubelet[2724]: I0701 08:38:39.303855 2724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-ms2lk\" (UniqueName: \"kubernetes.io/projected/29de7199-a916-4327-9fb3-e361bbc61a28-kube-api-access-ms2lk\") pod \"29de7199-a916-4327-9fb3-e361bbc61a28\" (UID: \"29de7199-a916-4327-9fb3-e361bbc61a28\") " Jul 1 08:38:39.303968 kubelet[2724]: I0701 08:38:39.303876 2724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/29de7199-a916-4327-9fb3-e361bbc61a28-whisker-backend-key-pair\") pod \"29de7199-a916-4327-9fb3-e361bbc61a28\" (UID: \"29de7199-a916-4327-9fb3-e361bbc61a28\") " Jul 1 08:38:39.304498 kubelet[2724]: I0701 08:38:39.304432 2724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29de7199-a916-4327-9fb3-e361bbc61a28-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "29de7199-a916-4327-9fb3-e361bbc61a28" (UID: "29de7199-a916-4327-9fb3-e361bbc61a28"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 1 08:38:39.307857 kubelet[2724]: I0701 08:38:39.307801 2724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29de7199-a916-4327-9fb3-e361bbc61a28-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "29de7199-a916-4327-9fb3-e361bbc61a28" (UID: "29de7199-a916-4327-9fb3-e361bbc61a28"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 1 08:38:39.308103 kubelet[2724]: I0701 08:38:39.308069 2724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29de7199-a916-4327-9fb3-e361bbc61a28-kube-api-access-ms2lk" (OuterVolumeSpecName: "kube-api-access-ms2lk") pod "29de7199-a916-4327-9fb3-e361bbc61a28" (UID: "29de7199-a916-4327-9fb3-e361bbc61a28"). InnerVolumeSpecName "kube-api-access-ms2lk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 1 08:38:39.309110 systemd[1]: var-lib-kubelet-pods-29de7199\x2da916\x2d4327\x2d9fb3\x2de361bbc61a28-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dms2lk.mount: Deactivated successfully. Jul 1 08:38:39.309244 systemd[1]: var-lib-kubelet-pods-29de7199\x2da916\x2d4327\x2d9fb3\x2de361bbc61a28-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 1 08:38:39.404700 kubelet[2724]: I0701 08:38:39.404623 2724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms2lk\" (UniqueName: \"kubernetes.io/projected/29de7199-a916-4327-9fb3-e361bbc61a28-kube-api-access-ms2lk\") on node \"localhost\" DevicePath \"\"" Jul 1 08:38:39.404700 kubelet[2724]: I0701 08:38:39.404669 2724 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/29de7199-a916-4327-9fb3-e361bbc61a28-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 1 08:38:39.404700 kubelet[2724]: I0701 08:38:39.404720 2724 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29de7199-a916-4327-9fb3-e361bbc61a28-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 1 08:38:39.747808 containerd[1595]: time="2025-07-01T08:38:39.747646000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-prnpp,Uid:81a57d7c-7149-4271-9274-afe15b367e85,Namespace:calico-system,Attempt:0,}" Jul 1 08:38:39.950590 systemd[1]: Removed slice kubepods-besteffort-pod29de7199_a916_4327_9fb3_e361bbc61a28.slice - libcontainer container kubepods-besteffort-pod29de7199_a916_4327_9fb3_e361bbc61a28.slice. Jul 1 08:38:40.168211 systemd[1]: Created slice kubepods-besteffort-pod66b501c7_d205_4be7_b310_b89ad5a1f814.slice - libcontainer container kubepods-besteffort-pod66b501c7_d205_4be7_b310_b89ad5a1f814.slice. 
Jul 1 08:38:40.213295 kubelet[2724]: I0701 08:38:40.213226 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66b501c7-d205-4be7-b310-b89ad5a1f814-whisker-ca-bundle\") pod \"whisker-7f7846d754-zvgtw\" (UID: \"66b501c7-d205-4be7-b310-b89ad5a1f814\") " pod="calico-system/whisker-7f7846d754-zvgtw" Jul 1 08:38:40.213295 kubelet[2724]: I0701 08:38:40.213288 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/66b501c7-d205-4be7-b310-b89ad5a1f814-whisker-backend-key-pair\") pod \"whisker-7f7846d754-zvgtw\" (UID: \"66b501c7-d205-4be7-b310-b89ad5a1f814\") " pod="calico-system/whisker-7f7846d754-zvgtw" Jul 1 08:38:40.213295 kubelet[2724]: I0701 08:38:40.213310 2724 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pgdq\" (UniqueName: \"kubernetes.io/projected/66b501c7-d205-4be7-b310-b89ad5a1f814-kube-api-access-2pgdq\") pod \"whisker-7f7846d754-zvgtw\" (UID: \"66b501c7-d205-4be7-b310-b89ad5a1f814\") " pod="calico-system/whisker-7f7846d754-zvgtw" Jul 1 08:38:40.484446 containerd[1595]: time="2025-07-01T08:38:40.484290201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f7846d754-zvgtw,Uid:66b501c7-d205-4be7-b310-b89ad5a1f814,Namespace:calico-system,Attempt:0,}" Jul 1 08:38:40.529278 systemd-networkd[1484]: cali19ea3a1e559: Link UP Jul 1 08:38:40.530400 systemd-networkd[1484]: cali19ea3a1e559: Gained carrier Jul 1 08:38:40.749474 kubelet[2724]: I0701 08:38:40.749324 2724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29de7199-a916-4327-9fb3-e361bbc61a28" path="/var/lib/kubelet/pods/29de7199-a916-4327-9fb3-e361bbc61a28/volumes" Jul 1 08:38:41.025528 containerd[1595]: 2025-07-01 08:38:38.977 [INFO][4116] cni-plugin/utils.go 100: File /var/lib/calico/mtu 
does not exist Jul 1 08:38:41.025528 containerd[1595]: 2025-07-01 08:38:39.684 [INFO][4116] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--56f667675c--mzg28-eth0 calico-apiserver-56f667675c- calico-apiserver 041baa15-5621-4055-a53c-77c22a6b659e 870 0 2025-07-01 08:38:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56f667675c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-56f667675c-mzg28 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali19ea3a1e559 [] [] }} ContainerID="1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e" Namespace="calico-apiserver" Pod="calico-apiserver-56f667675c-mzg28" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f667675c--mzg28-" Jul 1 08:38:41.025528 containerd[1595]: 2025-07-01 08:38:39.705 [INFO][4116] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e" Namespace="calico-apiserver" Pod="calico-apiserver-56f667675c-mzg28" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f667675c--mzg28-eth0" Jul 1 08:38:41.025528 containerd[1595]: 2025-07-01 08:38:40.089 [INFO][4170] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e" HandleID="k8s-pod-network.1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e" Workload="localhost-k8s-calico--apiserver--56f667675c--mzg28-eth0" Jul 1 08:38:41.026235 containerd[1595]: 2025-07-01 08:38:40.090 [INFO][4170] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e" 
HandleID="k8s-pod-network.1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e" Workload="localhost-k8s-calico--apiserver--56f667675c--mzg28-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000416190), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-56f667675c-mzg28", "timestamp":"2025-07-01 08:38:40.089968601 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:38:41.026235 containerd[1595]: 2025-07-01 08:38:40.090 [INFO][4170] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:38:41.026235 containerd[1595]: 2025-07-01 08:38:40.091 [INFO][4170] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 1 08:38:41.026235 containerd[1595]: 2025-07-01 08:38:40.091 [INFO][4170] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 1 08:38:41.026235 containerd[1595]: 2025-07-01 08:38:40.134 [INFO][4170] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e" host="localhost" Jul 1 08:38:41.026235 containerd[1595]: 2025-07-01 08:38:40.181 [INFO][4170] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 1 08:38:41.026235 containerd[1595]: 2025-07-01 08:38:40.194 [INFO][4170] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 1 08:38:41.026235 containerd[1595]: 2025-07-01 08:38:40.199 [INFO][4170] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 1 08:38:41.026235 containerd[1595]: 2025-07-01 08:38:40.204 [INFO][4170] ipam/ipam.go 208: Affinity has not been confirmed - attempt to confirm it cidr=192.168.88.128/26 host="localhost" Jul 1 08:38:41.026546 containerd[1595]: 
2025-07-01 08:38:40.212 [ERROR][4170] ipam/customresource.go 184: Error updating resource Key=BlockAffinity(localhost-192-168-88-128-26) Name="localhost-192-168-88-128-26" Resource="BlockAffinities" Value=&v3.BlockAffinity{TypeMeta:v1.TypeMeta{Kind:"BlockAffinity", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-192-168-88-128-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.BlockAffinitySpec{State:"pending", Node:"localhost", Type:"host", CIDR:"192.168.88.128/26", Deleted:"false"}} error=Operation cannot be fulfilled on blockaffinities.crd.projectcalico.org "localhost-192-168-88-128-26": the object has been modified; please apply your changes to the latest version and try again Jul 1 08:38:41.026546 containerd[1595]: 2025-07-01 08:38:40.213 [WARNING][4170] ipam/ipam.go 212: Error marking affinity as pending as part of confirmation process cidr=192.168.88.128/26 error=update conflict: BlockAffinity(localhost-192-168-88-128-26) host="localhost" Jul 1 08:38:41.026546 containerd[1595]: 2025-07-01 08:38:40.213 [INFO][4170] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 1 08:38:41.026546 containerd[1595]: 2025-07-01 08:38:40.218 [INFO][4170] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 1 08:38:41.026546 containerd[1595]: 2025-07-01 08:38:40.252 [INFO][4170] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 1 08:38:41.026546 containerd[1595]: 2025-07-01 08:38:40.252 [INFO][4170] ipam/ipam.go 1220: Attempting to assign 1 addresses from block 
block=192.168.88.128/26 handle="k8s-pod-network.1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e" host="localhost" Jul 1 08:38:41.026720 containerd[1595]: 2025-07-01 08:38:40.255 [INFO][4170] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e Jul 1 08:38:41.026720 containerd[1595]: 2025-07-01 08:38:40.264 [INFO][4170] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e" host="localhost" Jul 1 08:38:41.026779 containerd[1595]: 2025-07-01 08:38:40.351 [ERROR][4170] ipam/customresource.go 184: Error updating resource Key=IPAMBlock(192-168-88-128-26) Name="192-168-88-128-26" Resource="IPAMBlocks" Value=&v3.IPAMBlock{TypeMeta:v1.TypeMeta{Kind:"IPAMBlock", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"192-168-88-128-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.IPAMBlockSpec{CIDR:"192.168.88.128/26", Affinity:(*string)(0xc000417b30), Allocations:[]*int{(*int)(0xc00078ced8), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), 
(*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil)}, Unallocated:[]int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63}, Attributes:[]v3.AllocationAttribute{v3.AllocationAttribute{AttrPrimary:(*string)(0xc000416190), AttrSecondary:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-56f667675c-mzg28", "timestamp":"2025-07-01 08:38:40.089968601 +0000 UTC"}}}, SequenceNumber:0x184e13d7387424c0, SequenceNumberForAllocation:map[string]uint64{"0":0x184e13d7387424bf}, Deleted:false, DeprecatedStrictAffinity:false}} error=Operation cannot be fulfilled on ipamblocks.crd.projectcalico.org "192-168-88-128-26": the object has been modified; please apply your changes to the latest version and try again Jul 1 08:38:41.026779 containerd[1595]: 2025-07-01 08:38:40.351 [INFO][4170] ipam/ipam.go 1247: Failed to update block block=192.168.88.128/26 error=update conflict: IPAMBlock(192-168-88-128-26) handle="k8s-pod-network.1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e" host="localhost" Jul 1 08:38:41.026779 containerd[1595]: 2025-07-01 08:38:40.444 [INFO][4170] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e" host="localhost" Jul 1 08:38:41.026779 containerd[1595]: 2025-07-01 08:38:40.446 [INFO][4170] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e Jul 1 08:38:41.026779 containerd[1595]: 
2025-07-01 08:38:40.487 [INFO][4170] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e" host="localhost" Jul 1 08:38:41.026779 containerd[1595]: 2025-07-01 08:38:40.501 [INFO][4170] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e" host="localhost" Jul 1 08:38:41.026779 containerd[1595]: 2025-07-01 08:38:40.501 [INFO][4170] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e" host="localhost" Jul 1 08:38:41.026779 containerd[1595]: 2025-07-01 08:38:40.501 [INFO][4170] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:38:41.026779 containerd[1595]: 2025-07-01 08:38:40.501 [INFO][4170] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e" HandleID="k8s-pod-network.1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e" Workload="localhost-k8s-calico--apiserver--56f667675c--mzg28-eth0" Jul 1 08:38:41.027055 containerd[1595]: 2025-07-01 08:38:40.508 [INFO][4116] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e" Namespace="calico-apiserver" Pod="calico-apiserver-56f667675c-mzg28" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f667675c--mzg28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56f667675c--mzg28-eth0", GenerateName:"calico-apiserver-56f667675c-", Namespace:"calico-apiserver", SelfLink:"", UID:"041baa15-5621-4055-a53c-77c22a6b659e", ResourceVersion:"870", 
Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 38, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56f667675c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-56f667675c-mzg28", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali19ea3a1e559", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:38:41.027055 containerd[1595]: 2025-07-01 08:38:40.508 [INFO][4116] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e" Namespace="calico-apiserver" Pod="calico-apiserver-56f667675c-mzg28" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f667675c--mzg28-eth0" Jul 1 08:38:41.027055 containerd[1595]: 2025-07-01 08:38:40.508 [INFO][4116] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali19ea3a1e559 ContainerID="1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e" Namespace="calico-apiserver" Pod="calico-apiserver-56f667675c-mzg28" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f667675c--mzg28-eth0" Jul 1 08:38:41.027055 containerd[1595]: 2025-07-01 08:38:40.530 [INFO][4116] cni-plugin/dataplane_linux.go 508: 
Disabling IPv4 forwarding ContainerID="1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e" Namespace="calico-apiserver" Pod="calico-apiserver-56f667675c-mzg28" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f667675c--mzg28-eth0" Jul 1 08:38:41.027055 containerd[1595]: 2025-07-01 08:38:40.531 [INFO][4116] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e" Namespace="calico-apiserver" Pod="calico-apiserver-56f667675c-mzg28" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f667675c--mzg28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56f667675c--mzg28-eth0", GenerateName:"calico-apiserver-56f667675c-", Namespace:"calico-apiserver", SelfLink:"", UID:"041baa15-5621-4055-a53c-77c22a6b659e", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 38, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56f667675c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e", Pod:"calico-apiserver-56f667675c-mzg28", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali19ea3a1e559", MAC:"0e:20:3a:00:05:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:38:41.027055 containerd[1595]: 2025-07-01 08:38:41.017 [INFO][4116] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e" Namespace="calico-apiserver" Pod="calico-apiserver-56f667675c-mzg28" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f667675c--mzg28-eth0" Jul 1 08:38:41.034581 systemd-networkd[1484]: cali1b7bdb6fe7a: Link UP Jul 1 08:38:41.034937 systemd-networkd[1484]: cali1b7bdb6fe7a: Gained carrier Jul 1 08:38:41.090788 containerd[1595]: 2025-07-01 08:38:39.868 [INFO][4157] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 1 08:38:41.090788 containerd[1595]: 2025-07-01 08:38:39.993 [INFO][4157] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--prnpp-eth0 csi-node-driver- calico-system 81a57d7c-7149-4271-9274-afe15b367e85 746 0 2025-07-01 08:38:08 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-prnpp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1b7bdb6fe7a [] [] }} ContainerID="ecb1f038eeaf0e0b2332606726c99e04dcc104b4d810e2794986e9002c8d0432" Namespace="calico-system" Pod="csi-node-driver-prnpp" WorkloadEndpoint="localhost-k8s-csi--node--driver--prnpp-" Jul 1 08:38:41.090788 containerd[1595]: 2025-07-01 08:38:39.995 [INFO][4157] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="ecb1f038eeaf0e0b2332606726c99e04dcc104b4d810e2794986e9002c8d0432" Namespace="calico-system" Pod="csi-node-driver-prnpp" WorkloadEndpoint="localhost-k8s-csi--node--driver--prnpp-eth0" Jul 1 08:38:41.090788 containerd[1595]: 2025-07-01 08:38:40.089 [INFO][4177] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ecb1f038eeaf0e0b2332606726c99e04dcc104b4d810e2794986e9002c8d0432" HandleID="k8s-pod-network.ecb1f038eeaf0e0b2332606726c99e04dcc104b4d810e2794986e9002c8d0432" Workload="localhost-k8s-csi--node--driver--prnpp-eth0" Jul 1 08:38:41.090788 containerd[1595]: 2025-07-01 08:38:40.090 [INFO][4177] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ecb1f038eeaf0e0b2332606726c99e04dcc104b4d810e2794986e9002c8d0432" HandleID="k8s-pod-network.ecb1f038eeaf0e0b2332606726c99e04dcc104b4d810e2794986e9002c8d0432" Workload="localhost-k8s-csi--node--driver--prnpp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e1990), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-prnpp", "timestamp":"2025-07-01 08:38:40.089955877 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:38:41.090788 containerd[1595]: 2025-07-01 08:38:40.090 [INFO][4177] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:38:41.090788 containerd[1595]: 2025-07-01 08:38:40.501 [INFO][4177] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 1 08:38:41.090788 containerd[1595]: 2025-07-01 08:38:40.501 [INFO][4177] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 1 08:38:41.090788 containerd[1595]: 2025-07-01 08:38:40.514 [INFO][4177] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ecb1f038eeaf0e0b2332606726c99e04dcc104b4d810e2794986e9002c8d0432" host="localhost" Jul 1 08:38:41.090788 containerd[1595]: 2025-07-01 08:38:40.529 [INFO][4177] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 1 08:38:41.090788 containerd[1595]: 2025-07-01 08:38:40.536 [INFO][4177] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 1 08:38:41.090788 containerd[1595]: 2025-07-01 08:38:40.540 [INFO][4177] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 1 08:38:41.090788 containerd[1595]: 2025-07-01 08:38:40.544 [INFO][4177] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 1 08:38:41.090788 containerd[1595]: 2025-07-01 08:38:40.544 [INFO][4177] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ecb1f038eeaf0e0b2332606726c99e04dcc104b4d810e2794986e9002c8d0432" host="localhost" Jul 1 08:38:41.090788 containerd[1595]: 2025-07-01 08:38:40.546 [INFO][4177] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ecb1f038eeaf0e0b2332606726c99e04dcc104b4d810e2794986e9002c8d0432 Jul 1 08:38:41.090788 containerd[1595]: 2025-07-01 08:38:40.569 [INFO][4177] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ecb1f038eeaf0e0b2332606726c99e04dcc104b4d810e2794986e9002c8d0432" host="localhost" Jul 1 08:38:41.090788 containerd[1595]: 2025-07-01 08:38:41.019 [INFO][4177] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.ecb1f038eeaf0e0b2332606726c99e04dcc104b4d810e2794986e9002c8d0432" host="localhost" Jul 1 08:38:41.090788 containerd[1595]: 2025-07-01 08:38:41.019 [INFO][4177] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ecb1f038eeaf0e0b2332606726c99e04dcc104b4d810e2794986e9002c8d0432" host="localhost" Jul 1 08:38:41.090788 containerd[1595]: 2025-07-01 08:38:41.019 [INFO][4177] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:38:41.090788 containerd[1595]: 2025-07-01 08:38:41.019 [INFO][4177] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ecb1f038eeaf0e0b2332606726c99e04dcc104b4d810e2794986e9002c8d0432" HandleID="k8s-pod-network.ecb1f038eeaf0e0b2332606726c99e04dcc104b4d810e2794986e9002c8d0432" Workload="localhost-k8s-csi--node--driver--prnpp-eth0" Jul 1 08:38:41.093973 containerd[1595]: 2025-07-01 08:38:41.025 [INFO][4157] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ecb1f038eeaf0e0b2332606726c99e04dcc104b4d810e2794986e9002c8d0432" Namespace="calico-system" Pod="csi-node-driver-prnpp" WorkloadEndpoint="localhost-k8s-csi--node--driver--prnpp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--prnpp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"81a57d7c-7149-4271-9274-afe15b367e85", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 38, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-prnpp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1b7bdb6fe7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:38:41.093973 containerd[1595]: 2025-07-01 08:38:41.025 [INFO][4157] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ecb1f038eeaf0e0b2332606726c99e04dcc104b4d810e2794986e9002c8d0432" Namespace="calico-system" Pod="csi-node-driver-prnpp" WorkloadEndpoint="localhost-k8s-csi--node--driver--prnpp-eth0" Jul 1 08:38:41.093973 containerd[1595]: 2025-07-01 08:38:41.025 [INFO][4157] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b7bdb6fe7a ContainerID="ecb1f038eeaf0e0b2332606726c99e04dcc104b4d810e2794986e9002c8d0432" Namespace="calico-system" Pod="csi-node-driver-prnpp" WorkloadEndpoint="localhost-k8s-csi--node--driver--prnpp-eth0" Jul 1 08:38:41.093973 containerd[1595]: 2025-07-01 08:38:41.034 [INFO][4157] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ecb1f038eeaf0e0b2332606726c99e04dcc104b4d810e2794986e9002c8d0432" Namespace="calico-system" Pod="csi-node-driver-prnpp" WorkloadEndpoint="localhost-k8s-csi--node--driver--prnpp-eth0" Jul 1 08:38:41.093973 containerd[1595]: 2025-07-01 08:38:41.037 [INFO][4157] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ecb1f038eeaf0e0b2332606726c99e04dcc104b4d810e2794986e9002c8d0432" 
Namespace="calico-system" Pod="csi-node-driver-prnpp" WorkloadEndpoint="localhost-k8s-csi--node--driver--prnpp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--prnpp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"81a57d7c-7149-4271-9274-afe15b367e85", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 38, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ecb1f038eeaf0e0b2332606726c99e04dcc104b4d810e2794986e9002c8d0432", Pod:"csi-node-driver-prnpp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1b7bdb6fe7a", MAC:"26:2c:27:78:d0:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:38:41.093973 containerd[1595]: 2025-07-01 08:38:41.081 [INFO][4157] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ecb1f038eeaf0e0b2332606726c99e04dcc104b4d810e2794986e9002c8d0432" Namespace="calico-system" Pod="csi-node-driver-prnpp" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--prnpp-eth0" Jul 1 08:38:41.158373 containerd[1595]: time="2025-07-01T08:38:41.158308403Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8df2b53a63425f1e08b4d286b920cdd64c0dfd384ff7cd82453a0eac09d9f07f\" id:\"14e8a005106fc79d1894cb955fb7d5a45dd4955596d3241fafa82dc593800e1a\" pid:4248 exit_status:1 exited_at:{seconds:1751359121 nanos:157978324}" Jul 1 08:38:41.274365 systemd-networkd[1484]: cali2e66cd4e082: Link UP Jul 1 08:38:41.275329 systemd-networkd[1484]: cali2e66cd4e082: Gained carrier Jul 1 08:38:41.306002 containerd[1595]: 2025-07-01 08:38:40.542 [INFO][4201] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 1 08:38:41.306002 containerd[1595]: 2025-07-01 08:38:41.026 [INFO][4201] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7f7846d754--zvgtw-eth0 whisker-7f7846d754- calico-system 66b501c7-d205-4be7-b310-b89ad5a1f814 963 0 2025-07-01 08:38:40 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7f7846d754 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7f7846d754-zvgtw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2e66cd4e082 [] [] }} ContainerID="590bcdc8a566ff9fe05387d9e53bfd00e8dc74084147fa6d06e222feeb345be4" Namespace="calico-system" Pod="whisker-7f7846d754-zvgtw" WorkloadEndpoint="localhost-k8s-whisker--7f7846d754--zvgtw-" Jul 1 08:38:41.306002 containerd[1595]: 2025-07-01 08:38:41.026 [INFO][4201] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="590bcdc8a566ff9fe05387d9e53bfd00e8dc74084147fa6d06e222feeb345be4" Namespace="calico-system" Pod="whisker-7f7846d754-zvgtw" WorkloadEndpoint="localhost-k8s-whisker--7f7846d754--zvgtw-eth0" Jul 1 08:38:41.306002 containerd[1595]: 2025-07-01 08:38:41.071 [INFO][4227] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="590bcdc8a566ff9fe05387d9e53bfd00e8dc74084147fa6d06e222feeb345be4" HandleID="k8s-pod-network.590bcdc8a566ff9fe05387d9e53bfd00e8dc74084147fa6d06e222feeb345be4" Workload="localhost-k8s-whisker--7f7846d754--zvgtw-eth0" Jul 1 08:38:41.306002 containerd[1595]: 2025-07-01 08:38:41.072 [INFO][4227] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="590bcdc8a566ff9fe05387d9e53bfd00e8dc74084147fa6d06e222feeb345be4" HandleID="k8s-pod-network.590bcdc8a566ff9fe05387d9e53bfd00e8dc74084147fa6d06e222feeb345be4" Workload="localhost-k8s-whisker--7f7846d754--zvgtw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001385e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7f7846d754-zvgtw", "timestamp":"2025-07-01 08:38:41.071761454 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:38:41.306002 containerd[1595]: 2025-07-01 08:38:41.072 [INFO][4227] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:38:41.306002 containerd[1595]: 2025-07-01 08:38:41.072 [INFO][4227] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 1 08:38:41.306002 containerd[1595]: 2025-07-01 08:38:41.072 [INFO][4227] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 1 08:38:41.306002 containerd[1595]: 2025-07-01 08:38:41.085 [INFO][4227] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.590bcdc8a566ff9fe05387d9e53bfd00e8dc74084147fa6d06e222feeb345be4" host="localhost" Jul 1 08:38:41.306002 containerd[1595]: 2025-07-01 08:38:41.092 [INFO][4227] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 1 08:38:41.306002 containerd[1595]: 2025-07-01 08:38:41.098 [INFO][4227] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 1 08:38:41.306002 containerd[1595]: 2025-07-01 08:38:41.102 [INFO][4227] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 1 08:38:41.306002 containerd[1595]: 2025-07-01 08:38:41.105 [INFO][4227] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 1 08:38:41.306002 containerd[1595]: 2025-07-01 08:38:41.105 [INFO][4227] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.590bcdc8a566ff9fe05387d9e53bfd00e8dc74084147fa6d06e222feeb345be4" host="localhost" Jul 1 08:38:41.306002 containerd[1595]: 2025-07-01 08:38:41.107 [INFO][4227] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.590bcdc8a566ff9fe05387d9e53bfd00e8dc74084147fa6d06e222feeb345be4 Jul 1 08:38:41.306002 containerd[1595]: 2025-07-01 08:38:41.251 [INFO][4227] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.590bcdc8a566ff9fe05387d9e53bfd00e8dc74084147fa6d06e222feeb345be4" host="localhost" Jul 1 08:38:41.306002 containerd[1595]: 2025-07-01 08:38:41.263 [INFO][4227] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.590bcdc8a566ff9fe05387d9e53bfd00e8dc74084147fa6d06e222feeb345be4" host="localhost" Jul 1 08:38:41.306002 containerd[1595]: 2025-07-01 08:38:41.265 [INFO][4227] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.590bcdc8a566ff9fe05387d9e53bfd00e8dc74084147fa6d06e222feeb345be4" host="localhost" Jul 1 08:38:41.306002 containerd[1595]: 2025-07-01 08:38:41.266 [INFO][4227] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:38:41.306002 containerd[1595]: 2025-07-01 08:38:41.266 [INFO][4227] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="590bcdc8a566ff9fe05387d9e53bfd00e8dc74084147fa6d06e222feeb345be4" HandleID="k8s-pod-network.590bcdc8a566ff9fe05387d9e53bfd00e8dc74084147fa6d06e222feeb345be4" Workload="localhost-k8s-whisker--7f7846d754--zvgtw-eth0" Jul 1 08:38:41.307241 containerd[1595]: 2025-07-01 08:38:41.270 [INFO][4201] cni-plugin/k8s.go 418: Populated endpoint ContainerID="590bcdc8a566ff9fe05387d9e53bfd00e8dc74084147fa6d06e222feeb345be4" Namespace="calico-system" Pod="whisker-7f7846d754-zvgtw" WorkloadEndpoint="localhost-k8s-whisker--7f7846d754--zvgtw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7f7846d754--zvgtw-eth0", GenerateName:"whisker-7f7846d754-", Namespace:"calico-system", SelfLink:"", UID:"66b501c7-d205-4be7-b310-b89ad5a1f814", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 38, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f7846d754", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7f7846d754-zvgtw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2e66cd4e082", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:38:41.307241 containerd[1595]: 2025-07-01 08:38:41.270 [INFO][4201] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="590bcdc8a566ff9fe05387d9e53bfd00e8dc74084147fa6d06e222feeb345be4" Namespace="calico-system" Pod="whisker-7f7846d754-zvgtw" WorkloadEndpoint="localhost-k8s-whisker--7f7846d754--zvgtw-eth0" Jul 1 08:38:41.307241 containerd[1595]: 2025-07-01 08:38:41.270 [INFO][4201] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e66cd4e082 ContainerID="590bcdc8a566ff9fe05387d9e53bfd00e8dc74084147fa6d06e222feeb345be4" Namespace="calico-system" Pod="whisker-7f7846d754-zvgtw" WorkloadEndpoint="localhost-k8s-whisker--7f7846d754--zvgtw-eth0" Jul 1 08:38:41.307241 containerd[1595]: 2025-07-01 08:38:41.277 [INFO][4201] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="590bcdc8a566ff9fe05387d9e53bfd00e8dc74084147fa6d06e222feeb345be4" Namespace="calico-system" Pod="whisker-7f7846d754-zvgtw" WorkloadEndpoint="localhost-k8s-whisker--7f7846d754--zvgtw-eth0" Jul 1 08:38:41.307241 containerd[1595]: 2025-07-01 08:38:41.279 [INFO][4201] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="590bcdc8a566ff9fe05387d9e53bfd00e8dc74084147fa6d06e222feeb345be4" Namespace="calico-system" Pod="whisker-7f7846d754-zvgtw" 
WorkloadEndpoint="localhost-k8s-whisker--7f7846d754--zvgtw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7f7846d754--zvgtw-eth0", GenerateName:"whisker-7f7846d754-", Namespace:"calico-system", SelfLink:"", UID:"66b501c7-d205-4be7-b310-b89ad5a1f814", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 38, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f7846d754", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"590bcdc8a566ff9fe05387d9e53bfd00e8dc74084147fa6d06e222feeb345be4", Pod:"whisker-7f7846d754-zvgtw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2e66cd4e082", MAC:"da:1c:bd:1c:06:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:38:41.307241 containerd[1595]: 2025-07-01 08:38:41.298 [INFO][4201] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="590bcdc8a566ff9fe05387d9e53bfd00e8dc74084147fa6d06e222feeb345be4" Namespace="calico-system" Pod="whisker-7f7846d754-zvgtw" WorkloadEndpoint="localhost-k8s-whisker--7f7846d754--zvgtw-eth0" Jul 1 08:38:41.383517 containerd[1595]: time="2025-07-01T08:38:41.383430051Z" level=info msg="connecting to shim 
1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e" address="unix:///run/containerd/s/7d846de43aaf2564060392f60a18d6264b5947a916b36bc986cb2a0e3dc9a85f" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:38:41.386197 containerd[1595]: time="2025-07-01T08:38:41.386108706Z" level=info msg="connecting to shim 590bcdc8a566ff9fe05387d9e53bfd00e8dc74084147fa6d06e222feeb345be4" address="unix:///run/containerd/s/1f9793803315eb0422aeebebd425dadb36907b454729483e1eb8381e4fa5e6b9" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:38:41.387047 containerd[1595]: time="2025-07-01T08:38:41.386736905Z" level=info msg="connecting to shim ecb1f038eeaf0e0b2332606726c99e04dcc104b4d810e2794986e9002c8d0432" address="unix:///run/containerd/s/bd43c8b8257212096b102bc97485aad657b4ed243fe8d789580c7b5deed17014" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:38:41.454037 systemd[1]: Started cri-containerd-590bcdc8a566ff9fe05387d9e53bfd00e8dc74084147fa6d06e222feeb345be4.scope - libcontainer container 590bcdc8a566ff9fe05387d9e53bfd00e8dc74084147fa6d06e222feeb345be4. Jul 1 08:38:41.462641 systemd[1]: Started cri-containerd-1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e.scope - libcontainer container 1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e. Jul 1 08:38:41.465765 systemd[1]: Started cri-containerd-ecb1f038eeaf0e0b2332606726c99e04dcc104b4d810e2794986e9002c8d0432.scope - libcontainer container ecb1f038eeaf0e0b2332606726c99e04dcc104b4d810e2794986e9002c8d0432. 
Jul 1 08:38:41.502034 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 1 08:38:41.510280 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 1 08:38:41.519899 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 1 08:38:42.057179 containerd[1595]: time="2025-07-01T08:38:42.057074512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-prnpp,Uid:81a57d7c-7149-4271-9274-afe15b367e85,Namespace:calico-system,Attempt:0,} returns sandbox id \"ecb1f038eeaf0e0b2332606726c99e04dcc104b4d810e2794986e9002c8d0432\"" Jul 1 08:38:42.066279 systemd-networkd[1484]: vxlan.calico: Link UP Jul 1 08:38:42.066293 systemd-networkd[1484]: vxlan.calico: Gained carrier Jul 1 08:38:42.146418 containerd[1595]: time="2025-07-01T08:38:42.146372506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 1 08:38:42.194013 systemd-networkd[1484]: cali19ea3a1e559: Gained IPv6LL Jul 1 08:38:42.257853 systemd-networkd[1484]: cali1b7bdb6fe7a: Gained IPv6LL Jul 1 08:38:42.622839 containerd[1595]: time="2025-07-01T08:38:42.622605428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56f667675c-mzg28,Uid:041baa15-5621-4055-a53c-77c22a6b659e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e\"" Jul 1 08:38:42.764181 containerd[1595]: time="2025-07-01T08:38:42.764110944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f7846d754-zvgtw,Uid:66b501c7-d205-4be7-b310-b89ad5a1f814,Namespace:calico-system,Attempt:0,} returns sandbox id \"590bcdc8a566ff9fe05387d9e53bfd00e8dc74084147fa6d06e222feeb345be4\"" Jul 1 08:38:43.090000 systemd-networkd[1484]: cali2e66cd4e082: Gained IPv6LL Jul 1 08:38:43.473891 systemd-networkd[1484]: vxlan.calico: 
Gained IPv6LL Jul 1 08:38:45.420552 systemd[1]: Started sshd@7-10.0.0.80:22-10.0.0.1:57978.service - OpenSSH per-connection server daemon (10.0.0.1:57978). Jul 1 08:38:45.624133 sshd[4608]: Accepted publickey for core from 10.0.0.1 port 57978 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:38:45.626092 sshd-session[4608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:38:45.632430 systemd-logind[1560]: New session 8 of user core. Jul 1 08:38:45.642967 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 1 08:38:46.266801 sshd[4611]: Connection closed by 10.0.0.1 port 57978 Jul 1 08:38:46.267209 sshd-session[4608]: pam_unix(sshd:session): session closed for user core Jul 1 08:38:46.272615 systemd[1]: sshd@7-10.0.0.80:22-10.0.0.1:57978.service: Deactivated successfully. Jul 1 08:38:46.275202 systemd[1]: session-8.scope: Deactivated successfully. Jul 1 08:38:46.276537 systemd-logind[1560]: Session 8 logged out. Waiting for processes to exit. Jul 1 08:38:46.277990 systemd-logind[1560]: Removed session 8. 
Jul 1 08:38:47.746542 kubelet[2724]: E0701 08:38:47.746470 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:38:47.747290 containerd[1595]: time="2025-07-01T08:38:47.746959795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wtpdv,Uid:68c217b9-4f4d-48d1-bb9a-d276adb2fb78,Namespace:kube-system,Attempt:0,}" Jul 1 08:38:48.093578 systemd-networkd[1484]: caliaabc492c21b: Link UP Jul 1 08:38:48.093815 systemd-networkd[1484]: caliaabc492c21b: Gained carrier Jul 1 08:38:48.133722 containerd[1595]: 2025-07-01 08:38:47.895 [INFO][4635] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--wtpdv-eth0 coredns-7c65d6cfc9- kube-system 68c217b9-4f4d-48d1-bb9a-d276adb2fb78 859 0 2025-07-01 08:37:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-wtpdv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaabc492c21b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="11414688970bc31f2f918d84677e53d0463eef2d2277fce82d34962e2c6554df" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wtpdv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wtpdv-" Jul 1 08:38:48.133722 containerd[1595]: 2025-07-01 08:38:47.895 [INFO][4635] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="11414688970bc31f2f918d84677e53d0463eef2d2277fce82d34962e2c6554df" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wtpdv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wtpdv-eth0" Jul 1 08:38:48.133722 containerd[1595]: 2025-07-01 08:38:47.942 [INFO][4649] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="11414688970bc31f2f918d84677e53d0463eef2d2277fce82d34962e2c6554df" HandleID="k8s-pod-network.11414688970bc31f2f918d84677e53d0463eef2d2277fce82d34962e2c6554df" Workload="localhost-k8s-coredns--7c65d6cfc9--wtpdv-eth0" Jul 1 08:38:48.133722 containerd[1595]: 2025-07-01 08:38:47.942 [INFO][4649] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="11414688970bc31f2f918d84677e53d0463eef2d2277fce82d34962e2c6554df" HandleID="k8s-pod-network.11414688970bc31f2f918d84677e53d0463eef2d2277fce82d34962e2c6554df" Workload="localhost-k8s-coredns--7c65d6cfc9--wtpdv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000494500), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-wtpdv", "timestamp":"2025-07-01 08:38:47.942088658 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:38:48.133722 containerd[1595]: 2025-07-01 08:38:47.942 [INFO][4649] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:38:48.133722 containerd[1595]: 2025-07-01 08:38:47.942 [INFO][4649] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 1 08:38:48.133722 containerd[1595]: 2025-07-01 08:38:47.942 [INFO][4649] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 1 08:38:48.133722 containerd[1595]: 2025-07-01 08:38:47.949 [INFO][4649] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.11414688970bc31f2f918d84677e53d0463eef2d2277fce82d34962e2c6554df" host="localhost" Jul 1 08:38:48.133722 containerd[1595]: 2025-07-01 08:38:47.954 [INFO][4649] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 1 08:38:48.133722 containerd[1595]: 2025-07-01 08:38:47.958 [INFO][4649] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 1 08:38:48.133722 containerd[1595]: 2025-07-01 08:38:47.960 [INFO][4649] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 1 08:38:48.133722 containerd[1595]: 2025-07-01 08:38:47.962 [INFO][4649] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 1 08:38:48.133722 containerd[1595]: 2025-07-01 08:38:47.962 [INFO][4649] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.11414688970bc31f2f918d84677e53d0463eef2d2277fce82d34962e2c6554df" host="localhost" Jul 1 08:38:48.133722 containerd[1595]: 2025-07-01 08:38:47.964 [INFO][4649] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.11414688970bc31f2f918d84677e53d0463eef2d2277fce82d34962e2c6554df Jul 1 08:38:48.133722 containerd[1595]: 2025-07-01 08:38:48.007 [INFO][4649] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.11414688970bc31f2f918d84677e53d0463eef2d2277fce82d34962e2c6554df" host="localhost" Jul 1 08:38:48.133722 containerd[1595]: 2025-07-01 08:38:48.087 [INFO][4649] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.11414688970bc31f2f918d84677e53d0463eef2d2277fce82d34962e2c6554df" host="localhost" Jul 1 08:38:48.133722 containerd[1595]: 2025-07-01 08:38:48.087 [INFO][4649] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.11414688970bc31f2f918d84677e53d0463eef2d2277fce82d34962e2c6554df" host="localhost" Jul 1 08:38:48.133722 containerd[1595]: 2025-07-01 08:38:48.087 [INFO][4649] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:38:48.133722 containerd[1595]: 2025-07-01 08:38:48.087 [INFO][4649] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="11414688970bc31f2f918d84677e53d0463eef2d2277fce82d34962e2c6554df" HandleID="k8s-pod-network.11414688970bc31f2f918d84677e53d0463eef2d2277fce82d34962e2c6554df" Workload="localhost-k8s-coredns--7c65d6cfc9--wtpdv-eth0" Jul 1 08:38:48.134550 containerd[1595]: 2025-07-01 08:38:48.090 [INFO][4635] cni-plugin/k8s.go 418: Populated endpoint ContainerID="11414688970bc31f2f918d84677e53d0463eef2d2277fce82d34962e2c6554df" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wtpdv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wtpdv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--wtpdv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"68c217b9-4f4d-48d1-bb9a-d276adb2fb78", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-wtpdv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaabc492c21b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:38:48.134550 containerd[1595]: 2025-07-01 08:38:48.090 [INFO][4635] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="11414688970bc31f2f918d84677e53d0463eef2d2277fce82d34962e2c6554df" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wtpdv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wtpdv-eth0" Jul 1 08:38:48.134550 containerd[1595]: 2025-07-01 08:38:48.091 [INFO][4635] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaabc492c21b ContainerID="11414688970bc31f2f918d84677e53d0463eef2d2277fce82d34962e2c6554df" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wtpdv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wtpdv-eth0" Jul 1 08:38:48.134550 containerd[1595]: 2025-07-01 08:38:48.094 [INFO][4635] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="11414688970bc31f2f918d84677e53d0463eef2d2277fce82d34962e2c6554df" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wtpdv" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wtpdv-eth0" Jul 1 08:38:48.134550 containerd[1595]: 2025-07-01 08:38:48.094 [INFO][4635] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="11414688970bc31f2f918d84677e53d0463eef2d2277fce82d34962e2c6554df" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wtpdv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wtpdv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--wtpdv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"68c217b9-4f4d-48d1-bb9a-d276adb2fb78", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"11414688970bc31f2f918d84677e53d0463eef2d2277fce82d34962e2c6554df", Pod:"coredns-7c65d6cfc9-wtpdv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaabc492c21b", MAC:"c2:01:4d:bc:08:22", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:38:48.134842 containerd[1595]: 2025-07-01 08:38:48.130 [INFO][4635] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="11414688970bc31f2f918d84677e53d0463eef2d2277fce82d34962e2c6554df" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wtpdv" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wtpdv-eth0" Jul 1 08:38:48.520611 containerd[1595]: time="2025-07-01T08:38:48.520005485Z" level=info msg="connecting to shim 11414688970bc31f2f918d84677e53d0463eef2d2277fce82d34962e2c6554df" address="unix:///run/containerd/s/a98ce29df8e2617ae5a601ff6adaaadeb71f219282cc9dc64eb384699667feb5" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:38:48.561199 systemd[1]: Started cri-containerd-11414688970bc31f2f918d84677e53d0463eef2d2277fce82d34962e2c6554df.scope - libcontainer container 11414688970bc31f2f918d84677e53d0463eef2d2277fce82d34962e2c6554df. 
Jul 1 08:38:48.581593 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 1 08:38:48.641697 containerd[1595]: time="2025-07-01T08:38:48.641624789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wtpdv,Uid:68c217b9-4f4d-48d1-bb9a-d276adb2fb78,Namespace:kube-system,Attempt:0,} returns sandbox id \"11414688970bc31f2f918d84677e53d0463eef2d2277fce82d34962e2c6554df\"" Jul 1 08:38:48.642596 kubelet[2724]: E0701 08:38:48.642534 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:38:48.646151 containerd[1595]: time="2025-07-01T08:38:48.645980849Z" level=info msg="CreateContainer within sandbox \"11414688970bc31f2f918d84677e53d0463eef2d2277fce82d34962e2c6554df\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 1 08:38:48.709938 containerd[1595]: time="2025-07-01T08:38:48.709826643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:38:48.747732 containerd[1595]: time="2025-07-01T08:38:48.747648695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785dd9b466-gfqj5,Uid:956b37b9-1ba9-40e9-be7f-b28196b02c8c,Namespace:calico-apiserver,Attempt:0,}" Jul 1 08:38:48.793760 containerd[1595]: time="2025-07-01T08:38:48.793712678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 1 08:38:48.816952 containerd[1595]: time="2025-07-01T08:38:48.816886393Z" level=info msg="Container b627cce738587a8374cfa3c4bdc8c85884826ea68777aedcdcbbda0335cf5dd0: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:38:48.817563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3197264569.mount: Deactivated successfully. 
Jul 1 08:38:48.822111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2802602469.mount: Deactivated successfully. Jul 1 08:38:48.828267 containerd[1595]: time="2025-07-01T08:38:48.828149589Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:38:49.039455 systemd-networkd[1484]: cali47819b53430: Link UP Jul 1 08:38:49.039740 systemd-networkd[1484]: cali47819b53430: Gained carrier Jul 1 08:38:49.095318 containerd[1595]: 2025-07-01 08:38:48.892 [INFO][4724] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--785dd9b466--gfqj5-eth0 calico-apiserver-785dd9b466- calico-apiserver 956b37b9-1ba9-40e9-be7f-b28196b02c8c 866 0 2025-07-01 08:38:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:785dd9b466 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-785dd9b466-gfqj5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali47819b53430 [] [] }} ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Namespace="calico-apiserver" Pod="calico-apiserver-785dd9b466-gfqj5" WorkloadEndpoint="localhost-k8s-calico--apiserver--785dd9b466--gfqj5-" Jul 1 08:38:49.095318 containerd[1595]: 2025-07-01 08:38:48.892 [INFO][4724] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Namespace="calico-apiserver" Pod="calico-apiserver-785dd9b466-gfqj5" WorkloadEndpoint="localhost-k8s-calico--apiserver--785dd9b466--gfqj5-eth0" Jul 1 08:38:49.095318 containerd[1595]: 2025-07-01 08:38:48.919 [INFO][4740] ipam/ipam_plugin.go 225: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" HandleID="k8s-pod-network.6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Workload="localhost-k8s-calico--apiserver--785dd9b466--gfqj5-eth0" Jul 1 08:38:49.095318 containerd[1595]: 2025-07-01 08:38:48.919 [INFO][4740] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" HandleID="k8s-pod-network.6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Workload="localhost-k8s-calico--apiserver--785dd9b466--gfqj5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000502850), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-785dd9b466-gfqj5", "timestamp":"2025-07-01 08:38:48.919591726 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:38:49.095318 containerd[1595]: 2025-07-01 08:38:48.919 [INFO][4740] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:38:49.095318 containerd[1595]: 2025-07-01 08:38:48.919 [INFO][4740] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 1 08:38:49.095318 containerd[1595]: 2025-07-01 08:38:48.919 [INFO][4740] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 1 08:38:49.095318 containerd[1595]: 2025-07-01 08:38:48.925 [INFO][4740] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" host="localhost" Jul 1 08:38:49.095318 containerd[1595]: 2025-07-01 08:38:48.929 [INFO][4740] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 1 08:38:49.095318 containerd[1595]: 2025-07-01 08:38:48.933 [INFO][4740] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 1 08:38:49.095318 containerd[1595]: 2025-07-01 08:38:48.934 [INFO][4740] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 1 08:38:49.095318 containerd[1595]: 2025-07-01 08:38:48.936 [INFO][4740] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 1 08:38:49.095318 containerd[1595]: 2025-07-01 08:38:48.936 [INFO][4740] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" host="localhost" Jul 1 08:38:49.095318 containerd[1595]: 2025-07-01 08:38:48.937 [INFO][4740] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7 Jul 1 08:38:49.095318 containerd[1595]: 2025-07-01 08:38:48.951 [INFO][4740] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" host="localhost" Jul 1 08:38:49.095318 containerd[1595]: 2025-07-01 08:38:49.034 [INFO][4740] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" host="localhost" Jul 1 08:38:49.095318 containerd[1595]: 2025-07-01 08:38:49.034 [INFO][4740] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" host="localhost" Jul 1 08:38:49.095318 containerd[1595]: 2025-07-01 08:38:49.034 [INFO][4740] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:38:49.095318 containerd[1595]: 2025-07-01 08:38:49.034 [INFO][4740] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" HandleID="k8s-pod-network.6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Workload="localhost-k8s-calico--apiserver--785dd9b466--gfqj5-eth0" Jul 1 08:38:49.096069 containerd[1595]: 2025-07-01 08:38:49.037 [INFO][4724] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Namespace="calico-apiserver" Pod="calico-apiserver-785dd9b466-gfqj5" WorkloadEndpoint="localhost-k8s-calico--apiserver--785dd9b466--gfqj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--785dd9b466--gfqj5-eth0", GenerateName:"calico-apiserver-785dd9b466-", Namespace:"calico-apiserver", SelfLink:"", UID:"956b37b9-1ba9-40e9-be7f-b28196b02c8c", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 38, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"785dd9b466", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-785dd9b466-gfqj5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali47819b53430", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:38:49.096069 containerd[1595]: 2025-07-01 08:38:49.037 [INFO][4724] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Namespace="calico-apiserver" Pod="calico-apiserver-785dd9b466-gfqj5" WorkloadEndpoint="localhost-k8s-calico--apiserver--785dd9b466--gfqj5-eth0" Jul 1 08:38:49.096069 containerd[1595]: 2025-07-01 08:38:49.037 [INFO][4724] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali47819b53430 ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Namespace="calico-apiserver" Pod="calico-apiserver-785dd9b466-gfqj5" WorkloadEndpoint="localhost-k8s-calico--apiserver--785dd9b466--gfqj5-eth0" Jul 1 08:38:49.096069 containerd[1595]: 2025-07-01 08:38:49.039 [INFO][4724] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Namespace="calico-apiserver" Pod="calico-apiserver-785dd9b466-gfqj5" WorkloadEndpoint="localhost-k8s-calico--apiserver--785dd9b466--gfqj5-eth0" Jul 1 08:38:49.096069 containerd[1595]: 2025-07-01 08:38:49.039 [INFO][4724] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Namespace="calico-apiserver" Pod="calico-apiserver-785dd9b466-gfqj5" WorkloadEndpoint="localhost-k8s-calico--apiserver--785dd9b466--gfqj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--785dd9b466--gfqj5-eth0", GenerateName:"calico-apiserver-785dd9b466-", Namespace:"calico-apiserver", SelfLink:"", UID:"956b37b9-1ba9-40e9-be7f-b28196b02c8c", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 38, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"785dd9b466", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7", Pod:"calico-apiserver-785dd9b466-gfqj5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali47819b53430", MAC:"7e:43:51:f4:86:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:38:49.096069 containerd[1595]: 2025-07-01 08:38:49.091 [INFO][4724] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Namespace="calico-apiserver" Pod="calico-apiserver-785dd9b466-gfqj5" WorkloadEndpoint="localhost-k8s-calico--apiserver--785dd9b466--gfqj5-eth0" Jul 1 08:38:49.375987 containerd[1595]: time="2025-07-01T08:38:49.375177175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:38:49.375987 containerd[1595]: time="2025-07-01T08:38:49.375955540Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 7.22953869s" Jul 1 08:38:49.375987 containerd[1595]: time="2025-07-01T08:38:49.375987893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 1 08:38:49.377543 containerd[1595]: time="2025-07-01T08:38:49.377355036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 1 08:38:49.379205 containerd[1595]: time="2025-07-01T08:38:49.379158323Z" level=info msg="CreateContainer within sandbox \"ecb1f038eeaf0e0b2332606726c99e04dcc104b4d810e2794986e9002c8d0432\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 1 08:38:49.383280 containerd[1595]: time="2025-07-01T08:38:49.383219335Z" level=info msg="CreateContainer within sandbox \"11414688970bc31f2f918d84677e53d0463eef2d2277fce82d34962e2c6554df\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b627cce738587a8374cfa3c4bdc8c85884826ea68777aedcdcbbda0335cf5dd0\"" Jul 1 08:38:49.383872 containerd[1595]: time="2025-07-01T08:38:49.383839734Z" 
level=info msg="StartContainer for \"b627cce738587a8374cfa3c4bdc8c85884826ea68777aedcdcbbda0335cf5dd0\"" Jul 1 08:38:49.384826 containerd[1595]: time="2025-07-01T08:38:49.384733071Z" level=info msg="connecting to shim b627cce738587a8374cfa3c4bdc8c85884826ea68777aedcdcbbda0335cf5dd0" address="unix:///run/containerd/s/a98ce29df8e2617ae5a601ff6adaaadeb71f219282cc9dc64eb384699667feb5" protocol=ttrpc version=3 Jul 1 08:38:49.410894 systemd[1]: Started cri-containerd-b627cce738587a8374cfa3c4bdc8c85884826ea68777aedcdcbbda0335cf5dd0.scope - libcontainer container b627cce738587a8374cfa3c4bdc8c85884826ea68777aedcdcbbda0335cf5dd0. Jul 1 08:38:49.553926 systemd-networkd[1484]: caliaabc492c21b: Gained IPv6LL Jul 1 08:38:49.768087 containerd[1595]: time="2025-07-01T08:38:49.767917857Z" level=info msg="StartContainer for \"b627cce738587a8374cfa3c4bdc8c85884826ea68777aedcdcbbda0335cf5dd0\" returns successfully" Jul 1 08:38:49.876392 containerd[1595]: time="2025-07-01T08:38:49.876319605Z" level=info msg="Container fbd8ed5c33f683ca928e73c353c0f3e2f8a0a1c31de38710831a94125819f24d: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:38:49.915695 containerd[1595]: time="2025-07-01T08:38:49.915624051Z" level=info msg="CreateContainer within sandbox \"ecb1f038eeaf0e0b2332606726c99e04dcc104b4d810e2794986e9002c8d0432\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"fbd8ed5c33f683ca928e73c353c0f3e2f8a0a1c31de38710831a94125819f24d\"" Jul 1 08:38:49.916885 containerd[1595]: time="2025-07-01T08:38:49.916816426Z" level=info msg="StartContainer for \"fbd8ed5c33f683ca928e73c353c0f3e2f8a0a1c31de38710831a94125819f24d\"" Jul 1 08:38:49.919007 containerd[1595]: time="2025-07-01T08:38:49.918940994Z" level=info msg="connecting to shim fbd8ed5c33f683ca928e73c353c0f3e2f8a0a1c31de38710831a94125819f24d" address="unix:///run/containerd/s/bd43c8b8257212096b102bc97485aad657b4ed243fe8d789580c7b5deed17014" protocol=ttrpc version=3 Jul 1 08:38:49.943193 containerd[1595]: 
time="2025-07-01T08:38:49.943055350Z" level=info msg="connecting to shim 6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" address="unix:///run/containerd/s/2b2371b169156b3411cd8196c98c605e1159a8b17cb9a24ba774c89d698ea627" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:38:49.952197 systemd[1]: Started cri-containerd-fbd8ed5c33f683ca928e73c353c0f3e2f8a0a1c31de38710831a94125819f24d.scope - libcontainer container fbd8ed5c33f683ca928e73c353c0f3e2f8a0a1c31de38710831a94125819f24d. Jul 1 08:38:49.979288 kubelet[2724]: E0701 08:38:49.978962 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:38:49.991152 systemd[1]: Started cri-containerd-6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7.scope - libcontainer container 6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7. Jul 1 08:38:49.996459 kubelet[2724]: I0701 08:38:49.996388 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-wtpdv" podStartSLOduration=53.996365867 podStartE2EDuration="53.996365867s" podCreationTimestamp="2025-07-01 08:37:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:38:49.996226859 +0000 UTC m=+61.329457630" watchObservedRunningTime="2025-07-01 08:38:49.996365867 +0000 UTC m=+61.329596668" Jul 1 08:38:50.018528 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 1 08:38:50.078071 containerd[1595]: time="2025-07-01T08:38:50.077998491Z" level=info msg="StartContainer for \"fbd8ed5c33f683ca928e73c353c0f3e2f8a0a1c31de38710831a94125819f24d\" returns successfully" Jul 1 08:38:50.081091 containerd[1595]: time="2025-07-01T08:38:50.081040168Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-785dd9b466-gfqj5,Uid:956b37b9-1ba9-40e9-be7f-b28196b02c8c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7\"" Jul 1 08:38:50.747227 kubelet[2724]: E0701 08:38:50.746769 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:38:50.747474 containerd[1595]: time="2025-07-01T08:38:50.747369869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-fg4hg,Uid:032f515b-c70e-4420-9aba-ae73ba857da9,Namespace:calico-system,Attempt:0,}" Jul 1 08:38:50.748028 containerd[1595]: time="2025-07-01T08:38:50.747982032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r2l9j,Uid:96d984ee-a0f3-4d6a-a438-b9f5756b5666,Namespace:kube-system,Attempt:0,}" Jul 1 08:38:50.898034 systemd-networkd[1484]: cali47819b53430: Gained IPv6LL Jul 1 08:38:50.987628 kubelet[2724]: E0701 08:38:50.987568 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:38:51.127920 systemd-networkd[1484]: cali26233a911cc: Link UP Jul 1 08:38:51.128509 systemd-networkd[1484]: cali26233a911cc: Gained carrier Jul 1 08:38:51.176712 containerd[1595]: 2025-07-01 08:38:50.911 [INFO][4871] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--fg4hg-eth0 goldmane-58fd7646b9- calico-system 032f515b-c70e-4420-9aba-ae73ba857da9 867 0 2025-07-01 08:38:08 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost 
goldmane-58fd7646b9-fg4hg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali26233a911cc [] [] }} ContainerID="97277e90497ddd999132218114bf7de519297168ae80523ec1d92b725bb0f86e" Namespace="calico-system" Pod="goldmane-58fd7646b9-fg4hg" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--fg4hg-" Jul 1 08:38:51.176712 containerd[1595]: 2025-07-01 08:38:50.911 [INFO][4871] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="97277e90497ddd999132218114bf7de519297168ae80523ec1d92b725bb0f86e" Namespace="calico-system" Pod="goldmane-58fd7646b9-fg4hg" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--fg4hg-eth0" Jul 1 08:38:51.176712 containerd[1595]: 2025-07-01 08:38:50.948 [INFO][4908] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="97277e90497ddd999132218114bf7de519297168ae80523ec1d92b725bb0f86e" HandleID="k8s-pod-network.97277e90497ddd999132218114bf7de519297168ae80523ec1d92b725bb0f86e" Workload="localhost-k8s-goldmane--58fd7646b9--fg4hg-eth0" Jul 1 08:38:51.176712 containerd[1595]: 2025-07-01 08:38:50.948 [INFO][4908] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="97277e90497ddd999132218114bf7de519297168ae80523ec1d92b725bb0f86e" HandleID="k8s-pod-network.97277e90497ddd999132218114bf7de519297168ae80523ec1d92b725bb0f86e" Workload="localhost-k8s-goldmane--58fd7646b9--fg4hg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139ea0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-fg4hg", "timestamp":"2025-07-01 08:38:50.948124029 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:38:51.176712 containerd[1595]: 2025-07-01 08:38:50.948 [INFO][4908] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 1 08:38:51.176712 containerd[1595]: 2025-07-01 08:38:50.949 [INFO][4908] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 1 08:38:51.176712 containerd[1595]: 2025-07-01 08:38:50.949 [INFO][4908] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 1 08:38:51.176712 containerd[1595]: 2025-07-01 08:38:50.960 [INFO][4908] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.97277e90497ddd999132218114bf7de519297168ae80523ec1d92b725bb0f86e" host="localhost" Jul 1 08:38:51.176712 containerd[1595]: 2025-07-01 08:38:50.966 [INFO][4908] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 1 08:38:51.176712 containerd[1595]: 2025-07-01 08:38:50.971 [INFO][4908] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 1 08:38:51.176712 containerd[1595]: 2025-07-01 08:38:50.973 [INFO][4908] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 1 08:38:51.176712 containerd[1595]: 2025-07-01 08:38:50.977 [INFO][4908] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 1 08:38:51.176712 containerd[1595]: 2025-07-01 08:38:50.977 [INFO][4908] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.97277e90497ddd999132218114bf7de519297168ae80523ec1d92b725bb0f86e" host="localhost" Jul 1 08:38:51.176712 containerd[1595]: 2025-07-01 08:38:50.980 [INFO][4908] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.97277e90497ddd999132218114bf7de519297168ae80523ec1d92b725bb0f86e Jul 1 08:38:51.176712 containerd[1595]: 2025-07-01 08:38:51.014 [INFO][4908] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.97277e90497ddd999132218114bf7de519297168ae80523ec1d92b725bb0f86e" host="localhost" Jul 1 08:38:51.176712 containerd[1595]: 2025-07-01 08:38:51.120 [INFO][4908] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.97277e90497ddd999132218114bf7de519297168ae80523ec1d92b725bb0f86e" host="localhost" Jul 1 08:38:51.176712 containerd[1595]: 2025-07-01 08:38:51.120 [INFO][4908] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.97277e90497ddd999132218114bf7de519297168ae80523ec1d92b725bb0f86e" host="localhost" Jul 1 08:38:51.176712 containerd[1595]: 2025-07-01 08:38:51.120 [INFO][4908] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:38:51.176712 containerd[1595]: 2025-07-01 08:38:51.120 [INFO][4908] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="97277e90497ddd999132218114bf7de519297168ae80523ec1d92b725bb0f86e" HandleID="k8s-pod-network.97277e90497ddd999132218114bf7de519297168ae80523ec1d92b725bb0f86e" Workload="localhost-k8s-goldmane--58fd7646b9--fg4hg-eth0" Jul 1 08:38:51.180017 containerd[1595]: 2025-07-01 08:38:51.125 [INFO][4871] cni-plugin/k8s.go 418: Populated endpoint ContainerID="97277e90497ddd999132218114bf7de519297168ae80523ec1d92b725bb0f86e" Namespace="calico-system" Pod="goldmane-58fd7646b9-fg4hg" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--fg4hg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--fg4hg-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"032f515b-c70e-4420-9aba-ae73ba857da9", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 38, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-fg4hg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali26233a911cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:38:51.180017 containerd[1595]: 2025-07-01 08:38:51.125 [INFO][4871] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="97277e90497ddd999132218114bf7de519297168ae80523ec1d92b725bb0f86e" Namespace="calico-system" Pod="goldmane-58fd7646b9-fg4hg" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--fg4hg-eth0" Jul 1 08:38:51.180017 containerd[1595]: 2025-07-01 08:38:51.125 [INFO][4871] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26233a911cc ContainerID="97277e90497ddd999132218114bf7de519297168ae80523ec1d92b725bb0f86e" Namespace="calico-system" Pod="goldmane-58fd7646b9-fg4hg" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--fg4hg-eth0" Jul 1 08:38:51.180017 containerd[1595]: 2025-07-01 08:38:51.128 [INFO][4871] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="97277e90497ddd999132218114bf7de519297168ae80523ec1d92b725bb0f86e" Namespace="calico-system" Pod="goldmane-58fd7646b9-fg4hg" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--fg4hg-eth0" Jul 1 08:38:51.180017 containerd[1595]: 2025-07-01 08:38:51.129 [INFO][4871] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="97277e90497ddd999132218114bf7de519297168ae80523ec1d92b725bb0f86e" 
Namespace="calico-system" Pod="goldmane-58fd7646b9-fg4hg" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--fg4hg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--fg4hg-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"032f515b-c70e-4420-9aba-ae73ba857da9", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 38, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"97277e90497ddd999132218114bf7de519297168ae80523ec1d92b725bb0f86e", Pod:"goldmane-58fd7646b9-fg4hg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali26233a911cc", MAC:"da:68:a1:c4:57:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:38:51.180017 containerd[1595]: 2025-07-01 08:38:51.171 [INFO][4871] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="97277e90497ddd999132218114bf7de519297168ae80523ec1d92b725bb0f86e" Namespace="calico-system" Pod="goldmane-58fd7646b9-fg4hg" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--fg4hg-eth0" Jul 1 08:38:51.295831 systemd[1]: Started 
sshd@8-10.0.0.80:22-10.0.0.1:51866.service - OpenSSH per-connection server daemon (10.0.0.1:51866). Jul 1 08:38:51.409537 sshd[4933]: Accepted publickey for core from 10.0.0.1 port 51866 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:38:51.411929 sshd-session[4933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:38:51.416973 systemd-logind[1560]: New session 9 of user core. Jul 1 08:38:51.424885 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 1 08:38:51.434053 systemd-networkd[1484]: cali9754fd2ad65: Link UP Jul 1 08:38:51.436992 systemd-networkd[1484]: cali9754fd2ad65: Gained carrier Jul 1 08:38:51.484365 containerd[1595]: 2025-07-01 08:38:50.891 [INFO][4883] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--r2l9j-eth0 coredns-7c65d6cfc9- kube-system 96d984ee-a0f3-4d6a-a438-b9f5756b5666 868 0 2025-07-01 08:37:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-r2l9j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9754fd2ad65 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="556fc88fc6b77119c3065689871e2f72381d1c54f1ee28146f38f93fe701d939" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r2l9j" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--r2l9j-" Jul 1 08:38:51.484365 containerd[1595]: 2025-07-01 08:38:50.892 [INFO][4883] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="556fc88fc6b77119c3065689871e2f72381d1c54f1ee28146f38f93fe701d939" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r2l9j" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--r2l9j-eth0" Jul 1 08:38:51.484365 containerd[1595]: 2025-07-01 08:38:51.018 [INFO][4901] ipam/ipam_plugin.go 
225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="556fc88fc6b77119c3065689871e2f72381d1c54f1ee28146f38f93fe701d939" HandleID="k8s-pod-network.556fc88fc6b77119c3065689871e2f72381d1c54f1ee28146f38f93fe701d939" Workload="localhost-k8s-coredns--7c65d6cfc9--r2l9j-eth0" Jul 1 08:38:51.484365 containerd[1595]: 2025-07-01 08:38:51.018 [INFO][4901] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="556fc88fc6b77119c3065689871e2f72381d1c54f1ee28146f38f93fe701d939" HandleID="k8s-pod-network.556fc88fc6b77119c3065689871e2f72381d1c54f1ee28146f38f93fe701d939" Workload="localhost-k8s-coredns--7c65d6cfc9--r2l9j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002defe0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-r2l9j", "timestamp":"2025-07-01 08:38:51.018224179 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:38:51.484365 containerd[1595]: 2025-07-01 08:38:51.018 [INFO][4901] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:38:51.484365 containerd[1595]: 2025-07-01 08:38:51.120 [INFO][4901] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 1 08:38:51.484365 containerd[1595]: 2025-07-01 08:38:51.121 [INFO][4901] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 1 08:38:51.484365 containerd[1595]: 2025-07-01 08:38:51.171 [INFO][4901] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.556fc88fc6b77119c3065689871e2f72381d1c54f1ee28146f38f93fe701d939" host="localhost" Jul 1 08:38:51.484365 containerd[1595]: 2025-07-01 08:38:51.193 [INFO][4901] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 1 08:38:51.484365 containerd[1595]: 2025-07-01 08:38:51.205 [INFO][4901] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 1 08:38:51.484365 containerd[1595]: 2025-07-01 08:38:51.208 [INFO][4901] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 1 08:38:51.484365 containerd[1595]: 2025-07-01 08:38:51.211 [INFO][4901] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 1 08:38:51.484365 containerd[1595]: 2025-07-01 08:38:51.211 [INFO][4901] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.556fc88fc6b77119c3065689871e2f72381d1c54f1ee28146f38f93fe701d939" host="localhost" Jul 1 08:38:51.484365 containerd[1595]: 2025-07-01 08:38:51.212 [INFO][4901] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.556fc88fc6b77119c3065689871e2f72381d1c54f1ee28146f38f93fe701d939 Jul 1 08:38:51.484365 containerd[1595]: 2025-07-01 08:38:51.337 [INFO][4901] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.556fc88fc6b77119c3065689871e2f72381d1c54f1ee28146f38f93fe701d939" host="localhost" Jul 1 08:38:51.484365 containerd[1595]: 2025-07-01 08:38:51.427 [INFO][4901] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.556fc88fc6b77119c3065689871e2f72381d1c54f1ee28146f38f93fe701d939" host="localhost" Jul 1 08:38:51.484365 containerd[1595]: 2025-07-01 08:38:51.427 [INFO][4901] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.556fc88fc6b77119c3065689871e2f72381d1c54f1ee28146f38f93fe701d939" host="localhost" Jul 1 08:38:51.484365 containerd[1595]: 2025-07-01 08:38:51.427 [INFO][4901] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:38:51.484365 containerd[1595]: 2025-07-01 08:38:51.427 [INFO][4901] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="556fc88fc6b77119c3065689871e2f72381d1c54f1ee28146f38f93fe701d939" HandleID="k8s-pod-network.556fc88fc6b77119c3065689871e2f72381d1c54f1ee28146f38f93fe701d939" Workload="localhost-k8s-coredns--7c65d6cfc9--r2l9j-eth0" Jul 1 08:38:51.485205 containerd[1595]: 2025-07-01 08:38:51.431 [INFO][4883] cni-plugin/k8s.go 418: Populated endpoint ContainerID="556fc88fc6b77119c3065689871e2f72381d1c54f1ee28146f38f93fe701d939" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r2l9j" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--r2l9j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--r2l9j-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"96d984ee-a0f3-4d6a-a438-b9f5756b5666", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 37, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-r2l9j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9754fd2ad65", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:38:51.485205 containerd[1595]: 2025-07-01 08:38:51.431 [INFO][4883] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="556fc88fc6b77119c3065689871e2f72381d1c54f1ee28146f38f93fe701d939" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r2l9j" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--r2l9j-eth0" Jul 1 08:38:51.485205 containerd[1595]: 2025-07-01 08:38:51.431 [INFO][4883] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9754fd2ad65 ContainerID="556fc88fc6b77119c3065689871e2f72381d1c54f1ee28146f38f93fe701d939" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r2l9j" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--r2l9j-eth0" Jul 1 08:38:51.485205 containerd[1595]: 2025-07-01 08:38:51.435 [INFO][4883] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="556fc88fc6b77119c3065689871e2f72381d1c54f1ee28146f38f93fe701d939" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r2l9j" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--r2l9j-eth0" Jul 1 08:38:51.485205 containerd[1595]: 2025-07-01 08:38:51.439 [INFO][4883] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="556fc88fc6b77119c3065689871e2f72381d1c54f1ee28146f38f93fe701d939" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r2l9j" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--r2l9j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--r2l9j-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"96d984ee-a0f3-4d6a-a438-b9f5756b5666", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 37, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"556fc88fc6b77119c3065689871e2f72381d1c54f1ee28146f38f93fe701d939", Pod:"coredns-7c65d6cfc9-r2l9j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9754fd2ad65", MAC:"b2:95:8c:65:c2:95", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:38:51.485446 containerd[1595]: 2025-07-01 08:38:51.479 [INFO][4883] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="556fc88fc6b77119c3065689871e2f72381d1c54f1ee28146f38f93fe701d939" Namespace="kube-system" Pod="coredns-7c65d6cfc9-r2l9j" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--r2l9j-eth0" Jul 1 08:38:51.655601 containerd[1595]: time="2025-07-01T08:38:51.655549234Z" level=info msg="connecting to shim 97277e90497ddd999132218114bf7de519297168ae80523ec1d92b725bb0f86e" address="unix:///run/containerd/s/1f7f2abd25399024d69a5b270db131d272335bce5b6bebb0a81c084bef05ec3a" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:38:51.659276 sshd[4936]: Connection closed by 10.0.0.1 port 51866 Jul 1 08:38:51.659614 sshd-session[4933]: pam_unix(sshd:session): session closed for user core Jul 1 08:38:51.671971 systemd[1]: sshd@8-10.0.0.80:22-10.0.0.1:51866.service: Deactivated successfully. Jul 1 08:38:51.675025 systemd[1]: session-9.scope: Deactivated successfully. Jul 1 08:38:51.677945 systemd-logind[1560]: Session 9 logged out. Waiting for processes to exit. Jul 1 08:38:51.682077 systemd-logind[1560]: Removed session 9. Jul 1 08:38:51.693953 systemd[1]: Started cri-containerd-97277e90497ddd999132218114bf7de519297168ae80523ec1d92b725bb0f86e.scope - libcontainer container 97277e90497ddd999132218114bf7de519297168ae80523ec1d92b725bb0f86e. 
Jul 1 08:38:51.711048 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 1 08:38:51.747650 containerd[1595]: time="2025-07-01T08:38:51.747607839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785dd9b466-97bdw,Uid:9c8054ba-10de-47da-9909-9fedeb482d2a,Namespace:calico-apiserver,Attempt:0,}" Jul 1 08:38:51.747795 containerd[1595]: time="2025-07-01T08:38:51.747668477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cc74d4c7f-4blk4,Uid:3a670ab3-ff29-4888-96ee-f1733e954198,Namespace:calico-system,Attempt:0,}" Jul 1 08:38:51.859371 containerd[1595]: time="2025-07-01T08:38:51.859319273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-fg4hg,Uid:032f515b-c70e-4420-9aba-ae73ba857da9,Namespace:calico-system,Attempt:0,} returns sandbox id \"97277e90497ddd999132218114bf7de519297168ae80523ec1d92b725bb0f86e\"" Jul 1 08:38:51.991001 kubelet[2724]: E0701 08:38:51.990575 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:38:52.361917 systemd-networkd[1484]: cali4ad64b1ccf1: Link UP Jul 1 08:38:52.363396 systemd-networkd[1484]: cali4ad64b1ccf1: Gained carrier Jul 1 08:38:52.459009 containerd[1595]: 2025-07-01 08:38:52.155 [INFO][5005] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--785dd9b466--97bdw-eth0 calico-apiserver-785dd9b466- calico-apiserver 9c8054ba-10de-47da-9909-9fedeb482d2a 863 0 2025-07-01 08:38:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:785dd9b466 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost 
calico-apiserver-785dd9b466-97bdw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4ad64b1ccf1 [] [] }} ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Namespace="calico-apiserver" Pod="calico-apiserver-785dd9b466-97bdw" WorkloadEndpoint="localhost-k8s-calico--apiserver--785dd9b466--97bdw-" Jul 1 08:38:52.459009 containerd[1595]: 2025-07-01 08:38:52.155 [INFO][5005] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Namespace="calico-apiserver" Pod="calico-apiserver-785dd9b466-97bdw" WorkloadEndpoint="localhost-k8s-calico--apiserver--785dd9b466--97bdw-eth0" Jul 1 08:38:52.459009 containerd[1595]: 2025-07-01 08:38:52.191 [INFO][5035] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" HandleID="k8s-pod-network.f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Workload="localhost-k8s-calico--apiserver--785dd9b466--97bdw-eth0" Jul 1 08:38:52.459009 containerd[1595]: 2025-07-01 08:38:52.191 [INFO][5035] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" HandleID="k8s-pod-network.f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Workload="localhost-k8s-calico--apiserver--785dd9b466--97bdw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a3420), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-785dd9b466-97bdw", "timestamp":"2025-07-01 08:38:52.19103704 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:38:52.459009 containerd[1595]: 2025-07-01 
08:38:52.191 [INFO][5035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:38:52.459009 containerd[1595]: 2025-07-01 08:38:52.191 [INFO][5035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 1 08:38:52.459009 containerd[1595]: 2025-07-01 08:38:52.191 [INFO][5035] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 1 08:38:52.459009 containerd[1595]: 2025-07-01 08:38:52.198 [INFO][5035] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" host="localhost" Jul 1 08:38:52.459009 containerd[1595]: 2025-07-01 08:38:52.203 [INFO][5035] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 1 08:38:52.459009 containerd[1595]: 2025-07-01 08:38:52.207 [INFO][5035] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 1 08:38:52.459009 containerd[1595]: 2025-07-01 08:38:52.208 [INFO][5035] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 1 08:38:52.459009 containerd[1595]: 2025-07-01 08:38:52.210 [INFO][5035] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 1 08:38:52.459009 containerd[1595]: 2025-07-01 08:38:52.210 [INFO][5035] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" host="localhost" Jul 1 08:38:52.459009 containerd[1595]: 2025-07-01 08:38:52.212 [INFO][5035] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d Jul 1 08:38:52.459009 containerd[1595]: 2025-07-01 08:38:52.225 [INFO][5035] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" host="localhost" Jul 1 
08:38:52.459009 containerd[1595]: 2025-07-01 08:38:52.354 [INFO][5035] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" host="localhost" Jul 1 08:38:52.459009 containerd[1595]: 2025-07-01 08:38:52.354 [INFO][5035] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" host="localhost" Jul 1 08:38:52.459009 containerd[1595]: 2025-07-01 08:38:52.354 [INFO][5035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:38:52.459009 containerd[1595]: 2025-07-01 08:38:52.354 [INFO][5035] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" HandleID="k8s-pod-network.f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Workload="localhost-k8s-calico--apiserver--785dd9b466--97bdw-eth0" Jul 1 08:38:52.460258 containerd[1595]: 2025-07-01 08:38:52.358 [INFO][5005] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Namespace="calico-apiserver" Pod="calico-apiserver-785dd9b466-97bdw" WorkloadEndpoint="localhost-k8s-calico--apiserver--785dd9b466--97bdw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--785dd9b466--97bdw-eth0", GenerateName:"calico-apiserver-785dd9b466-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c8054ba-10de-47da-9909-9fedeb482d2a", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 38, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"785dd9b466", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-785dd9b466-97bdw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4ad64b1ccf1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:38:52.460258 containerd[1595]: 2025-07-01 08:38:52.358 [INFO][5005] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Namespace="calico-apiserver" Pod="calico-apiserver-785dd9b466-97bdw" WorkloadEndpoint="localhost-k8s-calico--apiserver--785dd9b466--97bdw-eth0" Jul 1 08:38:52.460258 containerd[1595]: 2025-07-01 08:38:52.358 [INFO][5005] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4ad64b1ccf1 ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Namespace="calico-apiserver" Pod="calico-apiserver-785dd9b466-97bdw" WorkloadEndpoint="localhost-k8s-calico--apiserver--785dd9b466--97bdw-eth0" Jul 1 08:38:52.460258 containerd[1595]: 2025-07-01 08:38:52.364 [INFO][5005] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Namespace="calico-apiserver" Pod="calico-apiserver-785dd9b466-97bdw" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--785dd9b466--97bdw-eth0" Jul 1 08:38:52.460258 containerd[1595]: 2025-07-01 08:38:52.365 [INFO][5005] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Namespace="calico-apiserver" Pod="calico-apiserver-785dd9b466-97bdw" WorkloadEndpoint="localhost-k8s-calico--apiserver--785dd9b466--97bdw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--785dd9b466--97bdw-eth0", GenerateName:"calico-apiserver-785dd9b466-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c8054ba-10de-47da-9909-9fedeb482d2a", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 38, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"785dd9b466", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d", Pod:"calico-apiserver-785dd9b466-97bdw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4ad64b1ccf1", MAC:"6e:ec:a1:2c:83:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:38:52.460258 containerd[1595]: 2025-07-01 08:38:52.455 [INFO][5005] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Namespace="calico-apiserver" Pod="calico-apiserver-785dd9b466-97bdw" WorkloadEndpoint="localhost-k8s-calico--apiserver--785dd9b466--97bdw-eth0" Jul 1 08:38:52.585417 systemd-networkd[1484]: cali7220622da4c: Link UP Jul 1 08:38:52.586668 systemd-networkd[1484]: cali7220622da4c: Gained carrier Jul 1 08:38:52.654209 containerd[1595]: time="2025-07-01T08:38:52.654031696Z" level=info msg="connecting to shim 556fc88fc6b77119c3065689871e2f72381d1c54f1ee28146f38f93fe701d939" address="unix:///run/containerd/s/9dbb910f56975b65afd6bfac9d0244f81c872a175c63edda129154f3bf531959" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:38:52.712046 containerd[1595]: 2025-07-01 08:38:52.352 [INFO][5021] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6cc74d4c7f--4blk4-eth0 calico-kube-controllers-6cc74d4c7f- calico-system 3a670ab3-ff29-4888-96ee-f1733e954198 869 0 2025-07-01 08:38:08 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6cc74d4c7f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6cc74d4c7f-4blk4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7220622da4c [] [] }} ContainerID="3e1bfeaad2de3a3ae0dffb52fb182af334f65527b5b4063cd72c2d2da390b6ea" Namespace="calico-system" Pod="calico-kube-controllers-6cc74d4c7f-4blk4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cc74d4c7f--4blk4-" Jul 1 08:38:52.712046 containerd[1595]: 2025-07-01 08:38:52.352 [INFO][5021] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3e1bfeaad2de3a3ae0dffb52fb182af334f65527b5b4063cd72c2d2da390b6ea" Namespace="calico-system" Pod="calico-kube-controllers-6cc74d4c7f-4blk4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cc74d4c7f--4blk4-eth0" Jul 1 08:38:52.712046 containerd[1595]: 2025-07-01 08:38:52.436 [INFO][5045] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e1bfeaad2de3a3ae0dffb52fb182af334f65527b5b4063cd72c2d2da390b6ea" HandleID="k8s-pod-network.3e1bfeaad2de3a3ae0dffb52fb182af334f65527b5b4063cd72c2d2da390b6ea" Workload="localhost-k8s-calico--kube--controllers--6cc74d4c7f--4blk4-eth0" Jul 1 08:38:52.712046 containerd[1595]: 2025-07-01 08:38:52.436 [INFO][5045] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3e1bfeaad2de3a3ae0dffb52fb182af334f65527b5b4063cd72c2d2da390b6ea" HandleID="k8s-pod-network.3e1bfeaad2de3a3ae0dffb52fb182af334f65527b5b4063cd72c2d2da390b6ea" Workload="localhost-k8s-calico--kube--controllers--6cc74d4c7f--4blk4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a2e30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6cc74d4c7f-4blk4", "timestamp":"2025-07-01 08:38:52.43664512 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 1 08:38:52.712046 containerd[1595]: 2025-07-01 08:38:52.436 [INFO][5045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:38:52.712046 containerd[1595]: 2025-07-01 08:38:52.437 [INFO][5045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 1 08:38:52.712046 containerd[1595]: 2025-07-01 08:38:52.437 [INFO][5045] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 1 08:38:52.712046 containerd[1595]: 2025-07-01 08:38:52.508 [INFO][5045] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3e1bfeaad2de3a3ae0dffb52fb182af334f65527b5b4063cd72c2d2da390b6ea" host="localhost" Jul 1 08:38:52.712046 containerd[1595]: 2025-07-01 08:38:52.513 [INFO][5045] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 1 08:38:52.712046 containerd[1595]: 2025-07-01 08:38:52.519 [INFO][5045] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 1 08:38:52.712046 containerd[1595]: 2025-07-01 08:38:52.521 [INFO][5045] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 1 08:38:52.712046 containerd[1595]: 2025-07-01 08:38:52.524 [INFO][5045] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 1 08:38:52.712046 containerd[1595]: 2025-07-01 08:38:52.524 [INFO][5045] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3e1bfeaad2de3a3ae0dffb52fb182af334f65527b5b4063cd72c2d2da390b6ea" host="localhost" Jul 1 08:38:52.712046 containerd[1595]: 2025-07-01 08:38:52.526 [INFO][5045] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3e1bfeaad2de3a3ae0dffb52fb182af334f65527b5b4063cd72c2d2da390b6ea Jul 1 08:38:52.712046 containerd[1595]: 2025-07-01 08:38:52.553 [INFO][5045] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3e1bfeaad2de3a3ae0dffb52fb182af334f65527b5b4063cd72c2d2da390b6ea" host="localhost" Jul 1 08:38:52.712046 containerd[1595]: 2025-07-01 08:38:52.579 [INFO][5045] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 
handle="k8s-pod-network.3e1bfeaad2de3a3ae0dffb52fb182af334f65527b5b4063cd72c2d2da390b6ea" host="localhost" Jul 1 08:38:52.712046 containerd[1595]: 2025-07-01 08:38:52.579 [INFO][5045] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.3e1bfeaad2de3a3ae0dffb52fb182af334f65527b5b4063cd72c2d2da390b6ea" host="localhost" Jul 1 08:38:52.712046 containerd[1595]: 2025-07-01 08:38:52.579 [INFO][5045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:38:52.712046 containerd[1595]: 2025-07-01 08:38:52.580 [INFO][5045] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="3e1bfeaad2de3a3ae0dffb52fb182af334f65527b5b4063cd72c2d2da390b6ea" HandleID="k8s-pod-network.3e1bfeaad2de3a3ae0dffb52fb182af334f65527b5b4063cd72c2d2da390b6ea" Workload="localhost-k8s-calico--kube--controllers--6cc74d4c7f--4blk4-eth0" Jul 1 08:38:52.713733 containerd[1595]: 2025-07-01 08:38:52.583 [INFO][5021] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3e1bfeaad2de3a3ae0dffb52fb182af334f65527b5b4063cd72c2d2da390b6ea" Namespace="calico-system" Pod="calico-kube-controllers-6cc74d4c7f-4blk4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cc74d4c7f--4blk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6cc74d4c7f--4blk4-eth0", GenerateName:"calico-kube-controllers-6cc74d4c7f-", Namespace:"calico-system", SelfLink:"", UID:"3a670ab3-ff29-4888-96ee-f1733e954198", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 38, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cc74d4c7f", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6cc74d4c7f-4blk4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7220622da4c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:38:52.713733 containerd[1595]: 2025-07-01 08:38:52.583 [INFO][5021] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="3e1bfeaad2de3a3ae0dffb52fb182af334f65527b5b4063cd72c2d2da390b6ea" Namespace="calico-system" Pod="calico-kube-controllers-6cc74d4c7f-4blk4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cc74d4c7f--4blk4-eth0" Jul 1 08:38:52.713733 containerd[1595]: 2025-07-01 08:38:52.583 [INFO][5021] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7220622da4c ContainerID="3e1bfeaad2de3a3ae0dffb52fb182af334f65527b5b4063cd72c2d2da390b6ea" Namespace="calico-system" Pod="calico-kube-controllers-6cc74d4c7f-4blk4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cc74d4c7f--4blk4-eth0" Jul 1 08:38:52.713733 containerd[1595]: 2025-07-01 08:38:52.586 [INFO][5021] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3e1bfeaad2de3a3ae0dffb52fb182af334f65527b5b4063cd72c2d2da390b6ea" Namespace="calico-system" Pod="calico-kube-controllers-6cc74d4c7f-4blk4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cc74d4c7f--4blk4-eth0" Jul 1 08:38:52.713733 containerd[1595]: 2025-07-01 
08:38:52.587 [INFO][5021] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3e1bfeaad2de3a3ae0dffb52fb182af334f65527b5b4063cd72c2d2da390b6ea" Namespace="calico-system" Pod="calico-kube-controllers-6cc74d4c7f-4blk4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cc74d4c7f--4blk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6cc74d4c7f--4blk4-eth0", GenerateName:"calico-kube-controllers-6cc74d4c7f-", Namespace:"calico-system", SelfLink:"", UID:"3a670ab3-ff29-4888-96ee-f1733e954198", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.July, 1, 8, 38, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cc74d4c7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3e1bfeaad2de3a3ae0dffb52fb182af334f65527b5b4063cd72c2d2da390b6ea", Pod:"calico-kube-controllers-6cc74d4c7f-4blk4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7220622da4c", MAC:"46:a9:a1:1e:9d:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 1 08:38:52.713733 containerd[1595]: 2025-07-01 
08:38:52.707 [INFO][5021] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3e1bfeaad2de3a3ae0dffb52fb182af334f65527b5b4063cd72c2d2da390b6ea" Namespace="calico-system" Pod="calico-kube-controllers-6cc74d4c7f-4blk4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cc74d4c7f--4blk4-eth0" Jul 1 08:38:52.714938 systemd[1]: Started cri-containerd-556fc88fc6b77119c3065689871e2f72381d1c54f1ee28146f38f93fe701d939.scope - libcontainer container 556fc88fc6b77119c3065689871e2f72381d1c54f1ee28146f38f93fe701d939. Jul 1 08:38:52.737912 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 1 08:38:52.945987 systemd-networkd[1484]: cali9754fd2ad65: Gained IPv6LL Jul 1 08:38:53.125114 containerd[1595]: time="2025-07-01T08:38:53.125062522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r2l9j,Uid:96d984ee-a0f3-4d6a-a438-b9f5756b5666,Namespace:kube-system,Attempt:0,} returns sandbox id \"556fc88fc6b77119c3065689871e2f72381d1c54f1ee28146f38f93fe701d939\"" Jul 1 08:38:53.126036 kubelet[2724]: E0701 08:38:53.126005 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:38:53.128470 containerd[1595]: time="2025-07-01T08:38:53.128437477Z" level=info msg="CreateContainer within sandbox \"556fc88fc6b77119c3065689871e2f72381d1c54f1ee28146f38f93fe701d939\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 1 08:38:53.137860 systemd-networkd[1484]: cali26233a911cc: Gained IPv6LL Jul 1 08:38:53.448564 containerd[1595]: time="2025-07-01T08:38:53.448501784Z" level=info msg="connecting to shim f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" address="unix:///run/containerd/s/fb123c595e0d99604641d183b3aabff81b94cfd9fc159d1abdeee317f3c49d68" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:38:53.494067 
systemd[1]: Started cri-containerd-f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d.scope - libcontainer container f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d. Jul 1 08:38:53.519659 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 1 08:38:53.523448 containerd[1595]: time="2025-07-01T08:38:53.520446779Z" level=info msg="Container ecf78595fbe64f9dc10806f7f4f1f6d96f0ad76a239321bf84d73890445a9558: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:38:53.585739 containerd[1595]: time="2025-07-01T08:38:53.585648338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785dd9b466-97bdw,Uid:9c8054ba-10de-47da-9909-9fedeb482d2a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d\"" Jul 1 08:38:53.613496 containerd[1595]: time="2025-07-01T08:38:53.613385999Z" level=info msg="CreateContainer within sandbox \"556fc88fc6b77119c3065689871e2f72381d1c54f1ee28146f38f93fe701d939\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ecf78595fbe64f9dc10806f7f4f1f6d96f0ad76a239321bf84d73890445a9558\"" Jul 1 08:38:53.614515 containerd[1595]: time="2025-07-01T08:38:53.614416085Z" level=info msg="StartContainer for \"ecf78595fbe64f9dc10806f7f4f1f6d96f0ad76a239321bf84d73890445a9558\"" Jul 1 08:38:53.614603 containerd[1595]: time="2025-07-01T08:38:53.614484135Z" level=info msg="connecting to shim 3e1bfeaad2de3a3ae0dffb52fb182af334f65527b5b4063cd72c2d2da390b6ea" address="unix:///run/containerd/s/fff60e561d9d37020af05553c50e385d22094e47cb42bd4c15a266748273efe6" namespace=k8s.io protocol=ttrpc version=3 Jul 1 08:38:53.620254 containerd[1595]: time="2025-07-01T08:38:53.620177960Z" level=info msg="connecting to shim ecf78595fbe64f9dc10806f7f4f1f6d96f0ad76a239321bf84d73890445a9558" 
address="unix:///run/containerd/s/9dbb910f56975b65afd6bfac9d0244f81c872a175c63edda129154f3bf531959" protocol=ttrpc version=3 Jul 1 08:38:53.652062 systemd[1]: Started cri-containerd-3e1bfeaad2de3a3ae0dffb52fb182af334f65527b5b4063cd72c2d2da390b6ea.scope - libcontainer container 3e1bfeaad2de3a3ae0dffb52fb182af334f65527b5b4063cd72c2d2da390b6ea. Jul 1 08:38:53.655933 systemd[1]: Started cri-containerd-ecf78595fbe64f9dc10806f7f4f1f6d96f0ad76a239321bf84d73890445a9558.scope - libcontainer container ecf78595fbe64f9dc10806f7f4f1f6d96f0ad76a239321bf84d73890445a9558. Jul 1 08:38:53.685751 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 1 08:38:53.971509 systemd-networkd[1484]: cali7220622da4c: Gained IPv6LL Jul 1 08:38:54.098375 systemd-networkd[1484]: cali4ad64b1ccf1: Gained IPv6LL Jul 1 08:38:54.190176 containerd[1595]: time="2025-07-01T08:38:54.190127424Z" level=info msg="StartContainer for \"ecf78595fbe64f9dc10806f7f4f1f6d96f0ad76a239321bf84d73890445a9558\" returns successfully" Jul 1 08:38:54.219188 containerd[1595]: time="2025-07-01T08:38:54.219114943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cc74d4c7f-4blk4,Uid:3a670ab3-ff29-4888-96ee-f1733e954198,Namespace:calico-system,Attempt:0,} returns sandbox id \"3e1bfeaad2de3a3ae0dffb52fb182af334f65527b5b4063cd72c2d2da390b6ea\"" Jul 1 08:38:54.420716 containerd[1595]: time="2025-07-01T08:38:54.420617200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:38:54.427003 containerd[1595]: time="2025-07-01T08:38:54.426857088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 1 08:38:54.428592 containerd[1595]: time="2025-07-01T08:38:54.428341588Z" level=info msg="ImageCreate event 
name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:38:54.481050 containerd[1595]: time="2025-07-01T08:38:54.480963225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:38:54.481970 containerd[1595]: time="2025-07-01T08:38:54.481857598Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 5.104467353s" Jul 1 08:38:54.481970 containerd[1595]: time="2025-07-01T08:38:54.481927612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 1 08:38:54.483477 containerd[1595]: time="2025-07-01T08:38:54.483438332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 1 08:38:54.485749 containerd[1595]: time="2025-07-01T08:38:54.485124970Z" level=info msg="CreateContainer within sandbox \"1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 1 08:38:54.646213 containerd[1595]: time="2025-07-01T08:38:54.646142626Z" level=info msg="Container 62745ba16a319e92123459a06a0a9691ab48214dc7e0eadc78f076bedf0c457c: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:38:54.666997 containerd[1595]: time="2025-07-01T08:38:54.666905835Z" level=info msg="CreateContainer within sandbox \"1ac3ffaebba0ad86dc72b34b8fdab7938238b0244bcff3ed51b769b61fc9f43e\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"62745ba16a319e92123459a06a0a9691ab48214dc7e0eadc78f076bedf0c457c\"" Jul 1 08:38:54.667616 containerd[1595]: time="2025-07-01T08:38:54.667577658Z" level=info msg="StartContainer for \"62745ba16a319e92123459a06a0a9691ab48214dc7e0eadc78f076bedf0c457c\"" Jul 1 08:38:54.668895 containerd[1595]: time="2025-07-01T08:38:54.668859498Z" level=info msg="connecting to shim 62745ba16a319e92123459a06a0a9691ab48214dc7e0eadc78f076bedf0c457c" address="unix:///run/containerd/s/7d846de43aaf2564060392f60a18d6264b5947a916b36bc986cb2a0e3dc9a85f" protocol=ttrpc version=3 Jul 1 08:38:54.698167 systemd[1]: Started cri-containerd-62745ba16a319e92123459a06a0a9691ab48214dc7e0eadc78f076bedf0c457c.scope - libcontainer container 62745ba16a319e92123459a06a0a9691ab48214dc7e0eadc78f076bedf0c457c. Jul 1 08:38:54.827150 containerd[1595]: time="2025-07-01T08:38:54.827085907Z" level=info msg="StartContainer for \"62745ba16a319e92123459a06a0a9691ab48214dc7e0eadc78f076bedf0c457c\" returns successfully" Jul 1 08:38:55.200704 kubelet[2724]: E0701 08:38:55.200235 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:38:55.265232 kubelet[2724]: I0701 08:38:55.265099 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-r2l9j" podStartSLOduration=60.265073135 podStartE2EDuration="1m0.265073135s" podCreationTimestamp="2025-07-01 08:37:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-01 08:38:55.264555368 +0000 UTC m=+66.597786159" watchObservedRunningTime="2025-07-01 08:38:55.265073135 +0000 UTC m=+66.598303926" Jul 1 08:38:55.325591 kubelet[2724]: I0701 08:38:55.325489 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-apiserver/calico-apiserver-56f667675c-mzg28" podStartSLOduration=38.466652527 podStartE2EDuration="50.325422443s" podCreationTimestamp="2025-07-01 08:38:05 +0000 UTC" firstStartedPulling="2025-07-01 08:38:42.624514209 +0000 UTC m=+53.957744990" lastFinishedPulling="2025-07-01 08:38:54.483284095 +0000 UTC m=+65.816514906" observedRunningTime="2025-07-01 08:38:55.322871103 +0000 UTC m=+66.656101894" watchObservedRunningTime="2025-07-01 08:38:55.325422443 +0000 UTC m=+66.658653224" Jul 1 08:38:56.202632 kubelet[2724]: E0701 08:38:56.202594 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:38:56.203376 kubelet[2724]: I0701 08:38:56.202860 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 1 08:38:56.640043 containerd[1595]: time="2025-07-01T08:38:56.639967906Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:38:56.640824 containerd[1595]: time="2025-07-01T08:38:56.640767975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 1 08:38:56.642123 containerd[1595]: time="2025-07-01T08:38:56.642069027Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:38:56.644761 containerd[1595]: time="2025-07-01T08:38:56.644721117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:38:56.645350 containerd[1595]: time="2025-07-01T08:38:56.645293047Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id 
\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 2.16182624s" Jul 1 08:38:56.645350 containerd[1595]: time="2025-07-01T08:38:56.645341551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 1 08:38:56.648264 containerd[1595]: time="2025-07-01T08:38:56.648137488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 1 08:38:56.651037 containerd[1595]: time="2025-07-01T08:38:56.650990325Z" level=info msg="CreateContainer within sandbox \"590bcdc8a566ff9fe05387d9e53bfd00e8dc74084147fa6d06e222feeb345be4\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 1 08:38:56.660077 containerd[1595]: time="2025-07-01T08:38:56.660018068Z" level=info msg="Container 058df9d429dabd59381db69a82f826c5da6586679c49f6cc778376cd19db564a: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:38:56.669494 containerd[1595]: time="2025-07-01T08:38:56.669449266Z" level=info msg="CreateContainer within sandbox \"590bcdc8a566ff9fe05387d9e53bfd00e8dc74084147fa6d06e222feeb345be4\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"058df9d429dabd59381db69a82f826c5da6586679c49f6cc778376cd19db564a\"" Jul 1 08:38:56.670091 containerd[1595]: time="2025-07-01T08:38:56.670050032Z" level=info msg="StartContainer for \"058df9d429dabd59381db69a82f826c5da6586679c49f6cc778376cd19db564a\"" Jul 1 08:38:56.671345 containerd[1595]: time="2025-07-01T08:38:56.671274707Z" level=info msg="connecting to shim 058df9d429dabd59381db69a82f826c5da6586679c49f6cc778376cd19db564a" address="unix:///run/containerd/s/1f9793803315eb0422aeebebd425dadb36907b454729483e1eb8381e4fa5e6b9" protocol=ttrpc version=3 Jul 1 
08:38:56.684897 systemd[1]: Started sshd@9-10.0.0.80:22-10.0.0.1:51882.service - OpenSSH per-connection server daemon (10.0.0.1:51882). Jul 1 08:38:56.700126 systemd[1]: Started cri-containerd-058df9d429dabd59381db69a82f826c5da6586679c49f6cc778376cd19db564a.scope - libcontainer container 058df9d429dabd59381db69a82f826c5da6586679c49f6cc778376cd19db564a. Jul 1 08:38:56.820073 sshd[5310]: Accepted publickey for core from 10.0.0.1 port 51882 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:38:56.822851 sshd-session[5310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:38:56.826902 containerd[1595]: time="2025-07-01T08:38:56.826781999Z" level=info msg="StartContainer for \"058df9d429dabd59381db69a82f826c5da6586679c49f6cc778376cd19db564a\" returns successfully" Jul 1 08:38:56.836768 systemd-logind[1560]: New session 10 of user core. Jul 1 08:38:56.840056 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 1 08:38:57.157102 sshd[5335]: Connection closed by 10.0.0.1 port 51882 Jul 1 08:38:57.157525 sshd-session[5310]: pam_unix(sshd:session): session closed for user core Jul 1 08:38:57.163080 systemd[1]: sshd@9-10.0.0.80:22-10.0.0.1:51882.service: Deactivated successfully. Jul 1 08:38:57.166395 systemd[1]: session-10.scope: Deactivated successfully. Jul 1 08:38:57.168242 systemd-logind[1560]: Session 10 logged out. Waiting for processes to exit. Jul 1 08:38:57.170026 systemd-logind[1560]: Removed session 10. 
Jul 1 08:38:57.206859 kubelet[2724]: E0701 08:38:57.206804 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:38:59.281455 containerd[1595]: time="2025-07-01T08:38:59.281355264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:38:59.282581 containerd[1595]: time="2025-07-01T08:38:59.282542483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 1 08:38:59.284618 containerd[1595]: time="2025-07-01T08:38:59.284569263Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:38:59.287259 containerd[1595]: time="2025-07-01T08:38:59.287222296Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:38:59.287829 containerd[1595]: time="2025-07-01T08:38:59.287795926Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.639605778s" Jul 1 08:38:59.287917 containerd[1595]: time="2025-07-01T08:38:59.287831565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 1 
08:38:59.289020 containerd[1595]: time="2025-07-01T08:38:59.288979539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 1 08:38:59.290528 containerd[1595]: time="2025-07-01T08:38:59.290477715Z" level=info msg="CreateContainer within sandbox \"ecb1f038eeaf0e0b2332606726c99e04dcc104b4d810e2794986e9002c8d0432\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 1 08:38:59.302668 containerd[1595]: time="2025-07-01T08:38:59.302593310Z" level=info msg="Container f25fef0ecdf28909c035be59add8ecd59761bc313fe7d37483418314a65c7649: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:38:59.318567 containerd[1595]: time="2025-07-01T08:38:59.318480697Z" level=info msg="CreateContainer within sandbox \"ecb1f038eeaf0e0b2332606726c99e04dcc104b4d810e2794986e9002c8d0432\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f25fef0ecdf28909c035be59add8ecd59761bc313fe7d37483418314a65c7649\"" Jul 1 08:38:59.319260 containerd[1595]: time="2025-07-01T08:38:59.319204276Z" level=info msg="StartContainer for \"f25fef0ecdf28909c035be59add8ecd59761bc313fe7d37483418314a65c7649\"" Jul 1 08:38:59.321241 containerd[1595]: time="2025-07-01T08:38:59.321210307Z" level=info msg="connecting to shim f25fef0ecdf28909c035be59add8ecd59761bc313fe7d37483418314a65c7649" address="unix:///run/containerd/s/bd43c8b8257212096b102bc97485aad657b4ed243fe8d789580c7b5deed17014" protocol=ttrpc version=3 Jul 1 08:38:59.420888 systemd[1]: Started cri-containerd-f25fef0ecdf28909c035be59add8ecd59761bc313fe7d37483418314a65c7649.scope - libcontainer container f25fef0ecdf28909c035be59add8ecd59761bc313fe7d37483418314a65c7649. 
Jul 1 08:38:59.479072 containerd[1595]: time="2025-07-01T08:38:59.478988375Z" level=info msg="StartContainer for \"f25fef0ecdf28909c035be59add8ecd59761bc313fe7d37483418314a65c7649\" returns successfully" Jul 1 08:38:59.825693 kubelet[2724]: I0701 08:38:59.825643 2724 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 1 08:38:59.826241 kubelet[2724]: I0701 08:38:59.825717 2724 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 1 08:39:00.861317 containerd[1595]: time="2025-07-01T08:39:00.861232554Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:39:00.917979 containerd[1595]: time="2025-07-01T08:39:00.917869042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 1 08:39:00.931076 containerd[1595]: time="2025-07-01T08:39:00.931002818Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 1.641984004s" Jul 1 08:39:00.931076 containerd[1595]: time="2025-07-01T08:39:00.931053495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 1 08:39:00.932583 containerd[1595]: time="2025-07-01T08:39:00.932521271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 1 08:39:00.933885 containerd[1595]: time="2025-07-01T08:39:00.933850290Z" 
level=info msg="CreateContainer within sandbox \"6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 1 08:39:01.720880 containerd[1595]: time="2025-07-01T08:39:01.720789424Z" level=info msg="Container 5962666e92882386500b4f151225fdc25d5a3a945ae36a8e99a485dff7b0cf3c: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:39:02.128265 containerd[1595]: time="2025-07-01T08:39:02.128201836Z" level=info msg="CreateContainer within sandbox \"6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5962666e92882386500b4f151225fdc25d5a3a945ae36a8e99a485dff7b0cf3c\"" Jul 1 08:39:02.128989 containerd[1595]: time="2025-07-01T08:39:02.128934650Z" level=info msg="StartContainer for \"5962666e92882386500b4f151225fdc25d5a3a945ae36a8e99a485dff7b0cf3c\"" Jul 1 08:39:02.130470 containerd[1595]: time="2025-07-01T08:39:02.130411960Z" level=info msg="connecting to shim 5962666e92882386500b4f151225fdc25d5a3a945ae36a8e99a485dff7b0cf3c" address="unix:///run/containerd/s/2b2371b169156b3411cd8196c98c605e1159a8b17cb9a24ba774c89d698ea627" protocol=ttrpc version=3 Jul 1 08:39:02.162074 systemd[1]: Started cri-containerd-5962666e92882386500b4f151225fdc25d5a3a945ae36a8e99a485dff7b0cf3c.scope - libcontainer container 5962666e92882386500b4f151225fdc25d5a3a945ae36a8e99a485dff7b0cf3c. Jul 1 08:39:02.172888 systemd[1]: Started sshd@10-10.0.0.80:22-10.0.0.1:43426.service - OpenSSH per-connection server daemon (10.0.0.1:43426). Jul 1 08:39:02.247162 sshd[5409]: Accepted publickey for core from 10.0.0.1 port 43426 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:39:02.262901 sshd-session[5409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:39:02.271593 systemd-logind[1560]: New session 11 of user core. 
Jul 1 08:39:02.278853 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 1 08:39:02.293385 containerd[1595]: time="2025-07-01T08:39:02.293277400Z" level=info msg="StartContainer for \"5962666e92882386500b4f151225fdc25d5a3a945ae36a8e99a485dff7b0cf3c\" returns successfully" Jul 1 08:39:02.459202 sshd[5434]: Connection closed by 10.0.0.1 port 43426 Jul 1 08:39:02.461906 sshd-session[5409]: pam_unix(sshd:session): session closed for user core Jul 1 08:39:02.467196 systemd[1]: sshd@10-10.0.0.80:22-10.0.0.1:43426.service: Deactivated successfully. Jul 1 08:39:02.470194 systemd[1]: session-11.scope: Deactivated successfully. Jul 1 08:39:02.471497 systemd-logind[1560]: Session 11 logged out. Waiting for processes to exit. Jul 1 08:39:02.474470 systemd-logind[1560]: Removed session 11. Jul 1 08:39:03.725702 kubelet[2724]: I0701 08:39:03.725610 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-785dd9b466-gfqj5" podStartSLOduration=48.876385636 podStartE2EDuration="59.725589258s" podCreationTimestamp="2025-07-01 08:38:04 +0000 UTC" firstStartedPulling="2025-07-01 08:38:50.082689714 +0000 UTC m=+61.415920495" lastFinishedPulling="2025-07-01 08:39:00.931893316 +0000 UTC m=+72.265124117" observedRunningTime="2025-07-01 08:39:03.724566479 +0000 UTC m=+75.057797260" watchObservedRunningTime="2025-07-01 08:39:03.725589258 +0000 UTC m=+75.058820039" Jul 1 08:39:03.726402 kubelet[2724]: I0701 08:39:03.725788 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-prnpp" podStartSLOduration=38.581705072 podStartE2EDuration="55.725779772s" podCreationTimestamp="2025-07-01 08:38:08 +0000 UTC" firstStartedPulling="2025-07-01 08:38:42.144774478 +0000 UTC m=+53.478005249" lastFinishedPulling="2025-07-01 08:38:59.288849168 +0000 UTC m=+70.622079949" observedRunningTime="2025-07-01 08:39:00.466208668 +0000 UTC m=+71.799439459" watchObservedRunningTime="2025-07-01 
08:39:03.725779772 +0000 UTC m=+75.059010553" Jul 1 08:39:04.747494 kubelet[2724]: E0701 08:39:04.747429 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:39:07.184374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount581406307.mount: Deactivated successfully. Jul 1 08:39:07.473546 systemd[1]: Started sshd@11-10.0.0.80:22-10.0.0.1:43438.service - OpenSSH per-connection server daemon (10.0.0.1:43438). Jul 1 08:39:08.062054 kubelet[2724]: I0701 08:39:08.061989 2724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 1 08:39:08.208482 sshd[5467]: Accepted publickey for core from 10.0.0.1 port 43438 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:39:08.210542 sshd-session[5467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:39:08.216126 systemd-logind[1560]: New session 12 of user core. Jul 1 08:39:08.222934 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 1 08:39:08.693606 sshd[5471]: Connection closed by 10.0.0.1 port 43438 Jul 1 08:39:08.695476 sshd-session[5467]: pam_unix(sshd:session): session closed for user core Jul 1 08:39:08.705199 systemd[1]: sshd@11-10.0.0.80:22-10.0.0.1:43438.service: Deactivated successfully. Jul 1 08:39:08.707663 systemd[1]: session-12.scope: Deactivated successfully. Jul 1 08:39:08.708839 systemd-logind[1560]: Session 12 logged out. Waiting for processes to exit. Jul 1 08:39:08.712358 systemd[1]: Started sshd@12-10.0.0.80:22-10.0.0.1:53800.service - OpenSSH per-connection server daemon (10.0.0.1:53800). Jul 1 08:39:08.713112 systemd-logind[1560]: Removed session 12. Jul 1 08:39:08.780200 sshd-session[5491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:39:08.785421 systemd-logind[1560]: New session 13 of user core. 
Jul 1 08:39:08.882583 sshd[5491]: Accepted publickey for core from 10.0.0.1 port 53800 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:39:08.795917 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 1 08:39:10.946774 sshd[5494]: Connection closed by 10.0.0.1 port 53800 Jul 1 08:39:10.960091 systemd[1]: Started sshd@13-10.0.0.80:22-10.0.0.1:53812.service - OpenSSH per-connection server daemon (10.0.0.1:53812). Jul 1 08:39:11.087783 sshd-session[5491]: pam_unix(sshd:session): session closed for user core Jul 1 08:39:11.093806 systemd[1]: sshd@12-10.0.0.80:22-10.0.0.1:53800.service: Deactivated successfully. Jul 1 08:39:11.096914 systemd[1]: session-13.scope: Deactivated successfully. Jul 1 08:39:11.097915 systemd-logind[1560]: Session 13 logged out. Waiting for processes to exit. Jul 1 08:39:11.099862 systemd-logind[1560]: Removed session 13. Jul 1 08:39:11.216548 sshd[5505]: Accepted publickey for core from 10.0.0.1 port 53812 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:39:11.218501 sshd-session[5505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:39:11.224639 systemd-logind[1560]: New session 14 of user core. Jul 1 08:39:11.234226 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jul 1 08:39:11.278567 containerd[1595]: time="2025-07-01T08:39:11.278500582Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8df2b53a63425f1e08b4d286b920cdd64c0dfd384ff7cd82453a0eac09d9f07f\" id:\"53f1e8b0e77676ac92c8e1ba36eaaf315937108a48b7cc32e87d6d0530651437\" pid:5522 exited_at:{seconds:1751359151 nanos:278019083}" Jul 1 08:39:11.832854 containerd[1595]: time="2025-07-01T08:39:11.832666623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:39:11.841337 sshd[5535]: Connection closed by 10.0.0.1 port 53812 Jul 1 08:39:11.842087 sshd-session[5505]: pam_unix(sshd:session): session closed for user core Jul 1 08:39:11.846726 containerd[1595]: time="2025-07-01T08:39:11.846397877Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 1 08:39:11.848633 systemd-logind[1560]: Session 14 logged out. Waiting for processes to exit. Jul 1 08:39:11.849892 systemd[1]: sshd@13-10.0.0.80:22-10.0.0.1:53812.service: Deactivated successfully. Jul 1 08:39:11.852615 systemd[1]: session-14.scope: Deactivated successfully. Jul 1 08:39:11.854921 systemd-logind[1560]: Removed session 14. 
Jul 1 08:39:11.857702 containerd[1595]: time="2025-07-01T08:39:11.857592614Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:39:11.870127 containerd[1595]: time="2025-07-01T08:39:11.870042914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:39:11.872476 containerd[1595]: time="2025-07-01T08:39:11.871850482Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 10.939272032s" Jul 1 08:39:11.872476 containerd[1595]: time="2025-07-01T08:39:11.871939041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 1 08:39:11.875462 containerd[1595]: time="2025-07-01T08:39:11.875124838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 1 08:39:11.877467 containerd[1595]: time="2025-07-01T08:39:11.877393848Z" level=info msg="CreateContainer within sandbox \"97277e90497ddd999132218114bf7de519297168ae80523ec1d92b725bb0f86e\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 1 08:39:12.031591 containerd[1595]: time="2025-07-01T08:39:12.027526198Z" level=info msg="Container bdf97f33dfcc2902087a9ccb6c53140a9b99e4ec852ff22134e612bc5996ae71: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:39:12.509033 containerd[1595]: time="2025-07-01T08:39:12.508945939Z" level=info msg="CreateContainer within sandbox 
\"97277e90497ddd999132218114bf7de519297168ae80523ec1d92b725bb0f86e\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"bdf97f33dfcc2902087a9ccb6c53140a9b99e4ec852ff22134e612bc5996ae71\"" Jul 1 08:39:12.509631 containerd[1595]: time="2025-07-01T08:39:12.509607250Z" level=info msg="StartContainer for \"bdf97f33dfcc2902087a9ccb6c53140a9b99e4ec852ff22134e612bc5996ae71\"" Jul 1 08:39:12.511027 containerd[1595]: time="2025-07-01T08:39:12.510990267Z" level=info msg="connecting to shim bdf97f33dfcc2902087a9ccb6c53140a9b99e4ec852ff22134e612bc5996ae71" address="unix:///run/containerd/s/1f7f2abd25399024d69a5b270db131d272335bce5b6bebb0a81c084bef05ec3a" protocol=ttrpc version=3 Jul 1 08:39:12.542021 systemd[1]: Started cri-containerd-bdf97f33dfcc2902087a9ccb6c53140a9b99e4ec852ff22134e612bc5996ae71.scope - libcontainer container bdf97f33dfcc2902087a9ccb6c53140a9b99e4ec852ff22134e612bc5996ae71. Jul 1 08:39:12.821007 kubelet[2724]: E0701 08:39:12.820961 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:39:13.271548 containerd[1595]: time="2025-07-01T08:39:13.271184004Z" level=info msg="StartContainer for \"bdf97f33dfcc2902087a9ccb6c53140a9b99e4ec852ff22134e612bc5996ae71\" returns successfully" Jul 1 08:39:13.515349 containerd[1595]: time="2025-07-01T08:39:13.515269331Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:39:13.692271 containerd[1595]: time="2025-07-01T08:39:13.691990637Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 1 08:39:13.695220 containerd[1595]: time="2025-07-01T08:39:13.695166731Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id 
\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 1.819971098s" Jul 1 08:39:13.695592 containerd[1595]: time="2025-07-01T08:39:13.695219763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 1 08:39:13.696547 containerd[1595]: time="2025-07-01T08:39:13.696517847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 1 08:39:13.698315 containerd[1595]: time="2025-07-01T08:39:13.698284724Z" level=info msg="CreateContainer within sandbox \"f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 1 08:39:14.054047 containerd[1595]: time="2025-07-01T08:39:14.053795960Z" level=info msg="Container 7c05aa63b88df200562fb641909eb57496d286d16942d69b0737ecdbfb54ade7: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:39:14.176201 containerd[1595]: time="2025-07-01T08:39:14.176019898Z" level=info msg="CreateContainer within sandbox \"f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7c05aa63b88df200562fb641909eb57496d286d16942d69b0737ecdbfb54ade7\"" Jul 1 08:39:14.177188 containerd[1595]: time="2025-07-01T08:39:14.176983174Z" level=info msg="StartContainer for \"7c05aa63b88df200562fb641909eb57496d286d16942d69b0737ecdbfb54ade7\"" Jul 1 08:39:14.178485 containerd[1595]: time="2025-07-01T08:39:14.178433417Z" level=info msg="connecting to shim 7c05aa63b88df200562fb641909eb57496d286d16942d69b0737ecdbfb54ade7" address="unix:///run/containerd/s/fb123c595e0d99604641d183b3aabff81b94cfd9fc159d1abdeee317f3c49d68" protocol=ttrpc 
version=3 Jul 1 08:39:14.224087 systemd[1]: Started cri-containerd-7c05aa63b88df200562fb641909eb57496d286d16942d69b0737ecdbfb54ade7.scope - libcontainer container 7c05aa63b88df200562fb641909eb57496d286d16942d69b0737ecdbfb54ade7. Jul 1 08:39:14.384689 containerd[1595]: time="2025-07-01T08:39:14.384537475Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bdf97f33dfcc2902087a9ccb6c53140a9b99e4ec852ff22134e612bc5996ae71\" id:\"2adf1d0fc3421dc83451199550f2e6a974e64fdf9fd1adebc8c7c86f19ccbdca\" pid:5627 exit_status:1 exited_at:{seconds:1751359154 nanos:384054063}" Jul 1 08:39:14.394550 containerd[1595]: time="2025-07-01T08:39:14.394477955Z" level=info msg="StartContainer for \"7c05aa63b88df200562fb641909eb57496d286d16942d69b0737ecdbfb54ade7\" returns successfully" Jul 1 08:39:15.302103 containerd[1595]: time="2025-07-01T08:39:15.302040428Z" level=info msg="StopContainer for \"7c05aa63b88df200562fb641909eb57496d286d16942d69b0737ecdbfb54ade7\" with timeout 30 (s)" Jul 1 08:39:15.323037 containerd[1595]: time="2025-07-01T08:39:15.322988486Z" level=info msg="Stop container \"7c05aa63b88df200562fb641909eb57496d286d16942d69b0737ecdbfb54ade7\" with signal terminated" Jul 1 08:39:15.344079 systemd[1]: cri-containerd-7c05aa63b88df200562fb641909eb57496d286d16942d69b0737ecdbfb54ade7.scope: Deactivated successfully. 
Jul 1 08:39:15.345336 containerd[1595]: time="2025-07-01T08:39:15.345278179Z" level=info msg="received exit event container_id:\"7c05aa63b88df200562fb641909eb57496d286d16942d69b0737ecdbfb54ade7\" id:\"7c05aa63b88df200562fb641909eb57496d286d16942d69b0737ecdbfb54ade7\" pid:5598 exit_status:1 exited_at:{seconds:1751359155 nanos:344947649}" Jul 1 08:39:15.345590 containerd[1595]: time="2025-07-01T08:39:15.345556018Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7c05aa63b88df200562fb641909eb57496d286d16942d69b0737ecdbfb54ade7\" id:\"7c05aa63b88df200562fb641909eb57496d286d16942d69b0737ecdbfb54ade7\" pid:5598 exit_status:1 exited_at:{seconds:1751359155 nanos:344947649}" Jul 1 08:39:15.377902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c05aa63b88df200562fb641909eb57496d286d16942d69b0737ecdbfb54ade7-rootfs.mount: Deactivated successfully. Jul 1 08:39:15.380721 containerd[1595]: time="2025-07-01T08:39:15.380687800Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bdf97f33dfcc2902087a9ccb6c53140a9b99e4ec852ff22134e612bc5996ae71\" id:\"9c8aec69e3aae698e455633d6e61616823613455ae5e4ce7288d196f2ffefb7c\" pid:5661 exit_status:1 exited_at:{seconds:1751359155 nanos:380315782}" Jul 1 08:39:15.747595 kubelet[2724]: E0701 08:39:15.747419 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:39:15.975196 kubelet[2724]: I0701 08:39:15.975085 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-fg4hg" podStartSLOduration=47.961028137 podStartE2EDuration="1m7.975062031s" podCreationTimestamp="2025-07-01 08:38:08 +0000 UTC" firstStartedPulling="2025-07-01 08:38:51.860766877 +0000 UTC m=+63.193997658" lastFinishedPulling="2025-07-01 08:39:11.874800761 +0000 UTC m=+83.208031552" observedRunningTime="2025-07-01 08:39:14.401770379 +0000 UTC 
m=+85.735001150" watchObservedRunningTime="2025-07-01 08:39:15.975062031 +0000 UTC m=+87.308292813" Jul 1 08:39:15.980711 containerd[1595]: time="2025-07-01T08:39:15.980627762Z" level=info msg="StopContainer for \"7c05aa63b88df200562fb641909eb57496d286d16942d69b0737ecdbfb54ade7\" returns successfully" Jul 1 08:39:16.063852 containerd[1595]: time="2025-07-01T08:39:16.063775963Z" level=info msg="StopPodSandbox for \"f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d\"" Jul 1 08:39:16.077193 containerd[1595]: time="2025-07-01T08:39:16.077114027Z" level=info msg="Container to stop \"7c05aa63b88df200562fb641909eb57496d286d16942d69b0737ecdbfb54ade7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 1 08:39:16.086041 systemd[1]: cri-containerd-f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d.scope: Deactivated successfully. Jul 1 08:39:16.092405 containerd[1595]: time="2025-07-01T08:39:16.092365665Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d\" id:\"f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d\" pid:5156 exit_status:137 exited_at:{seconds:1751359156 nanos:91981333}" Jul 1 08:39:16.129232 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d-rootfs.mount: Deactivated successfully. 
Jul 1 08:39:16.491459 containerd[1595]: time="2025-07-01T08:39:16.490295386Z" level=info msg="received exit event sandbox_id:\"f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d\" exit_status:137 exited_at:{seconds:1751359156 nanos:91981333}" Jul 1 08:39:16.492302 containerd[1595]: time="2025-07-01T08:39:16.492045407Z" level=info msg="shim disconnected" id=f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d namespace=k8s.io Jul 1 08:39:16.492302 containerd[1595]: time="2025-07-01T08:39:16.492072248Z" level=warning msg="cleaning up after shim disconnected" id=f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d namespace=k8s.io Jul 1 08:39:16.494341 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d-shm.mount: Deactivated successfully. Jul 1 08:39:16.501315 containerd[1595]: time="2025-07-01T08:39:16.492082468Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 1 08:39:16.853486 systemd[1]: Started sshd@14-10.0.0.80:22-10.0.0.1:53814.service - OpenSSH per-connection server daemon (10.0.0.1:53814). Jul 1 08:39:17.092867 sshd[5744]: Accepted publickey for core from 10.0.0.1 port 53814 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:39:17.094919 sshd-session[5744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:39:17.100082 systemd-logind[1560]: New session 15 of user core. Jul 1 08:39:17.111981 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jul 1 08:39:17.292198 kubelet[2724]: I0701 08:39:17.292157 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Jul 1 08:39:17.731157 sshd[5751]: Connection closed by 10.0.0.1 port 53814 Jul 1 08:39:17.731546 sshd-session[5744]: pam_unix(sshd:session): session closed for user core Jul 1 08:39:17.738016 systemd[1]: sshd@14-10.0.0.80:22-10.0.0.1:53814.service: Deactivated successfully. Jul 1 08:39:17.740164 systemd[1]: session-15.scope: Deactivated successfully. Jul 1 08:39:17.741080 systemd-logind[1560]: Session 15 logged out. Waiting for processes to exit. Jul 1 08:39:17.742239 systemd-logind[1560]: Removed session 15. Jul 1 08:39:17.893731 kubelet[2724]: I0701 08:39:17.893230 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-785dd9b466-97bdw" podStartSLOduration=53.810433788 podStartE2EDuration="1m13.893208181s" podCreationTimestamp="2025-07-01 08:38:04 +0000 UTC" firstStartedPulling="2025-07-01 08:38:53.613556077 +0000 UTC m=+64.946786858" lastFinishedPulling="2025-07-01 08:39:13.69633047 +0000 UTC m=+85.029561251" observedRunningTime="2025-07-01 08:39:15.975263235 +0000 UTC m=+87.308494016" watchObservedRunningTime="2025-07-01 08:39:17.893208181 +0000 UTC m=+89.226438952" Jul 1 08:39:17.921184 systemd-networkd[1484]: cali4ad64b1ccf1: Link DOWN Jul 1 08:39:17.921195 systemd-networkd[1484]: cali4ad64b1ccf1: Lost carrier Jul 1 08:39:19.151593 containerd[1595]: 2025-07-01 08:39:17.893 [INFO][5742] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Jul 1 08:39:19.151593 containerd[1595]: 2025-07-01 08:39:17.919 [INFO][5742] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" iface="eth0" netns="/var/run/netns/cni-1e7abffe-974a-dfda-2442-ba4f2b2dc0f8" Jul 1 08:39:19.151593 containerd[1595]: 2025-07-01 08:39:17.920 [INFO][5742] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" iface="eth0" netns="/var/run/netns/cni-1e7abffe-974a-dfda-2442-ba4f2b2dc0f8" Jul 1 08:39:19.151593 containerd[1595]: 2025-07-01 08:39:17.940 [INFO][5742] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" after=20.35519ms iface="eth0" netns="/var/run/netns/cni-1e7abffe-974a-dfda-2442-ba4f2b2dc0f8" Jul 1 08:39:19.151593 containerd[1595]: 2025-07-01 08:39:17.940 [INFO][5742] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Jul 1 08:39:19.151593 containerd[1595]: 2025-07-01 08:39:17.940 [INFO][5742] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Jul 1 08:39:19.151593 containerd[1595]: 2025-07-01 08:39:17.963 [INFO][5776] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" HandleID="k8s-pod-network.f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Workload="localhost-k8s-calico--apiserver--785dd9b466--97bdw-eth0" Jul 1 08:39:19.151593 containerd[1595]: 2025-07-01 08:39:17.964 [INFO][5776] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:39:19.151593 containerd[1595]: 2025-07-01 08:39:17.964 [INFO][5776] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 1 08:39:19.151593 containerd[1595]: 2025-07-01 08:39:19.142 [INFO][5776] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" HandleID="k8s-pod-network.f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Workload="localhost-k8s-calico--apiserver--785dd9b466--97bdw-eth0" Jul 1 08:39:19.151593 containerd[1595]: 2025-07-01 08:39:19.143 [INFO][5776] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" HandleID="k8s-pod-network.f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Workload="localhost-k8s-calico--apiserver--785dd9b466--97bdw-eth0" Jul 1 08:39:19.151593 containerd[1595]: 2025-07-01 08:39:19.144 [INFO][5776] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:39:19.151593 containerd[1595]: 2025-07-01 08:39:19.147 [INFO][5742] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Jul 1 08:39:19.156089 systemd[1]: run-netns-cni\x2d1e7abffe\x2d974a\x2ddfda\x2d2442\x2dba4f2b2dc0f8.mount: Deactivated successfully. 
Jul 1 08:39:19.157410 containerd[1595]: time="2025-07-01T08:39:19.157365180Z" level=info msg="TearDown network for sandbox \"f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d\" successfully" Jul 1 08:39:19.157472 containerd[1595]: time="2025-07-01T08:39:19.157411688Z" level=info msg="StopPodSandbox for \"f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d\" returns successfully" Jul 1 08:39:19.327141 kubelet[2724]: I0701 08:39:19.327073 2724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g486f\" (UniqueName: \"kubernetes.io/projected/9c8054ba-10de-47da-9909-9fedeb482d2a-kube-api-access-g486f\") pod \"9c8054ba-10de-47da-9909-9fedeb482d2a\" (UID: \"9c8054ba-10de-47da-9909-9fedeb482d2a\") " Jul 1 08:39:19.327141 kubelet[2724]: I0701 08:39:19.327128 2724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9c8054ba-10de-47da-9909-9fedeb482d2a-calico-apiserver-certs\") pod \"9c8054ba-10de-47da-9909-9fedeb482d2a\" (UID: \"9c8054ba-10de-47da-9909-9fedeb482d2a\") " Jul 1 08:39:19.421331 kubelet[2724]: I0701 08:39:19.421183 2724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c8054ba-10de-47da-9909-9fedeb482d2a-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "9c8054ba-10de-47da-9909-9fedeb482d2a" (UID: "9c8054ba-10de-47da-9909-9fedeb482d2a"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 1 08:39:19.421456 kubelet[2724]: I0701 08:39:19.421373 2724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c8054ba-10de-47da-9909-9fedeb482d2a-kube-api-access-g486f" (OuterVolumeSpecName: "kube-api-access-g486f") pod "9c8054ba-10de-47da-9909-9fedeb482d2a" (UID: "9c8054ba-10de-47da-9909-9fedeb482d2a"). InnerVolumeSpecName "kube-api-access-g486f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 1 08:39:19.423320 systemd[1]: var-lib-kubelet-pods-9c8054ba\x2d10de\x2d47da\x2d9909\x2d9fedeb482d2a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg486f.mount: Deactivated successfully. Jul 1 08:39:19.423445 systemd[1]: var-lib-kubelet-pods-9c8054ba\x2d10de\x2d47da\x2d9909\x2d9fedeb482d2a-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jul 1 08:39:19.427711 kubelet[2724]: I0701 08:39:19.427651 2724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g486f\" (UniqueName: \"kubernetes.io/projected/9c8054ba-10de-47da-9909-9fedeb482d2a-kube-api-access-g486f\") on node \"localhost\" DevicePath \"\"" Jul 1 08:39:19.427711 kubelet[2724]: I0701 08:39:19.427711 2724 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9c8054ba-10de-47da-9909-9fedeb482d2a-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" Jul 1 08:39:19.603587 systemd[1]: Removed slice kubepods-besteffort-pod9c8054ba_10de_47da_9909_9fedeb482d2a.slice - libcontainer container kubepods-besteffort-pod9c8054ba_10de_47da_9909_9fedeb482d2a.slice. 
Jul 1 08:39:20.750456 kubelet[2724]: I0701 08:39:20.749893 2724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c8054ba-10de-47da-9909-9fedeb482d2a" path="/var/lib/kubelet/pods/9c8054ba-10de-47da-9909-9fedeb482d2a/volumes" Jul 1 08:39:21.492228 containerd[1595]: time="2025-07-01T08:39:21.492082862Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:39:21.520421 containerd[1595]: time="2025-07-01T08:39:21.520304583Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 1 08:39:21.536722 containerd[1595]: time="2025-07-01T08:39:21.536024040Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:39:21.587359 containerd[1595]: time="2025-07-01T08:39:21.587280233Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:39:21.588266 containerd[1595]: time="2025-07-01T08:39:21.588222034Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 7.891663489s" Jul 1 08:39:21.588330 containerd[1595]: time="2025-07-01T08:39:21.588272239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 1 08:39:21.589463 containerd[1595]: 
time="2025-07-01T08:39:21.589433987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 1 08:39:21.603911 containerd[1595]: time="2025-07-01T08:39:21.603861249Z" level=info msg="CreateContainer within sandbox \"3e1bfeaad2de3a3ae0dffb52fb182af334f65527b5b4063cd72c2d2da390b6ea\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 1 08:39:21.739859 containerd[1595]: time="2025-07-01T08:39:21.739794157Z" level=info msg="Container 661d633f7432ad3591677654c7bd63482a2a0c256e97261b140c28ec2a5c60b7: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:39:21.775810 containerd[1595]: time="2025-07-01T08:39:21.774789302Z" level=info msg="CreateContainer within sandbox \"3e1bfeaad2de3a3ae0dffb52fb182af334f65527b5b4063cd72c2d2da390b6ea\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"661d633f7432ad3591677654c7bd63482a2a0c256e97261b140c28ec2a5c60b7\"" Jul 1 08:39:21.777348 containerd[1595]: time="2025-07-01T08:39:21.777059427Z" level=info msg="StartContainer for \"661d633f7432ad3591677654c7bd63482a2a0c256e97261b140c28ec2a5c60b7\"" Jul 1 08:39:21.779373 containerd[1595]: time="2025-07-01T08:39:21.779314623Z" level=info msg="connecting to shim 661d633f7432ad3591677654c7bd63482a2a0c256e97261b140c28ec2a5c60b7" address="unix:///run/containerd/s/fff60e561d9d37020af05553c50e385d22094e47cb42bd4c15a266748273efe6" protocol=ttrpc version=3 Jul 1 08:39:21.816048 systemd[1]: Started cri-containerd-661d633f7432ad3591677654c7bd63482a2a0c256e97261b140c28ec2a5c60b7.scope - libcontainer container 661d633f7432ad3591677654c7bd63482a2a0c256e97261b140c28ec2a5c60b7. 
Jul 1 08:39:21.944639 containerd[1595]: time="2025-07-01T08:39:21.944575815Z" level=info msg="StartContainer for \"661d633f7432ad3591677654c7bd63482a2a0c256e97261b140c28ec2a5c60b7\" returns successfully" Jul 1 08:39:22.350845 containerd[1595]: time="2025-07-01T08:39:22.350743315Z" level=info msg="TaskExit event in podsandbox handler container_id:\"661d633f7432ad3591677654c7bd63482a2a0c256e97261b140c28ec2a5c60b7\" id:\"fdd44981d18c95502121f2b5d939126ee25b0d2204cec98a4af1e6f53b581996\" pid:5856 exit_status:1 exited_at:{seconds:1751359162 nanos:350304762}" Jul 1 08:39:22.415108 kubelet[2724]: I0701 08:39:22.414950 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6cc74d4c7f-4blk4" podStartSLOduration=47.046303227 podStartE2EDuration="1m14.414926711s" podCreationTimestamp="2025-07-01 08:38:08 +0000 UTC" firstStartedPulling="2025-07-01 08:38:54.220620282 +0000 UTC m=+65.553851063" lastFinishedPulling="2025-07-01 08:39:21.589243766 +0000 UTC m=+92.922474547" observedRunningTime="2025-07-01 08:39:22.414344736 +0000 UTC m=+93.747575517" watchObservedRunningTime="2025-07-01 08:39:22.414926711 +0000 UTC m=+93.748157492" Jul 1 08:39:22.747155 kubelet[2724]: E0701 08:39:22.746938 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:39:22.748161 systemd[1]: Started sshd@15-10.0.0.80:22-10.0.0.1:58830.service - OpenSSH per-connection server daemon (10.0.0.1:58830). Jul 1 08:39:22.824746 sshd[5872]: Accepted publickey for core from 10.0.0.1 port 58830 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:39:22.827514 sshd-session[5872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:39:22.832800 systemd-logind[1560]: New session 16 of user core. Jul 1 08:39:22.840020 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jul 1 08:39:23.174412 sshd[5875]: Connection closed by 10.0.0.1 port 58830 Jul 1 08:39:23.174864 sshd-session[5872]: pam_unix(sshd:session): session closed for user core Jul 1 08:39:23.179859 systemd[1]: sshd@15-10.0.0.80:22-10.0.0.1:58830.service: Deactivated successfully. Jul 1 08:39:23.182302 systemd[1]: session-16.scope: Deactivated successfully. Jul 1 08:39:23.183483 systemd-logind[1560]: Session 16 logged out. Waiting for processes to exit. Jul 1 08:39:23.185216 systemd-logind[1560]: Removed session 16. Jul 1 08:39:23.359316 containerd[1595]: time="2025-07-01T08:39:23.359245397Z" level=info msg="TaskExit event in podsandbox handler container_id:\"661d633f7432ad3591677654c7bd63482a2a0c256e97261b140c28ec2a5c60b7\" id:\"94ce2ef62b4d8dcc08dea64b55d88ac2df97af4078dd9399fb8813052e195dab\" pid:5901 exited_at:{seconds:1751359163 nanos:358838684}" Jul 1 08:39:24.579516 containerd[1595]: time="2025-07-01T08:39:24.579441491Z" level=info msg="TaskExit event in podsandbox handler container_id:\"661d633f7432ad3591677654c7bd63482a2a0c256e97261b140c28ec2a5c60b7\" id:\"f25e533497d965469fbfb623be62fb6bde04682af470b5c23494312fe05d52ad\" pid:5931 exited_at:{seconds:1751359164 nanos:579214892}" Jul 1 08:39:24.855451 containerd[1595]: time="2025-07-01T08:39:24.855267538Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bdf97f33dfcc2902087a9ccb6c53140a9b99e4ec852ff22134e612bc5996ae71\" id:\"f0e711c24ce4754d605ed094cfb4dd4602d05d8016e0d87e870c444e695ea5de\" pid:5942 exited_at:{seconds:1751359164 nanos:854913515}" Jul 1 08:39:26.846792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1644975378.mount: Deactivated successfully. 
Jul 1 08:39:27.374992 containerd[1595]: time="2025-07-01T08:39:27.374887403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:39:27.375855 containerd[1595]: time="2025-07-01T08:39:27.375804163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 1 08:39:27.377662 containerd[1595]: time="2025-07-01T08:39:27.377595201Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:39:27.385580 containerd[1595]: time="2025-07-01T08:39:27.385534265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 1 08:39:27.386301 containerd[1595]: time="2025-07-01T08:39:27.386256135Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 5.796789997s" Jul 1 08:39:27.386301 containerd[1595]: time="2025-07-01T08:39:27.386286923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 1 08:39:27.388430 containerd[1595]: time="2025-07-01T08:39:27.388387309Z" level=info msg="CreateContainer within sandbox \"590bcdc8a566ff9fe05387d9e53bfd00e8dc74084147fa6d06e222feeb345be4\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 1 08:39:27.419270 
containerd[1595]: time="2025-07-01T08:39:27.419203742Z" level=info msg="Container 1d18d1a9f4b488365dade4709ecc785768d4290451564adfc7e53a270d3d7e04: CDI devices from CRI Config.CDIDevices: []" Jul 1 08:39:27.430380 containerd[1595]: time="2025-07-01T08:39:27.430329502Z" level=info msg="CreateContainer within sandbox \"590bcdc8a566ff9fe05387d9e53bfd00e8dc74084147fa6d06e222feeb345be4\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1d18d1a9f4b488365dade4709ecc785768d4290451564adfc7e53a270d3d7e04\"" Jul 1 08:39:27.430972 containerd[1595]: time="2025-07-01T08:39:27.430934771Z" level=info msg="StartContainer for \"1d18d1a9f4b488365dade4709ecc785768d4290451564adfc7e53a270d3d7e04\"" Jul 1 08:39:27.432174 containerd[1595]: time="2025-07-01T08:39:27.432147873Z" level=info msg="connecting to shim 1d18d1a9f4b488365dade4709ecc785768d4290451564adfc7e53a270d3d7e04" address="unix:///run/containerd/s/1f9793803315eb0422aeebebd425dadb36907b454729483e1eb8381e4fa5e6b9" protocol=ttrpc version=3 Jul 1 08:39:27.468839 systemd[1]: Started cri-containerd-1d18d1a9f4b488365dade4709ecc785768d4290451564adfc7e53a270d3d7e04.scope - libcontainer container 1d18d1a9f4b488365dade4709ecc785768d4290451564adfc7e53a270d3d7e04. Jul 1 08:39:27.526744 containerd[1595]: time="2025-07-01T08:39:27.526694653Z" level=info msg="StartContainer for \"1d18d1a9f4b488365dade4709ecc785768d4290451564adfc7e53a270d3d7e04\" returns successfully" Jul 1 08:39:28.189049 systemd[1]: Started sshd@16-10.0.0.80:22-10.0.0.1:41610.service - OpenSSH per-connection server daemon (10.0.0.1:41610). Jul 1 08:39:28.292550 sshd[6005]: Accepted publickey for core from 10.0.0.1 port 41610 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:39:28.295076 sshd-session[6005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:39:28.301178 systemd-logind[1560]: New session 17 of user core. 
Jul 1 08:39:28.320650 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 1 08:39:28.511414 kubelet[2724]: I0701 08:39:28.511202 2724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7f7846d754-zvgtw" podStartSLOduration=3.889380319 podStartE2EDuration="48.511181062s" podCreationTimestamp="2025-07-01 08:38:40 +0000 UTC" firstStartedPulling="2025-07-01 08:38:42.765291569 +0000 UTC m=+54.098522350" lastFinishedPulling="2025-07-01 08:39:27.387092311 +0000 UTC m=+98.720323093" observedRunningTime="2025-07-01 08:39:28.510366256 +0000 UTC m=+99.843597027" watchObservedRunningTime="2025-07-01 08:39:28.511181062 +0000 UTC m=+99.844411843" Jul 1 08:39:28.609630 sshd[6008]: Connection closed by 10.0.0.1 port 41610 Jul 1 08:39:28.610050 sshd-session[6005]: pam_unix(sshd:session): session closed for user core Jul 1 08:39:28.615271 systemd[1]: sshd@16-10.0.0.80:22-10.0.0.1:41610.service: Deactivated successfully. Jul 1 08:39:28.617621 systemd[1]: session-17.scope: Deactivated successfully. Jul 1 08:39:28.618600 systemd-logind[1560]: Session 17 logged out. Waiting for processes to exit. Jul 1 08:39:28.620346 systemd-logind[1560]: Removed session 17. Jul 1 08:39:33.040736 update_engine[1564]: I20250701 08:39:33.040521 1564 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 1 08:39:33.040736 update_engine[1564]: I20250701 08:39:33.040721 1564 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 1 08:39:33.045576 update_engine[1564]: I20250701 08:39:33.041078 1564 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 1 08:39:33.142916 update_engine[1564]: I20250701 08:39:33.142854 1564 omaha_request_params.cc:62] Current group set to developer Jul 1 08:39:33.143152 update_engine[1564]: I20250701 08:39:33.143074 1564 update_attempter.cc:499] Already updated boot flags. Skipping. 
Jul 1 08:39:33.143152 update_engine[1564]: I20250701 08:39:33.143095 1564 update_attempter.cc:643] Scheduling an action processor start. Jul 1 08:39:33.143152 update_engine[1564]: I20250701 08:39:33.143121 1564 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 1 08:39:33.143391 update_engine[1564]: I20250701 08:39:33.143193 1564 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 1 08:39:33.143391 update_engine[1564]: I20250701 08:39:33.143277 1564 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 1 08:39:33.143391 update_engine[1564]: I20250701 08:39:33.143289 1564 omaha_request_action.cc:272] Request: Jul 1 08:39:33.143391 update_engine[1564]: Jul 1 08:39:33.143391 update_engine[1564]: Jul 1 08:39:33.143391 update_engine[1564]: Jul 1 08:39:33.143391 update_engine[1564]: Jul 1 08:39:33.143391 update_engine[1564]: Jul 1 08:39:33.143391 update_engine[1564]: Jul 1 08:39:33.143391 update_engine[1564]: Jul 1 08:39:33.143391 update_engine[1564]: Jul 1 08:39:33.143391 update_engine[1564]: I20250701 08:39:33.143298 1564 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 1 08:39:33.152308 update_engine[1564]: I20250701 08:39:33.152207 1564 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 1 08:39:33.153456 update_engine[1564]: I20250701 08:39:33.152721 1564 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 1 08:39:33.154930 locksmithd[1616]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 1 08:39:33.160027 update_engine[1564]: E20250701 08:39:33.159945 1564 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 1 08:39:33.160174 update_engine[1564]: I20250701 08:39:33.160087 1564 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 1 08:39:33.625529 systemd[1]: Started sshd@17-10.0.0.80:22-10.0.0.1:41612.service - OpenSSH per-connection server daemon (10.0.0.1:41612). Jul 1 08:39:33.693942 sshd[6024]: Accepted publickey for core from 10.0.0.1 port 41612 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:39:33.696059 sshd-session[6024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:39:33.701520 systemd-logind[1560]: New session 18 of user core. Jul 1 08:39:33.712043 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 1 08:39:33.870551 sshd[6027]: Connection closed by 10.0.0.1 port 41612 Jul 1 08:39:33.870922 sshd-session[6024]: pam_unix(sshd:session): session closed for user core Jul 1 08:39:33.875287 systemd[1]: sshd@17-10.0.0.80:22-10.0.0.1:41612.service: Deactivated successfully. Jul 1 08:39:33.877371 systemd[1]: session-18.scope: Deactivated successfully. Jul 1 08:39:33.878268 systemd-logind[1560]: Session 18 logged out. Waiting for processes to exit. Jul 1 08:39:33.879653 systemd-logind[1560]: Removed session 18. Jul 1 08:39:38.888196 systemd[1]: Started sshd@18-10.0.0.80:22-10.0.0.1:58288.service - OpenSSH per-connection server daemon (10.0.0.1:58288). 
Jul 1 08:39:38.959575 sshd[6043]: Accepted publickey for core from 10.0.0.1 port 58288 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:39:38.961594 sshd-session[6043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:39:38.967218 systemd-logind[1560]: New session 19 of user core. Jul 1 08:39:38.976952 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 1 08:39:39.322574 sshd[6046]: Connection closed by 10.0.0.1 port 58288 Jul 1 08:39:39.323325 sshd-session[6043]: pam_unix(sshd:session): session closed for user core Jul 1 08:39:39.333056 systemd[1]: sshd@18-10.0.0.80:22-10.0.0.1:58288.service: Deactivated successfully. Jul 1 08:39:39.335305 systemd[1]: session-19.scope: Deactivated successfully. Jul 1 08:39:39.336173 systemd-logind[1560]: Session 19 logged out. Waiting for processes to exit. Jul 1 08:39:39.339547 systemd[1]: Started sshd@19-10.0.0.80:22-10.0.0.1:58302.service - OpenSSH per-connection server daemon (10.0.0.1:58302). Jul 1 08:39:39.341450 systemd-logind[1560]: Removed session 19. Jul 1 08:39:39.403745 sshd[6060]: Accepted publickey for core from 10.0.0.1 port 58302 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:39:39.406236 sshd-session[6060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:39:39.412223 systemd-logind[1560]: New session 20 of user core. Jul 1 08:39:39.419921 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jul 1 08:39:40.279749 containerd[1595]: time="2025-07-01T08:39:40.279700411Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bdf97f33dfcc2902087a9ccb6c53140a9b99e4ec852ff22134e612bc5996ae71\" id:\"dc06f6c9a463423b761ba4b3955139cc51cf7cafd8e3bfe65b298d76f21e2f9b\" pid:6080 exited_at:{seconds:1751359180 nanos:279177321}" Jul 1 08:39:40.429312 sshd[6063]: Connection closed by 10.0.0.1 port 58302 Jul 1 08:39:40.430121 sshd-session[6060]: pam_unix(sshd:session): session closed for user core Jul 1 08:39:40.440659 systemd[1]: sshd@19-10.0.0.80:22-10.0.0.1:58302.service: Deactivated successfully. Jul 1 08:39:40.443132 systemd[1]: session-20.scope: Deactivated successfully. Jul 1 08:39:40.444025 systemd-logind[1560]: Session 20 logged out. Waiting for processes to exit. Jul 1 08:39:40.448523 systemd[1]: Started sshd@20-10.0.0.80:22-10.0.0.1:58314.service - OpenSSH per-connection server daemon (10.0.0.1:58314). Jul 1 08:39:40.450419 systemd-logind[1560]: Removed session 20. Jul 1 08:39:40.510968 sshd[6098]: Accepted publickey for core from 10.0.0.1 port 58314 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:39:40.513093 sshd-session[6098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:39:40.518664 systemd-logind[1560]: New session 21 of user core. Jul 1 08:39:40.525888 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jul 1 08:39:41.229960 containerd[1595]: time="2025-07-01T08:39:41.229889496Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8df2b53a63425f1e08b4d286b920cdd64c0dfd384ff7cd82453a0eac09d9f07f\" id:\"6bbceecac56ed7ee5d39d05437e8a5bc572536ccaee84ae806b99360563b8915\" pid:6122 exited_at:{seconds:1751359181 nanos:229571134}" Jul 1 08:39:43.013332 update_engine[1564]: I20250701 08:39:43.013221 1564 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 1 08:39:43.013907 update_engine[1564]: I20250701 08:39:43.013598 1564 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 1 08:39:43.015471 update_engine[1564]: I20250701 08:39:43.015417 1564 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 1 08:39:43.023707 update_engine[1564]: E20250701 08:39:43.023579 1564 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 1 08:39:43.023707 update_engine[1564]: I20250701 08:39:43.023702 1564 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 1 08:39:43.402807 sshd[6101]: Connection closed by 10.0.0.1 port 58314 Jul 1 08:39:43.404092 sshd-session[6098]: pam_unix(sshd:session): session closed for user core Jul 1 08:39:43.417382 containerd[1595]: time="2025-07-01T08:39:43.415966705Z" level=info msg="StopContainer for \"5962666e92882386500b4f151225fdc25d5a3a945ae36a8e99a485dff7b0cf3c\" with timeout 30 (s)" Jul 1 08:39:43.418884 systemd[1]: sshd@20-10.0.0.80:22-10.0.0.1:58314.service: Deactivated successfully. Jul 1 08:39:43.423468 systemd[1]: session-21.scope: Deactivated successfully. Jul 1 08:39:43.424122 systemd[1]: session-21.scope: Consumed 773ms CPU time, 76.9M memory peak. Jul 1 08:39:43.427268 systemd-logind[1560]: Session 21 logged out. Waiting for processes to exit. Jul 1 08:39:43.432953 systemd[1]: Started sshd@21-10.0.0.80:22-10.0.0.1:58328.service - OpenSSH per-connection server daemon (10.0.0.1:58328). 
Jul 1 08:39:43.441655 systemd-logind[1560]: Removed session 21. Jul 1 08:39:43.445240 containerd[1595]: time="2025-07-01T08:39:43.445187031Z" level=info msg="Stop container \"5962666e92882386500b4f151225fdc25d5a3a945ae36a8e99a485dff7b0cf3c\" with signal terminated" Jul 1 08:39:43.475685 systemd[1]: cri-containerd-5962666e92882386500b4f151225fdc25d5a3a945ae36a8e99a485dff7b0cf3c.scope: Deactivated successfully. Jul 1 08:39:43.476132 systemd[1]: cri-containerd-5962666e92882386500b4f151225fdc25d5a3a945ae36a8e99a485dff7b0cf3c.scope: Consumed 1.077s CPU time, 56.3M memory peak, 2.2M read from disk. Jul 1 08:39:43.501281 containerd[1595]: time="2025-07-01T08:39:43.501226511Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5962666e92882386500b4f151225fdc25d5a3a945ae36a8e99a485dff7b0cf3c\" id:\"5962666e92882386500b4f151225fdc25d5a3a945ae36a8e99a485dff7b0cf3c\" pid:5411 exit_status:1 exited_at:{seconds:1751359183 nanos:499312841}" Jul 1 08:39:43.501865 containerd[1595]: time="2025-07-01T08:39:43.501370804Z" level=info msg="received exit event container_id:\"5962666e92882386500b4f151225fdc25d5a3a945ae36a8e99a485dff7b0cf3c\" id:\"5962666e92882386500b4f151225fdc25d5a3a945ae36a8e99a485dff7b0cf3c\" pid:5411 exit_status:1 exited_at:{seconds:1751359183 nanos:499312841}" Jul 1 08:39:43.541124 sshd[6165]: Accepted publickey for core from 10.0.0.1 port 58328 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:39:43.544507 sshd-session[6165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:39:43.555144 systemd-logind[1560]: New session 22 of user core. Jul 1 08:39:43.563997 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 1 08:39:43.578242 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5962666e92882386500b4f151225fdc25d5a3a945ae36a8e99a485dff7b0cf3c-rootfs.mount: Deactivated successfully. 
Jul 1 08:39:44.366761 containerd[1595]: time="2025-07-01T08:39:44.366655994Z" level=info msg="StopContainer for \"5962666e92882386500b4f151225fdc25d5a3a945ae36a8e99a485dff7b0cf3c\" returns successfully" Jul 1 08:39:44.369193 containerd[1595]: time="2025-07-01T08:39:44.369144109Z" level=info msg="StopPodSandbox for \"6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7\"" Jul 1 08:39:44.393987 containerd[1595]: time="2025-07-01T08:39:44.393881549Z" level=info msg="Container to stop \"5962666e92882386500b4f151225fdc25d5a3a945ae36a8e99a485dff7b0cf3c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 1 08:39:44.452715 systemd[1]: cri-containerd-6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7.scope: Deactivated successfully. Jul 1 08:39:44.469529 containerd[1595]: time="2025-07-01T08:39:44.469480610Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7\" id:\"6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7\" pid:4841 exit_status:137 exited_at:{seconds:1751359184 nanos:468207223}" Jul 1 08:39:44.561970 containerd[1595]: time="2025-07-01T08:39:44.559736259Z" level=info msg="shim disconnected" id=6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7 namespace=k8s.io Jul 1 08:39:44.561970 containerd[1595]: time="2025-07-01T08:39:44.559812172Z" level=warning msg="cleaning up after shim disconnected" id=6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7 namespace=k8s.io Jul 1 08:39:44.561598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7-rootfs.mount: Deactivated successfully. Jul 1 08:39:44.570194 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7-shm.mount: Deactivated successfully. 
Jul 1 08:39:44.634817 containerd[1595]: time="2025-07-01T08:39:44.559824716Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 1 08:39:44.634817 containerd[1595]: time="2025-07-01T08:39:44.559772998Z" level=info msg="received exit event sandbox_id:\"6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7\" exit_status:137 exited_at:{seconds:1751359184 nanos:468207223}" Jul 1 08:39:45.342261 systemd-networkd[1484]: cali47819b53430: Link DOWN Jul 1 08:39:45.342273 systemd-networkd[1484]: cali47819b53430: Lost carrier Jul 1 08:39:45.393141 kubelet[2724]: I0701 08:39:45.393041 2724 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Jul 1 08:39:45.407106 sshd[6187]: Connection closed by 10.0.0.1 port 58328 Jul 1 08:39:45.406007 sshd-session[6165]: pam_unix(sshd:session): session closed for user core Jul 1 08:39:45.419955 systemd[1]: sshd@21-10.0.0.80:22-10.0.0.1:58328.service: Deactivated successfully. Jul 1 08:39:45.424423 systemd[1]: session-22.scope: Deactivated successfully. Jul 1 08:39:45.425502 systemd-logind[1560]: Session 22 logged out. Waiting for processes to exit. Jul 1 08:39:45.433037 systemd[1]: Started sshd@22-10.0.0.80:22-10.0.0.1:58340.service - OpenSSH per-connection server daemon (10.0.0.1:58340). Jul 1 08:39:45.435026 systemd-logind[1560]: Removed session 22. Jul 1 08:39:45.504824 sshd[6268]: Accepted publickey for core from 10.0.0.1 port 58340 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:39:45.507102 sshd-session[6268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:39:45.516466 systemd-logind[1560]: New session 23 of user core. Jul 1 08:39:45.525717 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 1 08:39:45.683852 containerd[1595]: 2025-07-01 08:39:45.338 [INFO][6247] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Jul 1 08:39:45.683852 containerd[1595]: 2025-07-01 08:39:45.340 [INFO][6247] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" iface="eth0" netns="/var/run/netns/cni-a8477249-bff0-91fd-9b46-68994411245f" Jul 1 08:39:45.683852 containerd[1595]: 2025-07-01 08:39:45.340 [INFO][6247] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" iface="eth0" netns="/var/run/netns/cni-a8477249-bff0-91fd-9b46-68994411245f" Jul 1 08:39:45.683852 containerd[1595]: 2025-07-01 08:39:45.353 [INFO][6247] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" after=12.572912ms iface="eth0" netns="/var/run/netns/cni-a8477249-bff0-91fd-9b46-68994411245f" Jul 1 08:39:45.683852 containerd[1595]: 2025-07-01 08:39:45.353 [INFO][6247] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Jul 1 08:39:45.683852 containerd[1595]: 2025-07-01 08:39:45.353 [INFO][6247] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Jul 1 08:39:45.683852 containerd[1595]: 2025-07-01 08:39:45.606 [INFO][6261] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" HandleID="k8s-pod-network.6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Workload="localhost-k8s-calico--apiserver--785dd9b466--gfqj5-eth0" Jul 1 08:39:45.683852 containerd[1595]: 2025-07-01 08:39:45.606 [INFO][6261] ipam/ipam_plugin.go 
353: About to acquire host-wide IPAM lock. Jul 1 08:39:45.683852 containerd[1595]: 2025-07-01 08:39:45.606 [INFO][6261] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 1 08:39:45.683852 containerd[1595]: 2025-07-01 08:39:45.668 [INFO][6261] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" HandleID="k8s-pod-network.6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Workload="localhost-k8s-calico--apiserver--785dd9b466--gfqj5-eth0" Jul 1 08:39:45.683852 containerd[1595]: 2025-07-01 08:39:45.669 [INFO][6261] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" HandleID="k8s-pod-network.6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Workload="localhost-k8s-calico--apiserver--785dd9b466--gfqj5-eth0" Jul 1 08:39:45.683852 containerd[1595]: 2025-07-01 08:39:45.671 [INFO][6261] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:39:45.683852 containerd[1595]: 2025-07-01 08:39:45.678 [INFO][6247] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Jul 1 08:39:45.699341 sshd[6275]: Connection closed by 10.0.0.1 port 58340 Jul 1 08:39:45.699035 sshd-session[6268]: pam_unix(sshd:session): session closed for user core Jul 1 08:39:45.705480 containerd[1595]: time="2025-07-01T08:39:45.705280111Z" level=info msg="TearDown network for sandbox \"6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7\" successfully" Jul 1 08:39:45.705480 containerd[1595]: time="2025-07-01T08:39:45.705339664Z" level=info msg="StopPodSandbox for \"6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7\" returns successfully" Jul 1 08:39:45.707087 systemd[1]: run-netns-cni\x2da8477249\x2dbff0\x2d91fd\x2d9b46\x2d68994411245f.mount: Deactivated successfully. 
Jul 1 08:39:45.709535 systemd[1]: sshd@22-10.0.0.80:22-10.0.0.1:58340.service: Deactivated successfully. Jul 1 08:39:45.715009 systemd[1]: session-23.scope: Deactivated successfully. Jul 1 08:39:45.720016 systemd-logind[1560]: Session 23 logged out. Waiting for processes to exit. Jul 1 08:39:45.721909 systemd-logind[1560]: Removed session 23. Jul 1 08:39:45.944831 kubelet[2724]: I0701 08:39:45.944473 2724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/956b37b9-1ba9-40e9-be7f-b28196b02c8c-calico-apiserver-certs\") pod \"956b37b9-1ba9-40e9-be7f-b28196b02c8c\" (UID: \"956b37b9-1ba9-40e9-be7f-b28196b02c8c\") " Jul 1 08:39:45.944831 kubelet[2724]: I0701 08:39:45.944574 2724 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9crpm\" (UniqueName: \"kubernetes.io/projected/956b37b9-1ba9-40e9-be7f-b28196b02c8c-kube-api-access-9crpm\") pod \"956b37b9-1ba9-40e9-be7f-b28196b02c8c\" (UID: \"956b37b9-1ba9-40e9-be7f-b28196b02c8c\") " Jul 1 08:39:45.958509 systemd[1]: var-lib-kubelet-pods-956b37b9\x2d1ba9\x2d40e9\x2dbe7f\x2db28196b02c8c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9crpm.mount: Deactivated successfully. Jul 1 08:39:45.961411 kubelet[2724]: I0701 08:39:45.961298 2724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/956b37b9-1ba9-40e9-be7f-b28196b02c8c-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "956b37b9-1ba9-40e9-be7f-b28196b02c8c" (UID: "956b37b9-1ba9-40e9-be7f-b28196b02c8c"). InnerVolumeSpecName "calico-apiserver-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 1 08:39:45.961734 kubelet[2724]: I0701 08:39:45.961660 2724 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/956b37b9-1ba9-40e9-be7f-b28196b02c8c-kube-api-access-9crpm" (OuterVolumeSpecName: "kube-api-access-9crpm") pod "956b37b9-1ba9-40e9-be7f-b28196b02c8c" (UID: "956b37b9-1ba9-40e9-be7f-b28196b02c8c"). InnerVolumeSpecName "kube-api-access-9crpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 1 08:39:45.966346 systemd[1]: var-lib-kubelet-pods-956b37b9\x2d1ba9\x2d40e9\x2dbe7f\x2db28196b02c8c-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jul 1 08:39:46.045786 kubelet[2724]: I0701 08:39:46.045661 2724 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9crpm\" (UniqueName: \"kubernetes.io/projected/956b37b9-1ba9-40e9-be7f-b28196b02c8c-kube-api-access-9crpm\") on node \"localhost\" DevicePath \"\"" Jul 1 08:39:46.045786 kubelet[2724]: I0701 08:39:46.045737 2724 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/956b37b9-1ba9-40e9-be7f-b28196b02c8c-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" Jul 1 08:39:46.406834 systemd[1]: Removed slice kubepods-besteffort-pod956b37b9_1ba9_40e9_be7f_b28196b02c8c.slice - libcontainer container kubepods-besteffort-pod956b37b9_1ba9_40e9_be7f_b28196b02c8c.slice. Jul 1 08:39:46.407727 systemd[1]: kubepods-besteffort-pod956b37b9_1ba9_40e9_be7f_b28196b02c8c.slice: Consumed 1.113s CPU time, 56.5M memory peak, 2.2M read from disk. 
Jul 1 08:39:46.750261 kubelet[2724]: I0701 08:39:46.749785 2724 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="956b37b9-1ba9-40e9-be7f-b28196b02c8c" path="/var/lib/kubelet/pods/956b37b9-1ba9-40e9-be7f-b28196b02c8c/volumes" Jul 1 08:39:48.751204 kubelet[2724]: I0701 08:39:48.751152 2724 scope.go:117] "RemoveContainer" containerID="5962666e92882386500b4f151225fdc25d5a3a945ae36a8e99a485dff7b0cf3c" Jul 1 08:39:48.853668 containerd[1595]: time="2025-07-01T08:39:48.853608460Z" level=info msg="RemoveContainer for \"5962666e92882386500b4f151225fdc25d5a3a945ae36a8e99a485dff7b0cf3c\"" Jul 1 08:39:48.890045 containerd[1595]: time="2025-07-01T08:39:48.889981615Z" level=info msg="RemoveContainer for \"5962666e92882386500b4f151225fdc25d5a3a945ae36a8e99a485dff7b0cf3c\" returns successfully" Jul 1 08:39:48.890714 kubelet[2724]: I0701 08:39:48.890656 2724 scope.go:117] "RemoveContainer" containerID="7c05aa63b88df200562fb641909eb57496d286d16942d69b0737ecdbfb54ade7" Jul 1 08:39:48.893034 containerd[1595]: time="2025-07-01T08:39:48.892983188Z" level=info msg="RemoveContainer for \"7c05aa63b88df200562fb641909eb57496d286d16942d69b0737ecdbfb54ade7\"" Jul 1 08:39:48.905292 containerd[1595]: time="2025-07-01T08:39:48.905126458Z" level=info msg="RemoveContainer for \"7c05aa63b88df200562fb641909eb57496d286d16942d69b0737ecdbfb54ade7\" returns successfully" Jul 1 08:39:48.907779 containerd[1595]: time="2025-07-01T08:39:48.907713758Z" level=info msg="StopPodSandbox for \"6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7\"" Jul 1 08:39:49.027327 containerd[1595]: 2025-07-01 08:39:48.965 [WARNING][6306] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" WorkloadEndpoint="localhost-k8s-calico--apiserver--785dd9b466--gfqj5-eth0" Jul 1 08:39:49.027327 containerd[1595]: 2025-07-01 08:39:48.965 [INFO][6306] cni-plugin/k8s.go 640: 
Cleaning up netns ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Jul 1 08:39:49.027327 containerd[1595]: 2025-07-01 08:39:48.965 [INFO][6306] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" iface="eth0" netns="" Jul 1 08:39:49.027327 containerd[1595]: 2025-07-01 08:39:48.965 [INFO][6306] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Jul 1 08:39:49.027327 containerd[1595]: 2025-07-01 08:39:48.965 [INFO][6306] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Jul 1 08:39:49.027327 containerd[1595]: 2025-07-01 08:39:49.008 [INFO][6315] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" HandleID="k8s-pod-network.6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Workload="localhost-k8s-calico--apiserver--785dd9b466--gfqj5-eth0" Jul 1 08:39:49.027327 containerd[1595]: 2025-07-01 08:39:49.008 [INFO][6315] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:39:49.027327 containerd[1595]: 2025-07-01 08:39:49.009 [INFO][6315] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 1 08:39:49.027327 containerd[1595]: 2025-07-01 08:39:49.016 [WARNING][6315] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" HandleID="k8s-pod-network.6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Workload="localhost-k8s-calico--apiserver--785dd9b466--gfqj5-eth0" Jul 1 08:39:49.027327 containerd[1595]: 2025-07-01 08:39:49.017 [INFO][6315] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" HandleID="k8s-pod-network.6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Workload="localhost-k8s-calico--apiserver--785dd9b466--gfqj5-eth0" Jul 1 08:39:49.027327 containerd[1595]: 2025-07-01 08:39:49.019 [INFO][6315] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:39:49.027327 containerd[1595]: 2025-07-01 08:39:49.023 [INFO][6306] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Jul 1 08:39:49.027829 containerd[1595]: time="2025-07-01T08:39:49.027783080Z" level=info msg="TearDown network for sandbox \"6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7\" successfully" Jul 1 08:39:49.027829 containerd[1595]: time="2025-07-01T08:39:49.027815051Z" level=info msg="StopPodSandbox for \"6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7\" returns successfully" Jul 1 08:39:49.028930 containerd[1595]: time="2025-07-01T08:39:49.028622266Z" level=info msg="RemovePodSandbox for \"6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7\"" Jul 1 08:39:49.028930 containerd[1595]: time="2025-07-01T08:39:49.028661430Z" level=info msg="Forcibly stopping sandbox \"6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7\"" Jul 1 08:39:49.126557 containerd[1595]: 2025-07-01 08:39:49.074 [WARNING][6331] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" WorkloadEndpoint="localhost-k8s-calico--apiserver--785dd9b466--gfqj5-eth0" Jul 1 08:39:49.126557 containerd[1595]: 2025-07-01 08:39:49.074 [INFO][6331] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Jul 1 08:39:49.126557 containerd[1595]: 2025-07-01 08:39:49.074 [INFO][6331] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" iface="eth0" netns="" Jul 1 08:39:49.126557 containerd[1595]: 2025-07-01 08:39:49.074 [INFO][6331] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Jul 1 08:39:49.126557 containerd[1595]: 2025-07-01 08:39:49.074 [INFO][6331] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Jul 1 08:39:49.126557 containerd[1595]: 2025-07-01 08:39:49.104 [INFO][6340] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" HandleID="k8s-pod-network.6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Workload="localhost-k8s-calico--apiserver--785dd9b466--gfqj5-eth0" Jul 1 08:39:49.126557 containerd[1595]: 2025-07-01 08:39:49.104 [INFO][6340] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:39:49.126557 containerd[1595]: 2025-07-01 08:39:49.104 [INFO][6340] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 1 08:39:49.126557 containerd[1595]: 2025-07-01 08:39:49.111 [WARNING][6340] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" HandleID="k8s-pod-network.6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Workload="localhost-k8s-calico--apiserver--785dd9b466--gfqj5-eth0" Jul 1 08:39:49.126557 containerd[1595]: 2025-07-01 08:39:49.111 [INFO][6340] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" HandleID="k8s-pod-network.6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Workload="localhost-k8s-calico--apiserver--785dd9b466--gfqj5-eth0" Jul 1 08:39:49.126557 containerd[1595]: 2025-07-01 08:39:49.112 [INFO][6340] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:39:49.126557 containerd[1595]: 2025-07-01 08:39:49.118 [INFO][6331] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7" Jul 1 08:39:49.126557 containerd[1595]: time="2025-07-01T08:39:49.125134859Z" level=info msg="TearDown network for sandbox \"6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7\" successfully" Jul 1 08:39:49.147831 containerd[1595]: time="2025-07-01T08:39:49.147757219Z" level=info msg="Ensure that sandbox 6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7 in task-service has been cleanup successfully" Jul 1 08:39:49.154034 containerd[1595]: time="2025-07-01T08:39:49.153963217Z" level=info msg="RemovePodSandbox \"6262c3cce28bbb4b318f4dcaeb371ee9c9e18eb7d48b122de6914e894cf99ec7\" returns successfully" Jul 1 08:39:49.154864 containerd[1595]: time="2025-07-01T08:39:49.154830445Z" level=info msg="StopPodSandbox for \"f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d\"" Jul 1 08:39:49.272781 containerd[1595]: time="2025-07-01T08:39:49.272536950Z" level=info msg="TaskExit event in podsandbox handler container_id:\"661d633f7432ad3591677654c7bd63482a2a0c256e97261b140c28ec2a5c60b7\" 
id:\"8650f2fb39b1120b75607fbee9b5d420297658e20386474fc183b5d1fdc54daa\" pid:6376 exited_at:{seconds:1751359189 nanos:272127376}" Jul 1 08:39:49.287463 containerd[1595]: 2025-07-01 08:39:49.238 [WARNING][6358] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" WorkloadEndpoint="localhost-k8s-calico--apiserver--785dd9b466--97bdw-eth0" Jul 1 08:39:49.287463 containerd[1595]: 2025-07-01 08:39:49.239 [INFO][6358] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Jul 1 08:39:49.287463 containerd[1595]: 2025-07-01 08:39:49.239 [INFO][6358] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" iface="eth0" netns="" Jul 1 08:39:49.287463 containerd[1595]: 2025-07-01 08:39:49.239 [INFO][6358] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Jul 1 08:39:49.287463 containerd[1595]: 2025-07-01 08:39:49.239 [INFO][6358] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Jul 1 08:39:49.287463 containerd[1595]: 2025-07-01 08:39:49.265 [INFO][6384] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" HandleID="k8s-pod-network.f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Workload="localhost-k8s-calico--apiserver--785dd9b466--97bdw-eth0" Jul 1 08:39:49.287463 containerd[1595]: 2025-07-01 08:39:49.265 [INFO][6384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:39:49.287463 containerd[1595]: 2025-07-01 08:39:49.265 [INFO][6384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 1 08:39:49.287463 containerd[1595]: 2025-07-01 08:39:49.273 [WARNING][6384] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" HandleID="k8s-pod-network.f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Workload="localhost-k8s-calico--apiserver--785dd9b466--97bdw-eth0" Jul 1 08:39:49.287463 containerd[1595]: 2025-07-01 08:39:49.273 [INFO][6384] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" HandleID="k8s-pod-network.f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Workload="localhost-k8s-calico--apiserver--785dd9b466--97bdw-eth0" Jul 1 08:39:49.287463 containerd[1595]: 2025-07-01 08:39:49.276 [INFO][6384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:39:49.287463 containerd[1595]: 2025-07-01 08:39:49.283 [INFO][6358] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Jul 1 08:39:49.287463 containerd[1595]: time="2025-07-01T08:39:49.286899781Z" level=info msg="TearDown network for sandbox \"f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d\" successfully" Jul 1 08:39:49.287463 containerd[1595]: time="2025-07-01T08:39:49.286939616Z" level=info msg="StopPodSandbox for \"f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d\" returns successfully" Jul 1 08:39:49.288098 containerd[1595]: time="2025-07-01T08:39:49.287898628Z" level=info msg="RemovePodSandbox for \"f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d\"" Jul 1 08:39:49.288098 containerd[1595]: time="2025-07-01T08:39:49.287952420Z" level=info msg="Forcibly stopping sandbox \"f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d\"" Jul 1 08:39:49.374543 containerd[1595]: 2025-07-01 08:39:49.327 [WARNING][6407] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" WorkloadEndpoint="localhost-k8s-calico--apiserver--785dd9b466--97bdw-eth0" Jul 1 08:39:49.374543 containerd[1595]: 2025-07-01 08:39:49.327 [INFO][6407] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Jul 1 08:39:49.374543 containerd[1595]: 2025-07-01 08:39:49.327 [INFO][6407] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" iface="eth0" netns="" Jul 1 08:39:49.374543 containerd[1595]: 2025-07-01 08:39:49.327 [INFO][6407] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Jul 1 08:39:49.374543 containerd[1595]: 2025-07-01 08:39:49.327 [INFO][6407] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Jul 1 08:39:49.374543 containerd[1595]: 2025-07-01 08:39:49.356 [INFO][6416] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" HandleID="k8s-pod-network.f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Workload="localhost-k8s-calico--apiserver--785dd9b466--97bdw-eth0" Jul 1 08:39:49.374543 containerd[1595]: 2025-07-01 08:39:49.356 [INFO][6416] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 1 08:39:49.374543 containerd[1595]: 2025-07-01 08:39:49.357 [INFO][6416] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 1 08:39:49.374543 containerd[1595]: 2025-07-01 08:39:49.364 [WARNING][6416] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" HandleID="k8s-pod-network.f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Workload="localhost-k8s-calico--apiserver--785dd9b466--97bdw-eth0" Jul 1 08:39:49.374543 containerd[1595]: 2025-07-01 08:39:49.365 [INFO][6416] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" HandleID="k8s-pod-network.f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Workload="localhost-k8s-calico--apiserver--785dd9b466--97bdw-eth0" Jul 1 08:39:49.374543 containerd[1595]: 2025-07-01 08:39:49.366 [INFO][6416] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 1 08:39:49.374543 containerd[1595]: 2025-07-01 08:39:49.369 [INFO][6407] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d" Jul 1 08:39:49.375370 containerd[1595]: time="2025-07-01T08:39:49.374589842Z" level=info msg="TearDown network for sandbox \"f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d\" successfully" Jul 1 08:39:49.377964 containerd[1595]: time="2025-07-01T08:39:49.377895830Z" level=info msg="Ensure that sandbox f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d in task-service has been cleanup successfully" Jul 1 08:39:49.530204 containerd[1595]: time="2025-07-01T08:39:49.530147093Z" level=info msg="RemovePodSandbox \"f51e9e579cf3fe749c88af9e0f71568f2d1e1c023f6f9446424bc6ca4ec1de3d\" returns successfully" Jul 1 08:39:50.717039 systemd[1]: Started sshd@23-10.0.0.80:22-10.0.0.1:60026.service - OpenSSH per-connection server daemon (10.0.0.1:60026). 
Jul 1 08:39:50.777795 sshd[6426]: Accepted publickey for core from 10.0.0.1 port 60026 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:39:50.781444 sshd-session[6426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:39:50.790966 systemd-logind[1560]: New session 24 of user core. Jul 1 08:39:50.794014 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 1 08:39:51.214904 sshd[6429]: Connection closed by 10.0.0.1 port 60026 Jul 1 08:39:51.217000 sshd-session[6426]: pam_unix(sshd:session): session closed for user core Jul 1 08:39:51.222595 systemd[1]: sshd@23-10.0.0.80:22-10.0.0.1:60026.service: Deactivated successfully. Jul 1 08:39:51.226657 systemd[1]: session-24.scope: Deactivated successfully. Jul 1 08:39:51.231600 systemd-logind[1560]: Session 24 logged out. Waiting for processes to exit. Jul 1 08:39:51.234708 systemd-logind[1560]: Removed session 24. Jul 1 08:39:53.012981 update_engine[1564]: I20250701 08:39:53.012856 1564 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 1 08:39:53.013643 update_engine[1564]: I20250701 08:39:53.013249 1564 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 1 08:39:53.013643 update_engine[1564]: I20250701 08:39:53.013623 1564 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 1 08:39:53.022719 update_engine[1564]: E20250701 08:39:53.022583 1564 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 1 08:39:53.022719 update_engine[1564]: I20250701 08:39:53.022706 1564 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 1 08:39:54.605508 containerd[1595]: time="2025-07-01T08:39:54.605438432Z" level=info msg="TaskExit event in podsandbox handler container_id:\"661d633f7432ad3591677654c7bd63482a2a0c256e97261b140c28ec2a5c60b7\" id:\"958d0ad3c75d3339403be08a06f22097aba09dc7cf1863256bc3eb12bbc5af67\" pid:6466 exited_at:{seconds:1751359194 nanos:604267901}" Jul 1 08:39:54.667165 containerd[1595]: time="2025-07-01T08:39:54.667087493Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bdf97f33dfcc2902087a9ccb6c53140a9b99e4ec852ff22134e612bc5996ae71\" id:\"81ab05934a70518e2627a65a06d316f732ad7ebeb6fc5fda3dad7e740dc54161\" pid:6476 exited_at:{seconds:1751359194 nanos:666622494}" Jul 1 08:39:56.228280 systemd[1]: Started sshd@24-10.0.0.80:22-10.0.0.1:60032.service - OpenSSH per-connection server daemon (10.0.0.1:60032). Jul 1 08:39:56.312331 sshd[6494]: Accepted publickey for core from 10.0.0.1 port 60032 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:39:56.314996 sshd-session[6494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:39:56.320173 systemd-logind[1560]: New session 25 of user core. Jul 1 08:39:56.331928 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 1 08:39:56.500389 sshd[6497]: Connection closed by 10.0.0.1 port 60032 Jul 1 08:39:56.501056 sshd-session[6494]: pam_unix(sshd:session): session closed for user core Jul 1 08:39:56.507909 systemd[1]: sshd@24-10.0.0.80:22-10.0.0.1:60032.service: Deactivated successfully. Jul 1 08:39:56.512430 systemd[1]: session-25.scope: Deactivated successfully. Jul 1 08:39:56.514545 systemd-logind[1560]: Session 25 logged out. 
Waiting for processes to exit. Jul 1 08:39:56.516584 systemd-logind[1560]: Removed session 25. Jul 1 08:39:58.753665 kubelet[2724]: E0701 08:39:58.753605 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:40:01.518940 systemd[1]: Started sshd@25-10.0.0.80:22-10.0.0.1:41450.service - OpenSSH per-connection server daemon (10.0.0.1:41450). Jul 1 08:40:01.579107 sshd[6512]: Accepted publickey for core from 10.0.0.1 port 41450 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:40:01.580756 sshd-session[6512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:40:01.585632 systemd-logind[1560]: New session 26 of user core. Jul 1 08:40:01.591971 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 1 08:40:01.720017 sshd[6515]: Connection closed by 10.0.0.1 port 41450 Jul 1 08:40:01.720391 sshd-session[6512]: pam_unix(sshd:session): session closed for user core Jul 1 08:40:01.725056 systemd[1]: sshd@25-10.0.0.80:22-10.0.0.1:41450.service: Deactivated successfully. Jul 1 08:40:01.727426 systemd[1]: session-26.scope: Deactivated successfully. Jul 1 08:40:01.728452 systemd-logind[1560]: Session 26 logged out. Waiting for processes to exit. Jul 1 08:40:01.730357 systemd-logind[1560]: Removed session 26. Jul 1 08:40:03.012138 update_engine[1564]: I20250701 08:40:03.011998 1564 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 1 08:40:03.012575 update_engine[1564]: I20250701 08:40:03.012387 1564 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 1 08:40:03.012803 update_engine[1564]: I20250701 08:40:03.012765 1564 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 1 08:40:03.021693 update_engine[1564]: E20250701 08:40:03.021620 1564 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 1 08:40:03.021777 update_engine[1564]: I20250701 08:40:03.021723 1564 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 1 08:40:03.021777 update_engine[1564]: I20250701 08:40:03.021737 1564 omaha_request_action.cc:617] Omaha request response: Jul 1 08:40:03.022024 update_engine[1564]: E20250701 08:40:03.021981 1564 omaha_request_action.cc:636] Omaha request network transfer failed. Jul 1 08:40:03.028801 update_engine[1564]: I20250701 08:40:03.023135 1564 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 1 08:40:03.028801 update_engine[1564]: I20250701 08:40:03.023162 1564 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 1 08:40:03.028801 update_engine[1564]: I20250701 08:40:03.023170 1564 update_attempter.cc:306] Processing Done. Jul 1 08:40:03.028801 update_engine[1564]: E20250701 08:40:03.023192 1564 update_attempter.cc:619] Update failed. Jul 1 08:40:03.028801 update_engine[1564]: I20250701 08:40:03.023201 1564 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 1 08:40:03.028801 update_engine[1564]: I20250701 08:40:03.023209 1564 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 1 08:40:03.028801 update_engine[1564]: I20250701 08:40:03.023217 1564 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jul 1 08:40:03.028801 update_engine[1564]: I20250701 08:40:03.023304 1564 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 1 08:40:03.028801 update_engine[1564]: I20250701 08:40:03.023336 1564 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 1 08:40:03.028801 update_engine[1564]: I20250701 08:40:03.023345 1564 omaha_request_action.cc:272] Request: Jul 1 08:40:03.028801 update_engine[1564]: Jul 1 08:40:03.028801 update_engine[1564]: Jul 1 08:40:03.028801 update_engine[1564]: Jul 1 08:40:03.028801 update_engine[1564]: Jul 1 08:40:03.028801 update_engine[1564]: Jul 1 08:40:03.028801 update_engine[1564]: Jul 1 08:40:03.028801 update_engine[1564]: I20250701 08:40:03.023355 1564 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 1 08:40:03.028801 update_engine[1564]: I20250701 08:40:03.023566 1564 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 1 08:40:03.030043 locksmithd[1616]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 1 08:40:03.030451 update_engine[1564]: I20250701 08:40:03.023838 1564 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 1 08:40:03.031919 update_engine[1564]: E20250701 08:40:03.031858 1564 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 1 08:40:03.031973 update_engine[1564]: I20250701 08:40:03.031911 1564 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 1 08:40:03.031973 update_engine[1564]: I20250701 08:40:03.031929 1564 omaha_request_action.cc:617] Omaha request response: Jul 1 08:40:03.031973 update_engine[1564]: I20250701 08:40:03.031937 1564 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 1 08:40:03.031973 update_engine[1564]: I20250701 08:40:03.031945 1564 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 1 08:40:03.031973 update_engine[1564]: I20250701 08:40:03.031952 1564 update_attempter.cc:306] Processing Done. Jul 1 08:40:03.031973 update_engine[1564]: I20250701 08:40:03.031960 1564 update_attempter.cc:310] Error event sent. Jul 1 08:40:03.032153 update_engine[1564]: I20250701 08:40:03.031972 1564 update_check_scheduler.cc:74] Next update check in 42m45s Jul 1 08:40:03.032420 locksmithd[1616]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 1 08:40:03.747276 kubelet[2724]: E0701 08:40:03.747184 2724 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 1 08:40:06.744156 systemd[1]: Started sshd@26-10.0.0.80:22-10.0.0.1:41454.service - OpenSSH per-connection server daemon (10.0.0.1:41454). 
Jul 1 08:40:06.836659 sshd[6537]: Accepted publickey for core from 10.0.0.1 port 41454 ssh2: RSA SHA256:XQjRbOFuvQ+dXndg2ZC3zFS5aA75tgsyQv+SWXAK9tg Jul 1 08:40:06.839049 sshd-session[6537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 1 08:40:06.848209 systemd-logind[1560]: New session 27 of user core. Jul 1 08:40:06.854055 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 1 08:40:07.034789 sshd[6541]: Connection closed by 10.0.0.1 port 41454 Jul 1 08:40:07.035586 sshd-session[6537]: pam_unix(sshd:session): session closed for user core Jul 1 08:40:07.044415 systemd[1]: sshd@26-10.0.0.80:22-10.0.0.1:41454.service: Deactivated successfully. Jul 1 08:40:07.049592 systemd[1]: session-27.scope: Deactivated successfully. Jul 1 08:40:07.054301 systemd-logind[1560]: Session 27 logged out. Waiting for processes to exit. Jul 1 08:40:07.058973 systemd-logind[1560]: Removed session 27.