Sep 9 00:29:57.931709 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Sep 8 22:16:40 -00 2025
Sep 9 00:29:57.931753 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c495f73c03808403ea4f55eb54c843aae6678d256d64068b1371f8afce28979a
Sep 9 00:29:57.931768 kernel: BIOS-provided physical RAM map:
Sep 9 00:29:57.931777 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 9 00:29:57.931786 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 9 00:29:57.931795 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 9 00:29:57.931806 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 9 00:29:57.931815 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 9 00:29:57.931831 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 9 00:29:57.931840 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 9 00:29:57.931849 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Sep 9 00:29:57.931858 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 9 00:29:57.931867 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 9 00:29:57.931876 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 9 00:29:57.931890 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 9 00:29:57.931900 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 9 00:29:57.931917 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 9 00:29:57.931927 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 9 00:29:57.931936 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 9 00:29:57.931946 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 9 00:29:57.931955 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 9 00:29:57.931965 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 9 00:29:57.931974 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 9 00:29:57.931984 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 9 00:29:57.931993 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 9 00:29:57.932006 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 9 00:29:57.932015 kernel: NX (Execute Disable) protection: active
Sep 9 00:29:57.932025 kernel: APIC: Static calls initialized
Sep 9 00:29:57.932034 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Sep 9 00:29:57.932044 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Sep 9 00:29:57.932053 kernel: extended physical RAM map:
Sep 9 00:29:57.932063 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 9 00:29:57.932072 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 9 00:29:57.932082 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 9 00:29:57.932091 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 9 00:29:57.932109 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 9 00:29:57.932122 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 9 00:29:57.932132 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 9 00:29:57.932141 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Sep 9 00:29:57.932184 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Sep 9 00:29:57.932201 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Sep 9 00:29:57.932211 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Sep 9 00:29:57.932225 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Sep 9 00:29:57.932235 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 9 00:29:57.932245 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 9 00:29:57.932255 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 9 00:29:57.932266 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 9 00:29:57.932276 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 9 00:29:57.932286 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 9 00:29:57.932296 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 9 00:29:57.932306 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 9 00:29:57.932319 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 9 00:29:57.932329 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 9 00:29:57.932339 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 9 00:29:57.932349 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 9 00:29:57.932359 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 9 00:29:57.932369 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 9 00:29:57.932378 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 9 00:29:57.932394 kernel: efi: EFI v2.7 by EDK II
Sep 9 00:29:57.932405 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Sep 9 00:29:57.932415 kernel: random: crng init done
Sep 9 00:29:57.932428 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Sep 9 00:29:57.932438 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Sep 9 00:29:57.932455 kernel: secureboot: Secure boot disabled
Sep 9 00:29:57.932465 kernel: SMBIOS 2.8 present.
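The e820 map above is the firmware's authoritative inventory of physical RAM, and everything the kernel allocates later has to fit inside the "usable" ranges. A minimal sketch (not part of the boot flow; the line shape and region names are taken from the BIOS-e820 entries above) that tallies usable memory from a dmesg-style dump:

    import re

    # Matches e.g. "BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable"
    E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (.+)$")

    def usable_bytes(dmesg_text: str) -> int:
        """Sum the sizes of all 'usable' e820 regions in a dmesg dump."""
        total = 0
        for line in dmesg_text.splitlines():
            m = E820_RE.search(line)
            if m and m.group(3).strip() == "usable":
                start, end = int(m.group(1), 16), int(m.group(2), 16)
                total += end - start + 1  # e820 ranges are inclusive
        return total

Summed over the map above, the usable regions come to roughly 2.45 GiB, consistent with the "Memory: 2424724K/2565800K available" line the kernel prints further down.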
Sep 9 00:29:57.932475 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 9 00:29:57.932485 kernel: DMI: Memory slots populated: 1/1
Sep 9 00:29:57.932495 kernel: Hypervisor detected: KVM
Sep 9 00:29:57.932507 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 9 00:29:57.932519 kernel: kvm-clock: using sched offset of 4743828927 cycles
Sep 9 00:29:57.932533 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 9 00:29:57.932550 kernel: tsc: Detected 2794.748 MHz processor
Sep 9 00:29:57.932564 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 9 00:29:57.932580 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 9 00:29:57.932593 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Sep 9 00:29:57.932606 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 9 00:29:57.932619 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 9 00:29:57.932635 kernel: Using GB pages for direct mapping
Sep 9 00:29:57.932649 kernel: ACPI: Early table checksum verification disabled
Sep 9 00:29:57.932662 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 9 00:29:57.932675 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 00:29:57.932688 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:29:57.932704 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:29:57.932715 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 9 00:29:57.932725 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:29:57.932736 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:29:57.932746 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:29:57.932757 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:29:57.932767 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 9 00:29:57.932778 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 9 00:29:57.932788 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 9 00:29:57.932802 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 9 00:29:57.932812 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 9 00:29:57.932822 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 9 00:29:57.932833 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 9 00:29:57.932843 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 9 00:29:57.932853 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 9 00:29:57.932863 kernel: No NUMA configuration found
Sep 9 00:29:57.932874 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Sep 9 00:29:57.932884 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Sep 9 00:29:57.932897 kernel: Zone ranges:
Sep 9 00:29:57.932908 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 9 00:29:57.932919 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Sep 9 00:29:57.932929 kernel: Normal empty
Sep 9 00:29:57.932952 kernel: Device empty
Sep 9 00:29:57.932964 kernel: Movable zone start for each node
Sep 9 00:29:57.932974 kernel: Early memory node ranges
Sep 9 00:29:57.932984 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 9 00:29:57.932995 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 9 00:29:57.933008 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 9 00:29:57.933023 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Sep 9 00:29:57.933033 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Sep 9 00:29:57.933044 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Sep 9 00:29:57.933054 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Sep 9 00:29:57.933064 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Sep 9 00:29:57.933075 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Sep 9 00:29:57.933089 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 9 00:29:57.933120 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 9 00:29:57.933145 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 9 00:29:57.933181 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 9 00:29:57.933191 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Sep 9 00:29:57.933203 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Sep 9 00:29:57.933217 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 9 00:29:57.933228 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 9 00:29:57.933239 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Sep 9 00:29:57.933250 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 9 00:29:57.933261 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 9 00:29:57.933275 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 9 00:29:57.933286 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 9 00:29:57.933297 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 9 00:29:57.933312 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 9 00:29:57.933322 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 9 00:29:57.933333 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 9 00:29:57.933344 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 9 00:29:57.933361 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 9 00:29:57.933372 kernel: TSC deadline timer available
Sep 9 00:29:57.933390 kernel: CPU topo: Max. logical packages: 1
Sep 9 00:29:57.933400 kernel: CPU topo: Max. logical dies: 1
Sep 9 00:29:57.933411 kernel: CPU topo: Max. dies per package: 1
Sep 9 00:29:57.933422 kernel: CPU topo: Max. threads per core: 1
Sep 9 00:29:57.933432 kernel: CPU topo: Num. cores per package: 4
Sep 9 00:29:57.933443 kernel: CPU topo: Num. threads per package: 4
Sep 9 00:29:57.933454 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 9 00:29:57.933465 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 9 00:29:57.933476 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 9 00:29:57.933490 kernel: kvm-guest: setup PV sched yield
Sep 9 00:29:57.933501 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 9 00:29:57.933512 kernel: Booting paravirtualized kernel on KVM
Sep 9 00:29:57.933523 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 9 00:29:57.933535 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 9 00:29:57.933546 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 9 00:29:57.933557 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 9 00:29:57.933568 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 9 00:29:57.933579 kernel: kvm-guest: PV spinlocks enabled
Sep 9 00:29:57.933593 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 9 00:29:57.933619 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c495f73c03808403ea4f55eb54c843aae6678d256d64068b1371f8afce28979a
Sep 9 00:29:57.933644 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 00:29:57.933656 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 00:29:57.933667 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 00:29:57.933678 kernel: Fallback order for Node 0: 0
Sep 9 00:29:57.933689 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Sep 9 00:29:57.933700 kernel: Policy zone: DMA32
Sep 9 00:29:57.933716 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 00:29:57.933727 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 00:29:57.933745 kernel: ftrace: allocating 40099 entries in 157 pages
Sep 9 00:29:57.933756 kernel: ftrace: allocated 157 pages with 5 groups
Sep 9 00:29:57.933767 kernel: Dynamic Preempt: voluntary
Sep 9 00:29:57.933778 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 00:29:57.933790 kernel: rcu: RCU event tracing is enabled.
Sep 9 00:29:57.933800 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 00:29:57.933811 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 00:29:57.933827 kernel: Rude variant of Tasks RCU enabled.
Sep 9 00:29:57.933838 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 00:29:57.933850 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 00:29:57.933866 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 00:29:57.933877 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:29:57.933888 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:29:57.933900 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
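Several later stages of this boot (the dracut cmdline hook, Ignition, verity-setup for /dev/mapper/usr) key off parameters in the "Kernel command line:" entry above; note the duplicated "rootflags=rw mount.usrflags=ro", presumably prepended by the bootloader. A minimal sketch, assuming the line is read from /proc/cmdline, of splitting it into a key/value map:

    def parse_cmdline(path: str = "/proc/cmdline") -> dict:
        """Split a kernel command line into {key: value-or-None}.

        Repeated keys (like the duplicated rootflags=rw above) keep the
        last occurrence; bare flags map to None.
        """
        params = {}
        with open(path) as f:
            for token in f.read().split():
                key, sep, value = token.partition("=")
                params[key] = value if sep else None
        return params

    # e.g. parse_cmdline()["verity.usrhash"] yields the root hash that
    # verity-setup.service later uses when opening /dev/mapper/usr.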
Sep 9 00:29:57.933911 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 9 00:29:57.933922 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 00:29:57.933936 kernel: Console: colour dummy device 80x25
Sep 9 00:29:57.933947 kernel: printk: legacy console [ttyS0] enabled
Sep 9 00:29:57.933958 kernel: ACPI: Core revision 20240827
Sep 9 00:29:57.933969 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 9 00:29:57.933980 kernel: APIC: Switch to symmetric I/O mode setup
Sep 9 00:29:57.933991 kernel: x2apic enabled
Sep 9 00:29:57.934002 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 9 00:29:57.934013 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 9 00:29:57.934024 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 9 00:29:57.934038 kernel: kvm-guest: setup PV IPIs
Sep 9 00:29:57.934049 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 9 00:29:57.934060 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 9 00:29:57.934072 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 9 00:29:57.934092 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 9 00:29:57.934113 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 9 00:29:57.934124 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 9 00:29:57.934135 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 9 00:29:57.934162 kernel: Spectre V2 : Mitigation: Retpolines
Sep 9 00:29:57.934179 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 9 00:29:57.934190 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 9 00:29:57.934201 kernel: active return thunk: retbleed_return_thunk
Sep 9 00:29:57.934212 kernel: RETBleed: Mitigation: untrained return thunk
Sep 9 00:29:57.934227 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 9 00:29:57.934238 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 9 00:29:57.934250 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 9 00:29:57.934262 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 9 00:29:57.934286 kernel: active return thunk: srso_return_thunk
Sep 9 00:29:57.934297 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 9 00:29:57.934308 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 9 00:29:57.934319 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 9 00:29:57.934330 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 9 00:29:57.934341 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 9 00:29:57.934352 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 9 00:29:57.934363 kernel: Freeing SMP alternatives memory: 32K
Sep 9 00:29:57.934375 kernel: pid_max: default: 32768 minimum: 301
Sep 9 00:29:57.934390 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 00:29:57.934400 kernel: landlock: Up and running.
Sep 9 00:29:57.934412 kernel: SELinux: Initializing.
Sep 9 00:29:57.934423 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:29:57.934434 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:29:57.934446 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 9 00:29:57.934456 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 9 00:29:57.934467 kernel: ... version: 0
Sep 9 00:29:57.934478 kernel: ... bit width: 48
Sep 9 00:29:57.934492 kernel: ... generic registers: 6
Sep 9 00:29:57.934503 kernel: ... value mask: 0000ffffffffffff
Sep 9 00:29:57.934514 kernel: ... max period: 00007fffffffffff
Sep 9 00:29:57.934525 kernel: ... fixed-purpose events: 0
Sep 9 00:29:57.934536 kernel: ... event mask: 000000000000003f
Sep 9 00:29:57.934547 kernel: signal: max sigframe size: 1776
Sep 9 00:29:57.934558 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 00:29:57.934573 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 00:29:57.934584 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 00:29:57.934595 kernel: smp: Bringing up secondary CPUs ...
Sep 9 00:29:57.934610 kernel: smpboot: x86: Booting SMP configuration:
Sep 9 00:29:57.934621 kernel: .... node #0, CPUs: #1 #2 #3
Sep 9 00:29:57.934632 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 00:29:57.934653 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 9 00:29:57.934675 kernel: Memory: 2424724K/2565800K available (14336K kernel code, 2428K rwdata, 9956K rodata, 53832K init, 1088K bss, 135148K reserved, 0K cma-reserved)
Sep 9 00:29:57.934702 kernel: devtmpfs: initialized
Sep 9 00:29:57.934722 kernel: x86/mm: Memory block size: 128MB
Sep 9 00:29:57.934746 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 9 00:29:57.934758 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 9 00:29:57.934774 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Sep 9 00:29:57.934785 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 9 00:29:57.934796 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Sep 9 00:29:57.934808 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 9 00:29:57.934828 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 00:29:57.934844 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 00:29:57.934864 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 00:29:57.934879 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 00:29:57.934893 kernel: audit: initializing netlink subsys (disabled)
Sep 9 00:29:57.934904 kernel: audit: type=2000 audit(1757377794.742:1): state=initialized audit_enabled=0 res=1
Sep 9 00:29:57.934915 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 00:29:57.934926 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 9 00:29:57.934937 kernel: cpuidle: using governor menu
Sep 9 00:29:57.934948 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 00:29:57.934959 kernel: dca service started, version 1.12.1
Sep 9 00:29:57.934970 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 9 00:29:57.934981 kernel: PCI: Using configuration type 1 for base access
Sep 9 00:29:57.934996 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 9 00:29:57.935007 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 00:29:57.935017 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 00:29:57.935028 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 00:29:57.935040 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 00:29:57.935050 kernel: ACPI: Added _OSI(Module Device)
Sep 9 00:29:57.935061 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 00:29:57.935072 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 00:29:57.935083 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 00:29:57.935097 kernel: ACPI: Interpreter enabled
Sep 9 00:29:57.935129 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 9 00:29:57.935140 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 9 00:29:57.935170 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 9 00:29:57.935181 kernel: PCI: Using E820 reservations for host bridge windows
Sep 9 00:29:57.935192 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 9 00:29:57.935203 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 00:29:57.935513 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 00:29:57.935679 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 9 00:29:57.935836 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 9 00:29:57.935852 kernel: PCI host bridge to bus 0000:00
Sep 9 00:29:57.936027 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 9 00:29:57.936206 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 9 00:29:57.936358 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 9 00:29:57.936650 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 9 00:29:57.936835 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 9 00:29:57.936992 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 9 00:29:57.938037 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 00:29:57.938292 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 9 00:29:57.938526 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 9 00:29:57.938689 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 9 00:29:57.938851 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 9 00:29:57.939007 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 9 00:29:57.939188 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 9 00:29:57.939367 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 9 00:29:57.939524 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 9 00:29:57.939677 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 9 00:29:57.939829 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 9 00:29:57.940033 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 9 00:29:57.940245 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 9 00:29:57.940408 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 9 00:29:57.940562 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 9 00:29:57.940783 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 9 00:29:57.940972 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 9 00:29:57.941139 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 9 00:29:57.941400 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 9 00:29:57.941597 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 9 00:29:57.941787 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 9 00:29:57.941949 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 9 00:29:57.942134 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 9 00:29:57.942322 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 9 00:29:57.942472 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 9 00:29:57.942671 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 9 00:29:57.942830 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 9 00:29:57.942847 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 9 00:29:57.942859 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 9 00:29:57.942871 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 9 00:29:57.942883 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 9 00:29:57.942894 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 9 00:29:57.942905 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 9 00:29:57.942921 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 9 00:29:57.942932 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 9 00:29:57.942943 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 9 00:29:57.942954 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 9 00:29:57.942966 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 9 00:29:57.942977 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 9 00:29:57.942989 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 9 00:29:57.943000 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 9 00:29:57.943011 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 9 00:29:57.943026 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 9 00:29:57.943037 kernel: iommu: Default domain type: Translated
Sep 9 00:29:57.943048 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 9 00:29:57.943060 kernel: efivars: Registered efivars operations
Sep 9 00:29:57.943071 kernel: PCI: Using ACPI for IRQ routing
Sep 9 00:29:57.943082 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 9 00:29:57.943094 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 9 00:29:57.943115 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Sep 9 00:29:57.943126 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Sep 9 00:29:57.943141 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Sep 9 00:29:57.943172 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Sep 9 00:29:57.943183 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Sep 9 00:29:57.943195 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Sep 9 00:29:57.943205 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Sep 9 00:29:57.943364 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 9 00:29:57.943533 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 9 00:29:57.943685 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 9 00:29:57.943706 kernel: vgaarb: loaded
Sep 9 00:29:57.943718 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 9 00:29:57.943729 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 9 00:29:57.943740 kernel: clocksource: Switched to clocksource kvm-clock
Sep 9 00:29:57.943751 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 00:29:57.943762 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 00:29:57.943774 kernel: pnp: PnP ACPI init
Sep 9 00:29:57.944966 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 9 00:29:57.944995 kernel: pnp: PnP ACPI: found 6 devices
Sep 9 00:29:57.945007 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 9 00:29:57.945018 kernel: NET: Registered PF_INET protocol family
Sep 9 00:29:57.945029 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 00:29:57.945041 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 00:29:57.945052 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 00:29:57.945064 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 00:29:57.945075 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 00:29:57.945090 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 00:29:57.945110 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:29:57.945122 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:29:57.945133 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 00:29:57.945144 kernel: NET: Registered PF_XDP protocol family
Sep 9 00:29:57.945323 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 9 00:29:57.945479 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 9 00:29:57.945630 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 9 00:29:57.945774 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 9 00:29:57.945918 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 9 00:29:57.946057 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 9 00:29:57.946233 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 9 00:29:57.946373 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 9 00:29:57.946389 kernel: PCI: CLS 0 bytes, default 64
Sep 9 00:29:57.946402 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 9 00:29:57.946413 kernel: Initialise system trusted keyrings
Sep 9 00:29:57.946430 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 00:29:57.946440 kernel: Key type asymmetric registered
Sep 9 00:29:57.946452 kernel: Asymmetric key parser 'x509' registered
Sep 9 00:29:57.946463 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 9 00:29:57.946475 kernel: io scheduler mq-deadline registered
Sep 9 00:29:57.946486 kernel: io scheduler kyber registered
Sep 9 00:29:57.946497 kernel: io scheduler bfq registered
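The "(order: N, ... bytes)" annotations on the hash-table lines above record how many contiguous pages each table consumed: order n means 2^n pages of 4096 bytes. A small sketch of that arithmetic (the 8-byte bucket size in the example is an assumption for illustration, not a value read from the log):

    import math

    PAGE_SIZE = 4096

    def alloc_order(num_entries: int, entry_bytes: int) -> int:
        """Smallest n such that 2**n pages hold the whole table."""
        pages = math.ceil(num_entries * entry_bytes / PAGE_SIZE)
        return max(0, math.ceil(math.log2(pages)))

    # "TCP established hash table entries: 32768 (order: 6, 262144 bytes)"
    # 32768 entries * 8 bytes = 262144 bytes = 64 pages = order 6.
    assert alloc_order(32768, 8) == 6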
Sep 9 00:29:57.946514 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 9 00:29:57.946526 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 9 00:29:57.946538 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 9 00:29:57.946549 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 9 00:29:57.946561 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 00:29:57.946572 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 9 00:29:57.946584 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 9 00:29:57.946596 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 9 00:29:57.946607 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 9 00:29:57.946789 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 9 00:29:57.946940 kernel: rtc_cmos 00:04: registered as rtc0
Sep 9 00:29:57.947084 kernel: rtc_cmos 00:04: setting system clock to 2025-09-09T00:29:57 UTC (1757377797)
Sep 9 00:29:57.947304 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 9 00:29:57.947322 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 9 00:29:57.947334 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 9 00:29:57.947346 kernel: efifb: probing for efifb
Sep 9 00:29:57.947362 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 9 00:29:57.947373 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 9 00:29:57.947385 kernel: efifb: scrolling: redraw
Sep 9 00:29:57.947396 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 9 00:29:57.947408 kernel: Console: switching to colour frame buffer device 160x50
Sep 9 00:29:57.947419 kernel: fb0: EFI VGA frame buffer device
Sep 9 00:29:57.947431 kernel: pstore: Using crash dump compression: deflate
Sep 9 00:29:57.947442 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 9 00:29:57.947454 kernel: NET: Registered PF_INET6 protocol family
Sep 9 00:29:57.947465 kernel: Segment Routing with IPv6
Sep 9 00:29:57.947480 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 00:29:57.947491 kernel: NET: Registered PF_PACKET protocol family
Sep 9 00:29:57.947502 kernel: Key type dns_resolver registered
Sep 9 00:29:57.947514 kernel: IPI shorthand broadcast: enabled
Sep 9 00:29:57.947525 kernel: sched_clock: Marking stable (4078005948, 214603520)->(4338496279, -45886811)
Sep 9 00:29:57.947537 kernel: registered taskstats version 1
Sep 9 00:29:57.947548 kernel: Loading compiled-in X.509 certificates
Sep 9 00:29:57.947560 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 08d0986253b18b7fd74c2cc5404da4ba92260e75'
Sep 9 00:29:57.947571 kernel: Demotion targets for Node 0: null
Sep 9 00:29:57.947586 kernel: Key type .fscrypt registered
Sep 9 00:29:57.947597 kernel: Key type fscrypt-provisioning registered
Sep 9 00:29:57.947608 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 00:29:57.947620 kernel: ima: Allocated hash algorithm: sha1
Sep 9 00:29:57.947631 kernel: ima: No architecture policies found
Sep 9 00:29:57.947643 kernel: clk: Disabling unused clocks
Sep 9 00:29:57.947654 kernel: Warning: unable to open an initial console.
Sep 9 00:29:57.947666 kernel: Freeing unused kernel image (initmem) memory: 53832K
Sep 9 00:29:57.947681 kernel: Write protecting the kernel read-only data: 24576k
Sep 9 00:29:57.947692 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Sep 9 00:29:57.947704 kernel: Run /init as init process
Sep 9 00:29:57.947715 kernel: with arguments:
Sep 9 00:29:57.947726 kernel: /init
Sep 9 00:29:57.947737 kernel: with environment:
Sep 9 00:29:57.947748 kernel: HOME=/
Sep 9 00:29:57.947759 kernel: TERM=linux
Sep 9 00:29:57.947770 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 00:29:57.947783 systemd[1]: Successfully made /usr/ read-only.
Sep 9 00:29:57.947802 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 00:29:57.947816 systemd[1]: Detected virtualization kvm.
Sep 9 00:29:57.947827 systemd[1]: Detected architecture x86-64.
Sep 9 00:29:57.947840 systemd[1]: Running in initrd.
Sep 9 00:29:57.947851 systemd[1]: No hostname configured, using default hostname.
Sep 9 00:29:57.947864 systemd[1]: Hostname set to <localhost>.
Sep 9 00:29:57.947879 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 00:29:57.947892 systemd[1]: Queued start job for default target initrd.target.
Sep 9 00:29:57.947904 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 00:29:57.947916 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 00:29:57.947930 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 00:29:57.947942 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 00:29:57.947954 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 00:29:57.947968 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 00:29:57.947985 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 00:29:57.947997 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 00:29:57.948010 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 00:29:57.948022 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 00:29:57.948037 systemd[1]: Reached target paths.target - Path Units.
Sep 9 00:29:57.948050 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 00:29:57.948062 systemd[1]: Reached target swap.target - Swaps.
Sep 9 00:29:57.948075 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 00:29:57.948090 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 00:29:57.948114 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 00:29:57.948128 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 00:29:57.948143 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 9 00:29:57.948174 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 00:29:57.948190 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 00:29:57.948203 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 00:29:57.948215 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 00:29:57.948230 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 00:29:57.948242 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 00:29:57.948254 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 00:29:57.948267 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 9 00:29:57.948280 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 00:29:57.948292 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 00:29:57.948304 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 00:29:57.948316 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:29:57.948329 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 00:29:57.948345 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 00:29:57.948358 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 00:29:57.948370 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 00:29:57.948417 systemd-journald[220]: Collecting audit messages is disabled.
Sep 9 00:29:57.948450 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:29:57.948463 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 00:29:57.948476 systemd-journald[220]: Journal started
Sep 9 00:29:57.948504 systemd-journald[220]: Runtime Journal (/run/log/journal/b98e203d64ec48bbbae7df894118650c) is 6M, max 48.5M, 42.4M free.
Sep 9 00:29:57.931669 systemd-modules-load[222]: Inserted module 'overlay'
Sep 9 00:29:57.952998 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 00:29:57.953436 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 00:29:57.956688 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 00:29:57.964405 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 00:29:57.970181 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 00:29:57.972312 kernel: Bridge firewalling registered
Sep 9 00:29:57.971801 systemd-modules-load[222]: Inserted module 'br_netfilter'
Sep 9 00:29:57.973765 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 00:29:57.975551 systemd-tmpfiles[234]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 9 00:29:57.977223 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 00:29:57.979478 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 00:29:57.994448 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:29:57.996346 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
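The dev-disk-by\x2dlabel-… names in the "Expecting device" lines above are systemd's escaped device unit names: "/" becomes "-", and characters that would be ambiguous in a unit name (such as the "-" inside EFI-SYSTEM) become \xNN. A simplified sketch of that escaping (real systemd-escape handles further corner cases, e.g. leading dots and empty components):

    def systemd_escape_path(path: str) -> str:
        """Escape a device path into a systemd device unit name (sketch)."""
        out = []
        for ch in path.strip("/"):
            if ch == "/":
                out.append("-")                  # path separator
            elif ch.isalnum() or ch in ":_.":
                out.append(ch)                   # kept verbatim
            else:
                out.append(f"\\x{ord(ch):02x}")  # e.g. '-' -> \x2d
        return "".join(out)

    assert (systemd_escape_path("/dev/disk/by-label/EFI-SYSTEM")
            == "dev-disk-by\\x2dlabel-EFI\\x2dSYSTEM")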
Sep 9 00:29:58.001131 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 00:29:58.006833 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:29:58.013944 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 00:29:58.030763 dracut-cmdline[259]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c495f73c03808403ea4f55eb54c843aae6678d256d64068b1371f8afce28979a
Sep 9 00:29:58.072014 systemd-resolved[262]: Positive Trust Anchors:
Sep 9 00:29:58.072040 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 00:29:58.072075 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 00:29:58.075821 systemd-resolved[262]: Defaulting to hostname 'linux'.
Sep 9 00:29:58.077400 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 00:29:58.083513 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 00:29:58.158188 kernel: SCSI subsystem initialized
Sep 9 00:29:58.168179 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 00:29:58.179180 kernel: iscsi: registered transport (tcp)
Sep 9 00:29:58.201234 kernel: iscsi: registered transport (qla4xxx)
Sep 9 00:29:58.201301 kernel: QLogic iSCSI HBA Driver
Sep 9 00:29:58.222219 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 00:29:58.249657 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 00:29:58.251290 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 00:29:58.320473 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 00:29:58.322985 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 00:29:58.390209 kernel: raid6: avx2x4 gen() 27407 MB/s
Sep 9 00:29:58.407211 kernel: raid6: avx2x2 gen() 21323 MB/s
Sep 9 00:29:58.424342 kernel: raid6: avx2x1 gen() 24502 MB/s
Sep 9 00:29:58.424433 kernel: raid6: using algorithm avx2x4 gen() 27407 MB/s
Sep 9 00:29:58.442297 kernel: raid6: .... xor() 7121 MB/s, rmw enabled
Sep 9 00:29:58.442393 kernel: raid6: using avx2x2 recovery algorithm
Sep 9 00:29:58.464185 kernel: xor: automatically using best checksumming function avx
Sep 9 00:29:58.640214 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 00:29:58.650918 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 00:29:58.653643 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 00:29:58.688293 systemd-udevd[471]: Using default interface naming scheme 'v255'.
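The raid6 lines above show the kernel micro-benchmarking each gen() implementation at boot and keeping the fastest (the xor line is the same idea for checksumming). A toy rendering of that selection, using only the throughputs measured on this boot:

    # Throughputs reported by the raid6 benchmark above, in MB/s.
    results = {"avx2x4": 27407, "avx2x2": 21323, "avx2x1": 24502}
    best = max(results, key=results.get)
    assert best == "avx2x4"  # matches "raid6: using algorithm avx2x4 gen()"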
Sep 9 00:29:58.695006 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 00:29:58.697501 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 00:29:58.728470 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation
Sep 9 00:29:58.760263 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 00:29:58.761941 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 00:29:58.846714 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 00:29:58.849777 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 00:29:58.886185 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 9 00:29:58.890807 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 9 00:29:58.898917 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 9 00:29:58.898985 kernel: GPT:9289727 != 19775487
Sep 9 00:29:58.899000 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 9 00:29:58.899015 kernel: GPT:9289727 != 19775487
Sep 9 00:29:58.900925 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 9 00:29:58.901000 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:29:58.906182 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Sep 9 00:29:58.934212 kernel: cryptd: max_cpu_qlen set to 1000
Sep 9 00:29:58.937209 kernel: libata version 3.00 loaded.
Sep 9 00:29:58.940853 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 00:29:58.946382 kernel: AES CTR mode by8 optimization enabled
Sep 9 00:29:58.941865 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:29:58.944669 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:29:58.946991 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:29:58.958523 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 00:29:58.966666 kernel: ahci 0000:00:1f.2: version 3.0
Sep 9 00:29:58.966895 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 9 00:29:58.971945 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Sep 9 00:29:58.972201 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Sep 9 00:29:58.972380 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 9 00:29:58.989174 kernel: scsi host0: ahci
Sep 9 00:29:58.990503 kernel: scsi host1: ahci
Sep 9 00:29:58.992248 kernel: scsi host2: ahci
Sep 9 00:29:58.996176 kernel: scsi host3: ahci
Sep 9 00:29:58.997183 kernel: scsi host4: ahci
Sep 9 00:29:58.998812 kernel: scsi host5: ahci
Sep 9 00:29:58.998980 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1
Sep 9 00:29:58.998993 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1
Sep 9 00:29:58.999751 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1
Sep 9 00:29:59.001545 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1
Sep 9 00:29:59.001561 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1
Sep 9 00:29:59.002907 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 9 00:29:59.006609 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1
Sep 9 00:29:59.016323 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 9 00:29:59.025445 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 9 00:29:59.028608 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 9 00:29:59.040017 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 00:29:59.043251 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 00:29:59.045944 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 00:29:59.046012 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:29:59.049754 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:29:59.063348 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:29:59.066123 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 00:29:59.072610 disk-uuid[635]: Primary Header is updated.
Sep 9 00:29:59.072610 disk-uuid[635]: Secondary Entries is updated.
Sep 9 00:29:59.072610 disk-uuid[635]: Secondary Header is updated.
Sep 9 00:29:59.077167 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:29:59.106171 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:29:59.312188 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 9 00:29:59.312278 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 9 00:29:59.313205 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 9 00:29:59.314192 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 9 00:29:59.315209 kernel: ata3.00: LPM support broken, forcing max_power
Sep 9 00:29:59.315241 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 9 00:29:59.316325 kernel: ata3.00: applying bridge limits
Sep 9 00:29:59.317184 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 9 00:29:59.318189 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 9 00:29:59.318211 kernel: ata3.00: LPM support broken, forcing max_power
Sep 9 00:29:59.319203 kernel: ata3.00: configured for UDMA/100
Sep 9 00:29:59.329177 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 9 00:29:59.383219 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 9 00:29:59.383550 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 9 00:29:59.409193 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 9 00:29:59.824239 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 00:29:59.825458 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 00:29:59.826822 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 00:29:59.827175 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 00:29:59.828602 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 00:29:59.853022 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 00:30:00.084268 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:30:00.085430 disk-uuid[636]: The operation has completed successfully.
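The GPT complaints during the virtio disk probe above are simple arithmetic: a GPT backup header must live in the disk's last LBA, but this image was built for a smaller disk and then written to a 10.1 GB volume, so the primary header still points at the old location. The disk-uuid "Primary Header is updated" lines are it rewriting the GPT to match. The check, using only numbers from the log:

    def expected_alt_lba(total_sectors: int) -> int:
        """A GPT backup header belongs in the last LBA of the disk."""
        return total_sectors - 1

    # vda has 19775488 512-byte sectors, so the backup header should be
    # at LBA 19775487; the image's header points at 9289727 instead,
    # hence "GPT:9289727 != 19775487".
    assert expected_alt_lba(19775488) == 19775487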
Sep 9 00:30:00.116333 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 00:30:00.116477 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 00:30:00.149790 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 00:30:00.172774 sh[670]: Success
Sep 9 00:30:00.191947 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 00:30:00.192017 kernel: device-mapper: uevent: version 1.0.3
Sep 9 00:30:00.192031 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 9 00:30:00.201200 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 9 00:30:00.233960 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 00:30:00.237048 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 00:30:00.260113 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 00:30:00.267160 kernel: BTRFS: device fsid c483a4f4-f0a7-42f4-ac8d-111955dab3a7 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (682)
Sep 9 00:30:00.267206 kernel: BTRFS info (device dm-0): first mount of filesystem c483a4f4-f0a7-42f4-ac8d-111955dab3a7
Sep 9 00:30:00.267218 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 9 00:30:00.273422 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 00:30:00.273468 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 9 00:30:00.274942 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 00:30:00.277232 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 00:30:00.279542 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 00:30:00.280534 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 00:30:00.283134 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 00:30:00.309187 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (715)
Sep 9 00:30:00.309219 kernel: BTRFS info (device vda6): first mount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f
Sep 9 00:30:00.310225 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 9 00:30:00.313280 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 00:30:00.313341 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 00:30:00.318165 kernel: BTRFS info (device vda6): last unmount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f
Sep 9 00:30:00.320024 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 00:30:00.321516 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 00:30:00.408050 ignition[759]: Ignition 2.21.0 Sep 9 00:30:00.408067 ignition[759]: Stage: fetch-offline Sep 9 00:30:00.408102 ignition[759]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:30:00.408111 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:30:00.408215 ignition[759]: parsed url from cmdline: "" Sep 9 00:30:00.408219 ignition[759]: no config URL provided Sep 9 00:30:00.408224 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 00:30:00.408233 ignition[759]: no config at "/usr/lib/ignition/user.ign" Sep 9 00:30:00.408258 ignition[759]: op(1): [started] loading QEMU firmware config module Sep 9 00:30:00.408263 ignition[759]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 9 00:30:00.415974 ignition[759]: op(1): [finished] loading QEMU firmware config module Sep 9 00:30:00.431415 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 00:30:00.434656 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 00:30:00.459997 ignition[759]: parsing config with SHA512: 643d5e1d4c42f3334cb61e4d168d1a43a86330b54082e6f952d55dccfa519546234b341cc2ff7c46e3e07fffe940a00755ee88f73cd1db9cf786ef3c224eb68c Sep 9 00:30:00.466193 unknown[759]: fetched base config from "system" Sep 9 00:30:00.466208 unknown[759]: fetched user config from "qemu" Sep 9 00:30:00.466516 ignition[759]: fetch-offline: fetch-offline passed Sep 9 00:30:00.466569 ignition[759]: Ignition finished successfully Sep 9 00:30:00.469618 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 00:30:00.488396 systemd-networkd[860]: lo: Link UP Sep 9 00:30:00.488409 systemd-networkd[860]: lo: Gained carrier Sep 9 00:30:00.490056 systemd-networkd[860]: Enumeration completed Sep 9 00:30:00.490244 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 00:30:00.490488 systemd-networkd[860]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:30:00.490493 systemd-networkd[860]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:30:00.492913 systemd-networkd[860]: eth0: Link UP Sep 9 00:30:00.493090 systemd-networkd[860]: eth0: Gained carrier Sep 9 00:30:00.493099 systemd-networkd[860]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:30:00.494491 systemd[1]: Reached target network.target - Network. Sep 9 00:30:00.494745 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 9 00:30:00.499581 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 9 00:30:00.520258 systemd-networkd[860]: eth0: DHCPv4 address 10.0.0.142/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:30:00.534713 ignition[864]: Ignition 2.21.0 Sep 9 00:30:00.534731 ignition[864]: Stage: kargs Sep 9 00:30:00.534873 ignition[864]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:30:00.534884 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:30:00.535577 ignition[864]: kargs: kargs passed Sep 9 00:30:00.535626 ignition[864]: Ignition finished successfully Sep 9 00:30:00.541189 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 9 00:30:00.544421 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
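Annotation: before acting on the config it pulled from QEMU's fw_cfg interface, Ignition logs the config's SHA512. That digest is just a hash over the raw config bytes, which makes it easy to confirm after the fact which config a boot actually consumed. A minimal sketch (the local file name is hypothetical):

    import hashlib

    def config_sha512(path: str) -> str:
        h = hashlib.sha512()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # compare against the journal's "parsing config with SHA512: ..." value
    print(config_sha512("config.ign"))  # hypothetical local copy of the config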
Sep 9 00:30:00.586687 ignition[873]: Ignition 2.21.0 Sep 9 00:30:00.586708 ignition[873]: Stage: disks Sep 9 00:30:00.586849 ignition[873]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:30:00.586859 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:30:00.587557 ignition[873]: disks: disks passed Sep 9 00:30:00.590631 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 00:30:00.587607 ignition[873]: Ignition finished successfully Sep 9 00:30:00.591578 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 00:30:00.593499 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 00:30:00.593780 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 00:30:00.594135 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 00:30:00.594610 systemd[1]: Reached target basic.target - Basic System. Sep 9 00:30:00.596170 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 00:30:00.641340 systemd-fsck[884]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 9 00:30:00.654809 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 00:30:00.661192 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 00:30:00.682273 systemd-resolved[262]: Detected conflict on linux IN A 10.0.0.142 Sep 9 00:30:00.682295 systemd-resolved[262]: Hostname conflict, changing published hostname from 'linux' to 'linux6'. Sep 9 00:30:00.819185 kernel: EXT4-fs (vda9): mounted filesystem 4b59fff7-9272-4156-91f8-37989d927dc6 r/w with ordered data mode. Quota mode: none. Sep 9 00:30:00.819817 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 00:30:00.821553 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 00:30:00.825433 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 00:30:00.827491 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 9 00:30:00.828761 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 9 00:30:00.828818 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 00:30:00.828847 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 00:30:00.846971 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 00:30:00.849722 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 9 00:30:00.857872 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (892) Sep 9 00:30:00.857908 kernel: BTRFS info (device vda6): first mount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 00:30:00.857925 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:30:00.862593 kernel: BTRFS info (device vda6): turning on async discard Sep 9 00:30:00.862674 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 00:30:00.865779 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
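Annotation: systemd-fsck-root ran the filesystem check on ROOT above and it came back clean. fsck reports its result as an exit-status bitmask rather than a single code (per fsck(8): 1 = errors corrected, 2 = reboot suggested, 4 = errors left uncorrected, and so on), which is what decides whether boot may continue. A small decoder, as a sketch:

    FSCK_BITS = {
        1:   "filesystem errors corrected",
        2:   "system should be rebooted",
        4:   "filesystem errors left uncorrected",
        8:   "operational error",
        16:  "usage or syntax error",
        32:  "checking canceled by user request",
        128: "shared-library error",
    }

    def decode_fsck(status: int) -> list[str]:
        if status == 0:
            return ["no errors"]
        return [msg for bit, msg in FSCK_BITS.items() if status & bit]

    print(decode_fsck(0))  # the clean result seen above
    print(decode_fsck(5))  # 1 | 4: some errors fixed, others remain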
Sep 9 00:30:00.898565 initrd-setup-root[916]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 00:30:00.904085 initrd-setup-root[923]: cut: /sysroot/etc/group: No such file or directory Sep 9 00:30:00.909480 initrd-setup-root[930]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 00:30:00.914075 initrd-setup-root[937]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 00:30:01.049889 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 9 00:30:01.052256 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 00:30:01.053868 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 00:30:01.093440 kernel: BTRFS info (device vda6): last unmount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 00:30:01.106695 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 9 00:30:01.124666 ignition[1006]: INFO : Ignition 2.21.0 Sep 9 00:30:01.124666 ignition[1006]: INFO : Stage: mount Sep 9 00:30:01.126604 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:30:01.126604 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:30:01.126604 ignition[1006]: INFO : mount: mount passed Sep 9 00:30:01.126604 ignition[1006]: INFO : Ignition finished successfully Sep 9 00:30:01.128562 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 00:30:01.131227 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 00:30:01.266664 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 00:30:01.268915 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 00:30:01.301949 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1018) Sep 9 00:30:01.302027 kernel: BTRFS info (device vda6): first mount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 00:30:01.302039 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:30:01.306182 kernel: BTRFS info (device vda6): turning on async discard Sep 9 00:30:01.306223 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 00:30:01.308181 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
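Annotation: the "cut: ... No such file or directory" lines show initrd-setup-root probing /sysroot's account databases (passwd, group, shadow, gshadow) before they exist; the setup script extracts fields from those colon-separated records to seed the new root. The files are plain name:field:field lines, so a cut -d: -fN invocation amounts to a split, as this sketch shows (the record is hypothetical):

    def cut_fields(line: str, fields: tuple[int, ...], delim: str = ":") -> str:
        # 1-based field numbers, matching cut(1) semantics
        parts = line.rstrip("\n").split(delim)
        return delim.join(parts[i - 1] for i in fields if i <= len(parts))

    record = "core:x:500:500:hypothetical admin:/home/core:/bin/bash"
    print(cut_fields(record, (1,)))    # core            (cut -d: -f1)
    print(cut_fields(record, (1, 6)))  # core:/home/core (cut -d: -f1,6)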
Sep 9 00:30:01.342292 ignition[1035]: INFO : Ignition 2.21.0 Sep 9 00:30:01.342292 ignition[1035]: INFO : Stage: files Sep 9 00:30:01.344243 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:30:01.344243 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:30:01.344243 ignition[1035]: DEBUG : files: compiled without relabeling support, skipping Sep 9 00:30:01.344243 ignition[1035]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 00:30:01.344243 ignition[1035]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 00:30:01.350850 ignition[1035]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 00:30:01.350850 ignition[1035]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 00:30:01.350850 ignition[1035]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 00:30:01.350850 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 9 00:30:01.350850 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 9 00:30:01.348607 unknown[1035]: wrote ssh authorized keys file for user: core Sep 9 00:30:02.065458 systemd-networkd[860]: eth0: Gained IPv6LL Sep 9 00:30:09.027369 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 00:30:11.618823 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 9 00:30:11.621390 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 9 00:30:11.621390 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 00:30:11.621390 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 00:30:11.621390 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 00:30:11.621390 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:30:11.621390 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:30:11.621390 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:30:11.621390 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:30:11.736898 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:30:11.739916 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:30:11.739916 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 9 00:30:11.744980 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 9 00:30:11.744980 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 9 00:30:11.744980 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 9 00:30:12.326633 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 9 00:30:13.183880 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 9 00:30:13.183880 ignition[1035]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 9 00:30:13.188335 ignition[1035]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:30:13.191022 ignition[1035]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:30:13.191022 ignition[1035]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 9 00:30:13.191022 ignition[1035]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 9 00:30:13.195634 ignition[1035]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:30:13.195634 ignition[1035]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:30:13.195634 ignition[1035]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 9 00:30:13.195634 ignition[1035]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 9 00:30:13.216526 ignition[1035]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:30:13.223610 ignition[1035]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:30:13.225233 ignition[1035]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 9 00:30:13.225233 ignition[1035]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 9 00:30:13.225233 ignition[1035]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 00:30:13.225233 ignition[1035]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:30:13.225233 ignition[1035]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:30:13.225233 ignition[1035]: INFO : files: files passed Sep 9 00:30:13.225233 ignition[1035]: INFO : Ignition finished successfully Sep 9 00:30:13.231300 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 00:30:13.233631 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 9 00:30:13.239560 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
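Annotation: the files stage above writes the SSH keys, unit files, and two HTTPS-fetched artifacts (the Helm tarball and the Kubernetes sysext image), logging each download as "attempt #1" because Ignition retries transient failures. The pattern in miniature looks like the sketch below; this illustrates the logged behavior, it is not Ignition's code, and the destination path is hypothetical:

    import time
    import urllib.request

    def fetch_with_retries(url: str, dest: str, attempts: int = 5) -> None:
        for attempt in range(1, attempts + 1):
            try:
                print(f"GET {url}: attempt #{attempt}")
                with urllib.request.urlopen(url, timeout=30) as resp, \
                     open(dest, "wb") as out:
                    while chunk := resp.read(65536):
                        out.write(chunk)
                return
            except OSError:
                if attempt == attempts:
                    raise
                time.sleep(2 ** attempt)  # simple exponential backoff

    fetch_with_retries("https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz",
                       "/tmp/helm-v3.17.3-linux-amd64.tar.gz")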
Sep 9 00:30:13.254986 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 00:30:13.255212 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 9 00:30:13.258633 initrd-setup-root-after-ignition[1063]: grep: /sysroot/oem/oem-release: No such file or directory Sep 9 00:30:13.263777 initrd-setup-root-after-ignition[1066]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:30:13.263777 initrd-setup-root-after-ignition[1066]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:30:13.267486 initrd-setup-root-after-ignition[1070]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:30:13.266831 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 00:30:13.268680 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 9 00:30:13.271949 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 9 00:30:13.338615 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 00:30:13.338801 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 9 00:30:13.339794 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 00:30:13.342483 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 00:30:13.342866 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 00:30:13.343917 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 00:30:13.363202 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:30:13.365980 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 00:30:13.398586 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:30:13.401067 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:30:13.401727 systemd[1]: Stopped target timers.target - Timer Units. Sep 9 00:30:13.403893 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 00:30:13.404070 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:30:13.407419 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 00:30:13.407967 systemd[1]: Stopped target basic.target - Basic System. Sep 9 00:30:13.408459 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 00:30:13.408809 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 00:30:13.409137 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 00:30:13.409689 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 9 00:30:13.410035 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 9 00:30:13.410584 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 00:30:13.410959 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 00:30:13.411497 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 00:30:13.411850 systemd[1]: Stopped target swap.target - Swaps. Sep 9 00:30:13.412160 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 00:30:13.412288 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
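Annotation: the long run of "Stopped target ..." lines here is the initrd dismantling itself: units brought up on the way in are stopped in roughly the reverse of their dependency order before control passes to the real root. Conceptually this is a walk of the unit dependency graph; a toy sketch of "start in topological order, stop in reverse" (unit names and dependencies heavily simplified):

    from graphlib import TopologicalSorter

    # unit -> units it depends on (a simplified stand-in for Requires=/After=)
    deps = {
        "initrd.target":             {"ignition-files.service"},
        "ignition-files.service":    {"sysroot.mount"},
        "sysroot.mount":             {"systemd-fsck-root.service"},
        "systemd-fsck-root.service": set(),
    }

    start_order = list(TopologicalSorter(deps).static_order())
    print("start:", " -> ".join(start_order))
    print("stop: ", " -> ".join(reversed(start_order)))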
Sep 9 00:30:13.413027 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:30:13.413547 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:30:13.413877 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 9 00:30:13.414054 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:30:13.435790 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 00:30:13.435973 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 00:30:13.439715 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 00:30:13.439885 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 00:30:13.440512 systemd[1]: Stopped target paths.target - Path Units. Sep 9 00:30:13.442960 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 00:30:13.449262 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:30:13.452208 systemd[1]: Stopped target slices.target - Slice Units. Sep 9 00:30:13.452609 systemd[1]: Stopped target sockets.target - Socket Units. Sep 9 00:30:13.452949 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 00:30:13.453061 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 00:30:13.456316 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 00:30:13.456501 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 00:30:13.458371 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 00:30:13.458553 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 00:30:13.459457 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 00:30:13.459595 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 00:30:13.465067 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 00:30:13.465705 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 00:30:13.465842 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:30:13.469786 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 00:30:13.472661 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 00:30:13.472827 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:30:13.474259 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 00:30:13.474388 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 00:30:13.483445 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 00:30:13.487359 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 9 00:30:13.506321 ignition[1090]: INFO : Ignition 2.21.0 Sep 9 00:30:13.506321 ignition[1090]: INFO : Stage: umount Sep 9 00:30:13.508125 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:30:13.508125 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:30:13.508125 ignition[1090]: INFO : umount: umount passed Sep 9 00:30:13.508125 ignition[1090]: INFO : Ignition finished successfully Sep 9 00:30:13.510761 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 00:30:13.510946 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Sep 9 00:30:13.513212 systemd[1]: Stopped target network.target - Network. Sep 9 00:30:13.515067 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 00:30:13.515136 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 00:30:13.516429 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 00:30:13.516489 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 00:30:13.517072 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 00:30:13.517131 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 00:30:13.517590 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 00:30:13.517642 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 00:30:13.518034 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 00:30:13.523505 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 00:30:13.525027 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 00:30:13.531842 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 00:30:13.532028 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 00:30:13.537322 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 9 00:30:13.537725 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 00:30:13.537859 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 00:30:13.539481 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 00:30:13.539554 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 00:30:13.541098 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 00:30:13.541213 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:30:13.545846 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 9 00:30:13.546145 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 00:30:13.546321 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 00:30:13.549748 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 9 00:30:13.550285 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 9 00:30:13.550744 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 00:30:13.550818 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:30:13.554497 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 00:30:13.555358 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 00:30:13.555418 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 00:30:13.555758 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 00:30:13.555814 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:30:13.561513 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 00:30:13.561571 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 00:30:13.562073 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:30:13.563705 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 00:30:13.581287 systemd[1]: network-cleanup.service: Deactivated successfully. 
Sep 9 00:30:13.581426 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 00:30:13.585246 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 00:30:13.585478 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:30:13.586182 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 00:30:13.586246 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 00:30:13.588982 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 00:30:13.589027 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:30:13.589414 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 00:30:13.589470 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 00:30:13.590110 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 00:30:13.590171 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 00:30:13.590912 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 00:30:13.590962 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:30:13.592896 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 00:30:13.601639 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 9 00:30:13.601755 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:30:13.605599 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 00:30:13.605657 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:30:13.608915 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:30:13.608971 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:30:13.626986 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 00:30:13.627135 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 00:30:13.627823 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 00:30:13.632602 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 00:30:13.667684 systemd[1]: Switching root. Sep 9 00:30:13.716987 systemd-journald[220]: Journal stopped Sep 9 00:30:15.516129 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). Sep 9 00:30:15.516230 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 00:30:15.517639 kernel: SELinux: policy capability open_perms=1 Sep 9 00:30:15.517663 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 00:30:15.517679 kernel: SELinux: policy capability always_check_network=0 Sep 9 00:30:15.517694 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 00:30:15.517720 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 00:30:15.517745 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 00:30:15.517766 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 00:30:15.517785 kernel: SELinux: policy capability userspace_initial_context=0 Sep 9 00:30:15.517801 kernel: audit: type=1403 audit(1757377814.482:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 00:30:15.517818 systemd[1]: Successfully loaded SELinux policy in 50.383ms. Sep 9 00:30:15.517844 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.342ms. 
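Annotation: the journal keeps recording across the root switch: the initrd's journald receives SIGTERM while the new PID 1 loads the SELinux policy and relabels the API filesystems, reporting each duration. Millisecond figures like these are handy for boot profiling and are easy to harvest from raw journal text; a small helper, as a sketch:

    import re

    TIMING = re.compile(r"(?P<what>[A-Za-z][^.:]*?) in (?P<ms>[\d.]+) ?ms\.")

    log = """systemd[1]: Successfully loaded SELinux policy in 50.383ms.
    systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.342ms."""

    for m in TIMING.finditer(log):
        print(f"{m.group('ms'):>8} ms  {m.group('what').strip()}")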
Sep 9 00:30:15.517862 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 00:30:15.517879 systemd[1]: Detected virtualization kvm. Sep 9 00:30:15.517898 systemd[1]: Detected architecture x86-64. Sep 9 00:30:15.517914 systemd[1]: Detected first boot. Sep 9 00:30:15.517930 systemd[1]: Initializing machine ID from VM UUID. Sep 9 00:30:15.517947 zram_generator::config[1137]: No configuration found. Sep 9 00:30:15.517964 kernel: Guest personality initialized and is inactive Sep 9 00:30:15.517984 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 9 00:30:15.517999 kernel: Initialized host personality Sep 9 00:30:15.518026 kernel: NET: Registered PF_VSOCK protocol family Sep 9 00:30:15.518981 systemd[1]: Populated /etc with preset unit settings. Sep 9 00:30:15.519012 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 9 00:30:15.519030 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 00:30:15.519046 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 00:30:15.519063 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 00:30:15.519079 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 00:30:15.519096 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 00:30:15.519118 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 00:30:15.519135 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 00:30:15.519168 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 00:30:15.519189 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 00:30:15.519206 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 00:30:15.519222 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 00:30:15.519239 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:30:15.519255 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:30:15.519272 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 00:30:15.519288 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 00:30:15.519305 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 00:30:15.519326 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 00:30:15.519342 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 9 00:30:15.519358 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:30:15.519375 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:30:15.519391 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 00:30:15.519409 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. 
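Annotation: the systemd 256.8 banner above encodes compile-time options as +FEATURE/-FEATURE tokens (here, for instance, +SELINUX but -APPARMOR). Splitting that string is a quick way to query a build's capabilities programmatically; a sketch over an excerpt of the logged banner:

    banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP "
              "-GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL")  # excerpt

    features = {tok[1:]: tok.startswith("+") for tok in banner.split()}
    print("enabled: ", sorted(n for n, on in features.items() if on))
    print("APPARMOR compiled in?", features["APPARMOR"])  # False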
Sep 9 00:30:15.519425 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 00:30:15.519442 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 00:30:15.519463 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:30:15.519479 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 00:30:15.519494 systemd[1]: Reached target slices.target - Slice Units. Sep 9 00:30:15.519510 systemd[1]: Reached target swap.target - Swaps. Sep 9 00:30:15.519525 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 00:30:15.519542 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 00:30:15.519557 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 9 00:30:15.519573 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:30:15.519597 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 00:30:15.519617 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:30:15.519633 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 00:30:15.519652 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 00:30:15.519672 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 00:30:15.519692 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 00:30:15.519712 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:30:15.519733 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 00:30:15.519754 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 00:30:15.519774 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 00:30:15.519802 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 00:30:15.519822 systemd[1]: Reached target machines.target - Containers. Sep 9 00:30:15.519843 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 00:30:15.519863 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:30:15.519880 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 00:30:15.519896 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 00:30:15.519912 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:30:15.519928 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:30:15.519948 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:30:15.519964 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 00:30:15.519980 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:30:15.519997 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 00:30:15.520013 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 00:30:15.520030 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. 
Sep 9 00:30:15.520046 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 00:30:15.520062 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 00:30:15.520079 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 00:30:15.520098 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 00:30:15.520115 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 00:30:15.520132 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 00:30:15.520165 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 00:30:15.520215 systemd-journald[1201]: Collecting audit messages is disabled. Sep 9 00:30:15.520248 kernel: fuse: init (API version 7.41) Sep 9 00:30:15.520265 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 9 00:30:15.520281 kernel: loop: module loaded Sep 9 00:30:15.520297 systemd-journald[1201]: Journal started Sep 9 00:30:15.520328 systemd-journald[1201]: Runtime Journal (/run/log/journal/b98e203d64ec48bbbae7df894118650c) is 6M, max 48.5M, 42.4M free. Sep 9 00:30:15.140551 systemd[1]: Queued start job for default target multi-user.target. Sep 9 00:30:15.166867 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 00:30:15.167413 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 00:30:15.557385 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 00:30:15.560529 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 00:30:15.560564 systemd[1]: Stopped verity-setup.service. Sep 9 00:30:15.564189 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:30:15.567169 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 00:30:15.568932 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 00:30:15.570248 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 00:30:15.571579 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 00:30:15.572791 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 00:30:15.574106 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 00:30:15.575509 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 00:30:15.579675 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:30:15.581427 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 00:30:15.581649 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 00:30:15.583294 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:30:15.583570 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:30:15.585271 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:30:15.585470 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:30:15.587133 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 00:30:15.587421 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Sep 9 00:30:15.589049 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:30:15.589359 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:30:15.591262 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 00:30:15.593029 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 00:30:15.597365 kernel: ACPI: bus type drm_connector registered Sep 9 00:30:15.598729 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:30:15.598951 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:30:15.606960 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:30:15.609940 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 00:30:15.612500 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 00:30:15.614566 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 00:30:15.615931 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 00:30:15.616020 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 00:30:15.618172 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 9 00:30:15.629264 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 00:30:15.630702 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:30:15.632956 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 00:30:15.637374 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 00:30:15.639256 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:30:15.641302 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 00:30:15.643273 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:30:15.644625 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:30:15.648013 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 00:30:15.651645 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 9 00:30:15.655550 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 00:30:15.655714 systemd-journald[1201]: Time spent on flushing to /var/log/journal/b98e203d64ec48bbbae7df894118650c is 18.951ms for 1070 entries. Sep 9 00:30:15.655714 systemd-journald[1201]: System Journal (/var/log/journal/b98e203d64ec48bbbae7df894118650c) is 8M, max 195.6M, 187.6M free. Sep 9 00:30:16.048345 systemd-journald[1201]: Received client request to flush runtime journal. 
Sep 9 00:30:16.048393 kernel: loop0: detected capacity change from 0 to 229808 Sep 9 00:30:16.048410 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 00:30:16.048423 kernel: loop1: detected capacity change from 0 to 146240 Sep 9 00:30:16.048454 kernel: loop2: detected capacity change from 0 to 113872 Sep 9 00:30:16.048467 kernel: loop3: detected capacity change from 0 to 229808 Sep 9 00:30:16.048480 kernel: loop4: detected capacity change from 0 to 146240 Sep 9 00:30:16.048494 kernel: loop5: detected capacity change from 0 to 113872 Sep 9 00:30:16.048507 zram_generator::config[1288]: No configuration found. Sep 9 00:30:15.658745 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 00:30:15.664417 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:30:15.675357 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:30:15.888705 (sd-merge)[1260]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 9 00:30:15.889310 (sd-merge)[1260]: Merged extensions into '/usr'. Sep 9 00:30:15.894886 systemd[1]: Reload requested from client PID 1240 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 00:30:15.894896 systemd[1]: Reloading... Sep 9 00:30:16.063050 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:30:16.151857 systemd[1]: Reloading finished in 256 ms. Sep 9 00:30:16.187541 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 00:30:16.189366 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 00:30:16.191048 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 00:30:16.199295 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 00:30:16.209489 systemd[1]: Starting ensure-sysext.service... Sep 9 00:30:16.211756 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 9 00:30:16.232023 systemd[1]: Reload requested from client PID 1335 ('systemctl') (unit ensure-sysext.service)... Sep 9 00:30:16.232039 systemd[1]: Reloading... Sep 9 00:30:16.277466 ldconfig[1235]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 00:30:16.398360 zram_generator::config[1364]: No configuration found. Sep 9 00:30:16.592698 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:30:16.677265 systemd[1]: Reloading finished in 444 ms. Sep 9 00:30:16.697568 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 00:30:16.753492 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 00:30:16.778370 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:30:16.778587 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:30:16.791426 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:30:16.793860 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
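Annotation: the (sd-merge) lines show systemd-sysext finding the containerd-flatcar, docker-flatcar and kubernetes extension images and merging them into '/usr', after which systemd reloads so the units shipped by the extensions become visible. The merge is an overlayfs stack with the extension trees layered over the base /usr; the sketch below assembles such a lowerdir option string (conceptual only; the /run/extensions mount points are hypothetical, not sysext's actual staging paths):

    def overlay_options(base: str, extensions: list[str]) -> str:
        # overlayfs resolves lowerdir left to right, topmost layer first,
        # so extension trees are listed ahead of the base /usr
        layers = [f"/run/extensions/{name}/usr" for name in extensions]
        return "lowerdir=" + ":".join(layers + [base])

    exts = ["containerd-flatcar", "docker-flatcar", "kubernetes"]
    print(overlay_options("/usr", exts))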
Sep 9 00:30:16.798375 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:30:16.799599 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:30:16.799823 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 00:30:16.800094 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:30:16.801540 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:30:16.801840 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:30:16.807572 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:30:16.807790 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:30:16.809621 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:30:16.809836 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:30:16.816868 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:30:16.817147 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:30:16.818935 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:30:16.821569 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:30:16.832701 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:30:16.834191 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:30:16.834406 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 00:30:16.834576 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:30:16.836393 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:30:16.836703 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:30:16.838472 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:30:16.838730 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:30:16.842207 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:30:16.842480 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:30:16.847859 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:30:16.848095 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:30:16.849487 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:30:16.851718 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:30:16.853861 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Sep 9 00:30:16.863131 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:30:16.864519 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:30:16.864653 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 00:30:16.864832 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:30:16.866376 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:30:16.866648 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:30:16.868703 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:30:16.868948 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:30:16.870519 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:30:16.870762 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:30:16.872506 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:30:16.872760 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:30:16.877424 systemd[1]: Finished ensure-sysext.service. Sep 9 00:30:16.946800 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:30:16.946885 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:30:17.101706 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 00:30:17.363531 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 00:30:17.366320 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 00:30:17.368815 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 00:30:17.394010 systemd-tmpfiles[1428]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 9 00:30:17.394045 systemd-tmpfiles[1428]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 9 00:30:17.394318 systemd-tmpfiles[1428]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 00:30:17.394642 systemd-tmpfiles[1428]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 00:30:17.395795 systemd-tmpfiles[1428]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 00:30:17.395939 systemd-tmpfiles[1427]: ACLs are not supported, ignoring. Sep 9 00:30:17.395954 systemd-tmpfiles[1427]: ACLs are not supported, ignoring. Sep 9 00:30:17.396208 systemd-tmpfiles[1428]: ACLs are not supported, ignoring. Sep 9 00:30:17.396326 systemd-tmpfiles[1428]: ACLs are not supported, ignoring. Sep 9 00:30:17.400457 systemd-tmpfiles[1428]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:30:17.400469 systemd-tmpfiles[1428]: Skipping /boot Sep 9 00:30:17.402683 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 9 00:30:17.414042 systemd-tmpfiles[1428]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:30:17.414056 systemd-tmpfiles[1428]: Skipping /boot Sep 9 00:30:17.480103 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:30:17.483700 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 00:30:17.486581 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 00:30:17.490901 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 00:30:17.498848 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 00:30:17.503487 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 00:30:17.508004 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 00:30:17.511731 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 00:30:17.512795 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 00:30:17.514694 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 9 00:30:17.523294 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:30:17.553374 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 00:30:17.561371 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 00:30:17.569504 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 00:30:17.574015 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 00:30:17.596997 systemd-udevd[1448]: Using default interface naming scheme 'v255'. Sep 9 00:30:17.601534 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 00:30:17.609975 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 00:30:17.613956 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 00:30:17.615457 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 00:30:17.622910 augenrules[1472]: No rules Sep 9 00:30:17.624910 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:30:17.669877 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 00:30:17.670428 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 00:30:17.683698 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 00:30:17.745650 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 9 00:30:17.810663 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 00:30:17.813080 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 00:30:17.813615 kernel: mousedev: PS/2 mouse device common for all mice Sep 9 00:30:17.841188 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 9 00:30:17.846390 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 00:30:17.854992 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
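Annotation: the systemd-tmpfiles warnings a little above ("Duplicate line for path ..., ignoring") reflect its precedence rule: when multiple tmpfiles.d fragments declare the same path, the first line read wins and later duplicates are dropped with a warning. A sketch of that dedup pass (using the tmpfiles.d column layout, where the path is field two; the sample entries are hypothetical):

    def dedupe_tmpfiles(lines: list[str]) -> list[str]:
        seen, kept = set(), []
        for line in lines:
            fields = line.split()
            if len(fields) < 2 or fields[0].startswith("#"):
                continue                  # skip blanks and comments
            path = fields[1]              # column 2 holds the path
            if path in seen:
                print(f'Duplicate line for path "{path}", ignoring.')
                continue
            seen.add(path)
            kept.append(line)
        return kept

    dedupe_tmpfiles(["d /var/lib/nfs/sm 0700 - - -",
                     "d /var/lib/nfs/sm 0755 - - -"])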
Sep 9 00:30:17.857207 kernel: ACPI: button: Power Button [PWRF] Sep 9 00:30:17.857259 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 00:30:17.873104 systemd-resolved[1436]: Positive Trust Anchors: Sep 9 00:30:17.873137 systemd-resolved[1436]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:30:17.873194 systemd-resolved[1436]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 00:30:17.874294 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 9 00:30:17.874592 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 9 00:30:17.874803 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 9 00:30:17.877562 systemd-resolved[1436]: Defaulting to hostname 'linux'. Sep 9 00:30:17.880795 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 00:30:17.884813 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:30:17.887271 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 00:30:17.888542 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 00:30:17.889882 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 00:30:17.891163 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 9 00:30:17.892597 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 00:30:17.895387 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 00:30:17.896931 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 00:30:17.898801 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 00:30:17.898844 systemd[1]: Reached target paths.target - Path Units. Sep 9 00:30:17.899791 systemd[1]: Reached target timers.target - Timer Units. Sep 9 00:30:17.901981 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 00:30:17.904902 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 00:30:17.911710 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 9 00:30:17.914445 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 9 00:30:17.916266 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 9 00:30:17.926597 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 00:30:17.929840 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 9 00:30:17.929994 systemd-networkd[1506]: lo: Link UP Sep 9 00:30:17.930000 systemd-networkd[1506]: lo: Gained carrier Sep 9 00:30:17.932011 systemd-networkd[1506]: Enumeration completed Sep 9 00:30:17.932107 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
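The "Positive Trust Anchors" entry above is systemd-resolved loading the IANA root-zone DNSSEC trust anchor (the DS record for key tag 20326), while the negative anchors exempt private and special-use zones such as home.arpa and 168.192.in-addr.arpa from validation. A hedged pair of commands to inspect that state at runtime (illustrative, not part of the boot flow):

    resolvectl status              # shows DNSSEC mode and per-link DNS servers
    resolvectl query example.com   # reports whether the answer was authenticated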
Sep 9 00:30:17.932841 systemd-networkd[1506]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:30:17.932853 systemd-networkd[1506]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:30:17.933765 systemd-networkd[1506]: eth0: Link UP Sep 9 00:30:17.933991 systemd-networkd[1506]: eth0: Gained carrier Sep 9 00:30:17.934015 systemd-networkd[1506]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:30:17.934280 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 00:30:17.937341 systemd[1]: Reached target network.target - Network. Sep 9 00:30:17.939240 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 00:30:17.940274 systemd[1]: Reached target basic.target - Basic System. Sep 9 00:30:17.941303 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:30:17.941341 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:30:17.946303 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 00:30:17.950243 systemd-networkd[1506]: eth0: DHCPv4 address 10.0.0.142/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:30:17.952269 systemd-timesyncd[1438]: Network configuration changed, trying to establish connection. Sep 9 00:30:19.214121 systemd-resolved[1436]: Clock change detected. Flushing caches. Sep 9 00:30:19.214150 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 00:30:19.214203 systemd-timesyncd[1438]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 9 00:30:19.214245 systemd-timesyncd[1438]: Initial clock synchronization to Tue 2025-09-09 00:30:19.214088 UTC. Sep 9 00:30:19.221476 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 00:30:19.225988 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 00:30:19.229557 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 00:30:19.230585 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 00:30:19.236800 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 9 00:30:19.238984 jq[1541]: false Sep 9 00:30:19.272793 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 00:30:19.283592 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 00:30:19.288780 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 00:30:19.293934 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 00:30:19.299045 oslogin_cache_refresh[1543]: Refreshing passwd entry cache Sep 9 00:30:19.303569 google_oslogin_nss_cache[1543]: oslogin_cache_refresh[1543]: Refreshing passwd entry cache Sep 9 00:30:19.307728 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 00:30:19.310561 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
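eth0 matched the catch-all /usr/lib/systemd/network/zz-default.network, which is why networkd warns that the match is "based on potentially unpredictable interface name". A hedged sketch of that unit's shape and a check of the DHCPv4 lease logged above (the [Match]/[Network] body is indicative, not a verbatim copy of this host's file):

    cat /usr/lib/systemd/network/zz-default.network
    #   [Match]
    #   Name=*          # matches any interface name, hence the warning
    #   [Network]
    #   DHCP=yes
    networkctl status eth0   # should show 10.0.0.142/16 via gateway 10.0.0.1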
Sep 9 00:30:19.313378 google_oslogin_nss_cache[1543]: oslogin_cache_refresh[1543]: Failure getting users, quitting Sep 9 00:30:19.313378 google_oslogin_nss_cache[1543]: oslogin_cache_refresh[1543]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 00:30:19.313378 google_oslogin_nss_cache[1543]: oslogin_cache_refresh[1543]: Refreshing group entry cache Sep 9 00:30:19.313486 extend-filesystems[1542]: Found /dev/vda6 Sep 9 00:30:19.313176 oslogin_cache_refresh[1543]: Failure getting users, quitting Sep 9 00:30:19.313198 oslogin_cache_refresh[1543]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 00:30:19.313258 oslogin_cache_refresh[1543]: Refreshing group entry cache Sep 9 00:30:19.315649 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 00:30:19.318593 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 00:30:19.321387 google_oslogin_nss_cache[1543]: oslogin_cache_refresh[1543]: Failure getting groups, quitting Sep 9 00:30:19.321387 google_oslogin_nss_cache[1543]: oslogin_cache_refresh[1543]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 00:30:19.320568 oslogin_cache_refresh[1543]: Failure getting groups, quitting Sep 9 00:30:19.320580 oslogin_cache_refresh[1543]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 00:30:19.323010 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 00:30:19.324250 extend-filesystems[1542]: Found /dev/vda9 Sep 9 00:30:19.325142 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 00:30:19.331420 extend-filesystems[1542]: Checking size of /dev/vda9 Sep 9 00:30:19.333119 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 00:30:19.338100 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 00:30:19.340111 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 00:30:19.342382 extend-filesystems[1542]: Resized partition /dev/vda9 Sep 9 00:30:19.345008 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 00:30:19.345451 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 9 00:30:19.345698 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 9 00:30:19.346378 kernel: kvm_amd: TSC scaling supported Sep 9 00:30:19.346406 kernel: kvm_amd: Nested Virtualization enabled Sep 9 00:30:19.346419 kernel: kvm_amd: Nested Paging enabled Sep 9 00:30:19.346431 kernel: kvm_amd: LBR virtualization supported Sep 9 00:30:19.350120 jq[1567]: true Sep 9 00:30:19.351006 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 00:30:19.351277 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 00:30:19.351924 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 9 00:30:19.351961 kernel: kvm_amd: Virtual GIF supported Sep 9 00:30:19.355053 extend-filesystems[1573]: resize2fs 1.47.2 (1-Jan-2025) Sep 9 00:30:19.356877 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 00:30:19.357193 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 9 00:30:19.374910 update_engine[1562]: I20250909 00:30:19.371824 1562 main.cc:92] Flatcar Update Engine starting Sep 9 00:30:19.391766 (ntainerd)[1582]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 00:30:19.410636 jq[1575]: true Sep 9 00:30:19.421521 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:30:19.430389 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 9 00:30:19.445865 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 9 00:30:19.460647 tar[1574]: linux-amd64/LICENSE Sep 9 00:30:19.461006 tar[1574]: linux-amd64/helm Sep 9 00:30:19.480903 dbus-daemon[1536]: [system] SELinux support is enabled Sep 9 00:30:19.486931 update_engine[1562]: I20250909 00:30:19.486881 1562 update_check_scheduler.cc:74] Next update check in 11m23s Sep 9 00:30:19.488165 systemd-logind[1554]: Watching system buttons on /dev/input/event2 (Power Button) Sep 9 00:30:19.488192 systemd-logind[1554]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 9 00:30:19.491590 systemd-logind[1554]: New seat seat0. Sep 9 00:30:19.498397 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 00:30:19.500927 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 00:30:19.510388 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 9 00:30:19.515846 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 00:30:19.573565 kernel: EDAC MC: Ver: 3.0.0 Sep 9 00:30:19.517273 dbus-daemon[1536]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 9 00:30:19.515881 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 00:30:19.516161 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 00:30:19.516176 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 00:30:19.517472 systemd[1]: Started update-engine.service - Update Engine. Sep 9 00:30:19.520529 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 00:30:19.574104 extend-filesystems[1573]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 00:30:19.574104 extend-filesystems[1573]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 00:30:19.574104 extend-filesystems[1573]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 9 00:30:19.584456 extend-filesystems[1542]: Resized filesystem in /dev/vda9 Sep 9 00:30:19.577490 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 00:30:19.586778 bash[1607]: Updated "/home/core/.ssh/authorized_keys" Sep 9 00:30:19.577891 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 00:30:19.615945 locksmithd[1609]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 00:30:19.628957 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 00:30:19.632510 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
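extend-filesystems.service grew the root filesystem on /dev/vda9 online, from 553472 to 1864699 4k blocks, as the resize2fs output above records. A hedged approximation of what the unit does (device names from the log; the real service may wrap these steps differently):

    lsblk /dev/vda        # the partition already spans the enlarged disk
    resize2fs /dev/vda9   # online grow of the mounted ext4 root filesystem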
Sep 9 00:30:19.641754 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 9 00:30:19.719213 containerd[1582]: time="2025-09-09T00:30:19Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 9 00:30:19.720878 containerd[1582]: time="2025-09-09T00:30:19.720831147Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 9 00:30:19.734387 containerd[1582]: time="2025-09-09T00:30:19.733440574Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.03µs" Sep 9 00:30:19.734387 containerd[1582]: time="2025-09-09T00:30:19.733486189Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 9 00:30:19.734387 containerd[1582]: time="2025-09-09T00:30:19.733504273Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 9 00:30:19.734387 containerd[1582]: time="2025-09-09T00:30:19.733766184Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 9 00:30:19.734387 containerd[1582]: time="2025-09-09T00:30:19.733782736Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 9 00:30:19.734387 containerd[1582]: time="2025-09-09T00:30:19.733809235Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 00:30:19.734387 containerd[1582]: time="2025-09-09T00:30:19.733875449Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 00:30:19.734387 containerd[1582]: time="2025-09-09T00:30:19.733886039Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 00:30:19.734387 containerd[1582]: time="2025-09-09T00:30:19.734197624Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 00:30:19.734387 containerd[1582]: time="2025-09-09T00:30:19.734211049Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 00:30:19.734387 containerd[1582]: time="2025-09-09T00:30:19.734221017Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 00:30:19.734387 containerd[1582]: time="2025-09-09T00:30:19.734228391Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 9 00:30:19.734788 containerd[1582]: time="2025-09-09T00:30:19.734318350Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 9 00:30:19.734788 containerd[1582]: time="2025-09-09T00:30:19.734573088Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 00:30:19.734788 containerd[1582]: time="2025-09-09T00:30:19.734602803Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 00:30:19.734788 containerd[1582]: time="2025-09-09T00:30:19.734614776Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 9 00:30:19.734788 containerd[1582]: time="2025-09-09T00:30:19.734663738Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 9 00:30:19.735023 containerd[1582]: time="2025-09-09T00:30:19.734989739Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 9 00:30:19.735097 containerd[1582]: time="2025-09-09T00:30:19.735067174Z" level=info msg="metadata content store policy set" policy=shared Sep 9 00:30:19.741867 containerd[1582]: time="2025-09-09T00:30:19.741791835Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 9 00:30:19.741867 containerd[1582]: time="2025-09-09T00:30:19.741840186Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 9 00:30:19.741867 containerd[1582]: time="2025-09-09T00:30:19.741854012Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 9 00:30:19.741867 containerd[1582]: time="2025-09-09T00:30:19.741873459Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 9 00:30:19.741867 containerd[1582]: time="2025-09-09T00:30:19.741884429Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 9 00:30:19.742139 containerd[1582]: time="2025-09-09T00:30:19.741896582Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 9 00:30:19.742139 containerd[1582]: time="2025-09-09T00:30:19.741909025Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 9 00:30:19.742139 containerd[1582]: time="2025-09-09T00:30:19.741921258Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 9 00:30:19.742139 containerd[1582]: time="2025-09-09T00:30:19.741930265Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 9 00:30:19.742139 containerd[1582]: time="2025-09-09T00:30:19.741944452Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 9 00:30:19.742139 containerd[1582]: time="2025-09-09T00:30:19.741954390Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 9 00:30:19.742139 containerd[1582]: time="2025-09-09T00:30:19.741969789Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 9 00:30:19.742319 containerd[1582]: time="2025-09-09T00:30:19.742190663Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 9 00:30:19.742319 containerd[1582]: time="2025-09-09T00:30:19.742216211Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 9 00:30:19.742319 containerd[1582]: time="2025-09-09T00:30:19.742234726Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 9 00:30:19.742319 
containerd[1582]: time="2025-09-09T00:30:19.742246839Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 9 00:30:19.742319 containerd[1582]: time="2025-09-09T00:30:19.742259082Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 9 00:30:19.742319 containerd[1582]: time="2025-09-09T00:30:19.742272236Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 9 00:30:19.742319 containerd[1582]: time="2025-09-09T00:30:19.742286724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 9 00:30:19.742319 containerd[1582]: time="2025-09-09T00:30:19.742299568Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 9 00:30:19.742319 containerd[1582]: time="2025-09-09T00:30:19.742318443Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 9 00:30:19.742602 containerd[1582]: time="2025-09-09T00:30:19.742358999Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 9 00:30:19.742602 containerd[1582]: time="2025-09-09T00:30:19.742377804Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 9 00:30:19.742602 containerd[1582]: time="2025-09-09T00:30:19.742453316Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 9 00:30:19.742602 containerd[1582]: time="2025-09-09T00:30:19.742467543Z" level=info msg="Start snapshots syncer" Sep 9 00:30:19.742602 containerd[1582]: time="2025-09-09T00:30:19.742507287Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 9 00:30:19.742899 containerd[1582]: time="2025-09-09T00:30:19.742842616Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 9 00:30:19.743005 containerd[1582]: time="2025-09-09T00:30:19.742903190Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 9 00:30:19.743027 containerd[1582]: time="2025-09-09T00:30:19.743002997Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 9 00:30:19.743189 containerd[1582]: time="2025-09-09T00:30:19.743154240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 9 00:30:19.743189 containerd[1582]: time="2025-09-09T00:30:19.743184247Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 9 00:30:19.743234 containerd[1582]: time="2025-09-09T00:30:19.743198033Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 9 00:30:19.743234 containerd[1582]: time="2025-09-09T00:30:19.743213822Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 9 00:30:19.743234 containerd[1582]: time="2025-09-09T00:30:19.743227638Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 9 00:30:19.743297 containerd[1582]: time="2025-09-09T00:30:19.743239971Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 9 00:30:19.743297 containerd[1582]: time="2025-09-09T00:30:19.743252575Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 9 00:30:19.743297 containerd[1582]: time="2025-09-09T00:30:19.743284926Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 9 00:30:19.743363 containerd[1582]: 
time="2025-09-09T00:30:19.743297740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 9 00:30:19.743363 containerd[1582]: time="2025-09-09T00:30:19.743310373Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 9 00:30:19.743399 containerd[1582]: time="2025-09-09T00:30:19.743372991Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 00:30:19.743399 containerd[1582]: time="2025-09-09T00:30:19.743388209Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 00:30:19.743399 containerd[1582]: time="2025-09-09T00:30:19.743397116Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 00:30:19.743525 containerd[1582]: time="2025-09-09T00:30:19.743406303Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 00:30:19.743525 containerd[1582]: time="2025-09-09T00:30:19.743510879Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 9 00:30:19.743573 containerd[1582]: time="2025-09-09T00:30:19.743529113Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 9 00:30:19.743573 containerd[1582]: time="2025-09-09T00:30:19.743544743Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 9 00:30:19.743573 containerd[1582]: time="2025-09-09T00:30:19.743564259Z" level=info msg="runtime interface created" Sep 9 00:30:19.743573 containerd[1582]: time="2025-09-09T00:30:19.743570681Z" level=info msg="created NRI interface" Sep 9 00:30:19.743641 containerd[1582]: time="2025-09-09T00:30:19.743578877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 9 00:30:19.743641 containerd[1582]: time="2025-09-09T00:30:19.743589787Z" level=info msg="Connect containerd service" Sep 9 00:30:19.746548 containerd[1582]: time="2025-09-09T00:30:19.745263396Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 00:30:19.746548 containerd[1582]: time="2025-09-09T00:30:19.746164355Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:30:19.884196 containerd[1582]: time="2025-09-09T00:30:19.884141913Z" level=info msg="Start subscribing containerd event" Sep 9 00:30:19.884441 containerd[1582]: time="2025-09-09T00:30:19.884405216Z" level=info msg="Start recovering state" Sep 9 00:30:19.884540 containerd[1582]: time="2025-09-09T00:30:19.884496438Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 00:30:19.884657 containerd[1582]: time="2025-09-09T00:30:19.884583711Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 9 00:30:19.884697 containerd[1582]: time="2025-09-09T00:30:19.884614339Z" level=info msg="Start event monitor" Sep 9 00:30:19.886609 containerd[1582]: time="2025-09-09T00:30:19.884702033Z" level=info msg="Start cni network conf syncer for default" Sep 9 00:30:19.886654 containerd[1582]: time="2025-09-09T00:30:19.886606475Z" level=info msg="Start streaming server" Sep 9 00:30:19.886654 containerd[1582]: time="2025-09-09T00:30:19.886630229Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 9 00:30:19.886654 containerd[1582]: time="2025-09-09T00:30:19.886641901Z" level=info msg="runtime interface starting up..." Sep 9 00:30:19.886654 containerd[1582]: time="2025-09-09T00:30:19.886654114Z" level=info msg="starting plugins..." Sep 9 00:30:19.886767 containerd[1582]: time="2025-09-09T00:30:19.886693157Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 9 00:30:19.886972 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 00:30:19.888258 containerd[1582]: time="2025-09-09T00:30:19.888224549Z" level=info msg="containerd successfully booted in 0.169752s" Sep 9 00:30:20.033944 sshd_keygen[1569]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 00:30:20.139373 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 00:30:20.174657 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 00:30:20.205108 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 00:30:20.205516 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 00:30:20.208922 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 00:30:20.392814 tar[1574]: linux-amd64/README.md Sep 9 00:30:20.399494 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 00:30:20.402939 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 00:30:20.415562 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 9 00:30:20.416870 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 00:30:20.422203 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 00:30:20.542641 systemd-networkd[1506]: eth0: Gained IPv6LL Sep 9 00:30:20.546306 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 00:30:20.548374 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 00:30:20.551314 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 9 00:30:20.554406 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:30:20.557067 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 00:30:20.599561 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 00:30:20.601819 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 00:30:20.602091 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 9 00:30:20.604835 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 00:30:21.994776 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 00:30:21.997873 systemd[1]: Started sshd@0-10.0.0.142:22-10.0.0.1:40532.service - OpenSSH per-connection server daemon (10.0.0.1:40532). 
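The long config="{...}" dump a few entries back is the CRI plugin's effective configuration; note SystemdCgroup=true in the runc options, meaning runc delegates cgroup management to systemd. A hedged sketch of the corresponding containerd 2.x config.toml stanza (indicative only; this host's file may differ):

    containerd config default | grep -n SystemdCgroup
    #   [plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc.options]
    #     SystemdCgroup = true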
Sep 9 00:30:22.152095 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 40532 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:30:22.154308 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:30:22.167926 systemd-logind[1554]: New session 1 of user core. Sep 9 00:30:22.169429 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 00:30:22.200856 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 00:30:22.244078 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 00:30:22.279668 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 00:30:22.329497 (systemd)[1685]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:30:22.332776 systemd-logind[1554]: New session c1 of user core. Sep 9 00:30:22.574218 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:30:22.576164 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 00:30:22.586314 systemd[1685]: Queued start job for default target default.target. Sep 9 00:30:22.590833 (kubelet)[1696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:30:22.591774 systemd[1685]: Created slice app.slice - User Application Slice. Sep 9 00:30:22.591804 systemd[1685]: Reached target paths.target - Paths. Sep 9 00:30:22.591847 systemd[1685]: Reached target timers.target - Timers. Sep 9 00:30:22.593468 systemd[1685]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 00:30:22.605049 systemd[1685]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 00:30:22.605171 systemd[1685]: Reached target sockets.target - Sockets. Sep 9 00:30:22.605206 systemd[1685]: Reached target basic.target - Basic System. Sep 9 00:30:22.605245 systemd[1685]: Reached target default.target - Main User Target. Sep 9 00:30:22.605277 systemd[1685]: Startup finished in 260ms. Sep 9 00:30:22.605748 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 00:30:22.620509 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 00:30:22.656380 systemd[1]: Startup finished in 4.152s (kernel) + 16.829s (initrd) + 6.962s (userspace) = 27.943s. Sep 9 00:30:22.720688 systemd[1]: Started sshd@1-10.0.0.142:22-10.0.0.1:40536.service - OpenSSH per-connection server daemon (10.0.0.1:40536). Sep 9 00:30:22.800751 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 40536 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:30:22.802479 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:30:22.808068 systemd-logind[1554]: New session 2 of user core. Sep 9 00:30:22.815482 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 00:30:22.872563 sshd[1713]: Connection closed by 10.0.0.1 port 40536 Sep 9 00:30:22.873437 sshd-session[1706]: pam_unix(sshd:session): session closed for user core Sep 9 00:30:22.902060 systemd[1]: sshd@1-10.0.0.142:22-10.0.0.1:40536.service: Deactivated successfully. Sep 9 00:30:22.903813 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 00:30:22.904534 systemd-logind[1554]: Session 2 logged out. Waiting for processes to exit. 
Sep 9 00:30:22.907434 systemd[1]: Started sshd@2-10.0.0.142:22-10.0.0.1:40550.service - OpenSSH per-connection server daemon (10.0.0.1:40550). Sep 9 00:30:22.908168 systemd-logind[1554]: Removed session 2. Sep 9 00:30:22.970040 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 40550 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:30:22.971494 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:30:22.976752 systemd-logind[1554]: New session 3 of user core. Sep 9 00:30:22.983472 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 00:30:23.033733 sshd[1721]: Connection closed by 10.0.0.1 port 40550 Sep 9 00:30:23.034099 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Sep 9 00:30:23.046967 systemd[1]: sshd@2-10.0.0.142:22-10.0.0.1:40550.service: Deactivated successfully. Sep 9 00:30:23.048939 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 00:30:23.049860 systemd-logind[1554]: Session 3 logged out. Waiting for processes to exit. Sep 9 00:30:23.052939 systemd[1]: Started sshd@3-10.0.0.142:22-10.0.0.1:40562.service - OpenSSH per-connection server daemon (10.0.0.1:40562). Sep 9 00:30:23.053611 systemd-logind[1554]: Removed session 3. Sep 9 00:30:23.170040 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 40562 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:30:23.171894 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:30:23.176800 systemd-logind[1554]: New session 4 of user core. Sep 9 00:30:23.185616 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 00:30:23.303035 sshd[1730]: Connection closed by 10.0.0.1 port 40562 Sep 9 00:30:23.303493 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Sep 9 00:30:23.316578 systemd[1]: sshd@3-10.0.0.142:22-10.0.0.1:40562.service: Deactivated successfully. Sep 9 00:30:23.318717 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 00:30:23.319474 systemd-logind[1554]: Session 4 logged out. Waiting for processes to exit. Sep 9 00:30:23.323143 systemd[1]: Started sshd@4-10.0.0.142:22-10.0.0.1:40570.service - OpenSSH per-connection server daemon (10.0.0.1:40570). Sep 9 00:30:23.323845 systemd-logind[1554]: Removed session 4. Sep 9 00:30:23.421739 kubelet[1696]: E0909 00:30:23.421600 1696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:30:23.426844 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:30:23.427037 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:30:23.427460 systemd[1]: kubelet.service: Consumed 2.589s CPU time, 264.5M memory peak. Sep 9 00:30:23.431944 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 40570 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:30:23.434254 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:30:23.440128 systemd-logind[1554]: New session 5 of user core. Sep 9 00:30:23.456667 systemd[1]: Started session-5.scope - Session 5 of User core. 
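kubelet exits with status 1 above because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is normally written during "kubeadm init" or "kubeadm join". A minimal, illustrative file of the expected kind, not this node's real configuration:

    cat >/var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd    # matches the CRI runtime setting logged later
    EOF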
Sep 9 00:30:23.519292 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 00:30:23.519767 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:30:23.546375 sudo[1740]: pam_unix(sudo:session): session closed for user root Sep 9 00:30:23.548522 sshd[1739]: Connection closed by 10.0.0.1 port 40570 Sep 9 00:30:23.548973 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Sep 9 00:30:23.558048 systemd[1]: sshd@4-10.0.0.142:22-10.0.0.1:40570.service: Deactivated successfully. Sep 9 00:30:23.559799 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 00:30:23.560798 systemd-logind[1554]: Session 5 logged out. Waiting for processes to exit. Sep 9 00:30:23.564294 systemd[1]: Started sshd@5-10.0.0.142:22-10.0.0.1:40578.service - OpenSSH per-connection server daemon (10.0.0.1:40578). Sep 9 00:30:23.565085 systemd-logind[1554]: Removed session 5. Sep 9 00:30:23.630514 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 40578 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:30:23.632418 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:30:23.637355 systemd-logind[1554]: New session 6 of user core. Sep 9 00:30:23.648686 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 00:30:23.704931 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 00:30:23.705332 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:30:23.712766 sudo[1750]: pam_unix(sudo:session): session closed for user root Sep 9 00:30:23.719014 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 00:30:23.719312 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:30:23.728844 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 00:30:23.780658 augenrules[1772]: No rules Sep 9 00:30:23.782308 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 00:30:23.782619 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 00:30:23.783961 sudo[1749]: pam_unix(sudo:session): session closed for user root Sep 9 00:30:23.785530 sshd[1748]: Connection closed by 10.0.0.1 port 40578 Sep 9 00:30:23.785879 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Sep 9 00:30:23.798648 systemd[1]: sshd@5-10.0.0.142:22-10.0.0.1:40578.service: Deactivated successfully. Sep 9 00:30:23.800643 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 00:30:23.801451 systemd-logind[1554]: Session 6 logged out. Waiting for processes to exit. Sep 9 00:30:23.804719 systemd[1]: Started sshd@6-10.0.0.142:22-10.0.0.1:40590.service - OpenSSH per-connection server daemon (10.0.0.1:40590). Sep 9 00:30:23.805489 systemd-logind[1554]: Removed session 6. Sep 9 00:30:23.868181 sshd[1781]: Accepted publickey for core from 10.0.0.1 port 40590 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:30:23.869828 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:30:23.874488 systemd-logind[1554]: New session 7 of user core. Sep 9 00:30:23.889490 systemd[1]: Started session-7.scope - Session 7 of User core. 
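audit-rules.service wraps augenrules(8); after the sudo rm above removed the two rule files from /etc/audit/rules.d/, the reload reports "No rules". Hedged commands to verify (illustrative, not from the boot flow):

    ls /etc/audit/rules.d/    # empty after the rm above
    augenrules --check        # reports whether the rules need regeneration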
Sep 9 00:30:23.942201 sudo[1784]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 00:30:23.942538 sudo[1784]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:30:24.778499 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 00:30:24.868083 (dockerd)[1805]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 00:30:25.458873 dockerd[1805]: time="2025-09-09T00:30:25.458790939Z" level=info msg="Starting up" Sep 9 00:30:25.460733 dockerd[1805]: time="2025-09-09T00:30:25.460707283Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 00:30:26.656432 dockerd[1805]: time="2025-09-09T00:30:26.656302912Z" level=info msg="Loading containers: start." Sep 9 00:30:26.670548 kernel: Initializing XFRM netlink socket Sep 9 00:30:26.980595 systemd-networkd[1506]: docker0: Link UP Sep 9 00:30:26.985571 dockerd[1805]: time="2025-09-09T00:30:26.985512317Z" level=info msg="Loading containers: done." Sep 9 00:30:27.006118 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck238056769-merged.mount: Deactivated successfully. Sep 9 00:30:27.007974 dockerd[1805]: time="2025-09-09T00:30:27.007915321Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 00:30:27.008056 dockerd[1805]: time="2025-09-09T00:30:27.008033392Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 9 00:30:27.008199 dockerd[1805]: time="2025-09-09T00:30:27.008170840Z" level=info msg="Initializing buildkit" Sep 9 00:30:27.068834 dockerd[1805]: time="2025-09-09T00:30:27.068771618Z" level=info msg="Completed buildkit initialization" Sep 9 00:30:27.075752 dockerd[1805]: time="2025-09-09T00:30:27.075693970Z" level=info msg="Daemon has completed initialization" Sep 9 00:30:27.075896 dockerd[1805]: time="2025-09-09T00:30:27.075784039Z" level=info msg="API listen on /run/docker.sock" Sep 9 00:30:27.076154 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 00:30:28.263178 containerd[1582]: time="2025-09-09T00:30:28.263112383Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 9 00:30:29.557476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2720175017.mount: Deactivated successfully. 
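dockerd's "Not using native diff for overlay2" warning is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, the kernel's redirect_dir feature would confuse the native diff path, so the daemon falls back to its slower diff implementation when building images; running containers are unaffected. A hedged check of the active storage driver (illustrative):

    docker info --format '{{.Driver}}'   # expected: overlay2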
Sep 9 00:30:31.671736 containerd[1582]: time="2025-09-09T00:30:31.671654516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:30:31.672479 containerd[1582]: time="2025-09-09T00:30:31.672411746Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=30078664" Sep 9 00:30:31.673767 containerd[1582]: time="2025-09-09T00:30:31.673708248Z" level=info msg="ImageCreate event name:\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:30:31.677726 containerd[1582]: time="2025-09-09T00:30:31.677646914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:30:31.679160 containerd[1582]: time="2025-09-09T00:30:31.679110930Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"30075464\" in 3.415949565s" Sep 9 00:30:31.679220 containerd[1582]: time="2025-09-09T00:30:31.679160663Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\"" Sep 9 00:30:31.679904 containerd[1582]: time="2025-09-09T00:30:31.679854484Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 9 00:30:33.664176 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 00:30:33.666039 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:30:34.262368 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:30:34.282745 (kubelet)[2081]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:30:34.402529 kubelet[2081]: E0909 00:30:34.402466 2081 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:30:34.409793 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:30:34.409997 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:30:34.410498 systemd[1]: kubelet.service: Consumed 376ms CPU time, 110.4M memory peak. 
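kubelet.service runs with a Restart= policy and a rising counter ("restart counter is at 1" above); it will keep cycling until the missing config file appears. The counter is queryable (hedged, illustrative):

    systemctl show kubelet.service -p Restart -p NRestarts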
Sep 9 00:30:35.115578 containerd[1582]: time="2025-09-09T00:30:35.115460281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:30:35.116614 containerd[1582]: time="2025-09-09T00:30:35.116556587Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=26018066" Sep 9 00:30:35.118745 containerd[1582]: time="2025-09-09T00:30:35.118689718Z" level=info msg="ImageCreate event name:\"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:30:35.122678 containerd[1582]: time="2025-09-09T00:30:35.122589912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:30:35.123475 containerd[1582]: time="2025-09-09T00:30:35.123427934Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"27646961\" in 3.443537813s" Sep 9 00:30:35.123475 containerd[1582]: time="2025-09-09T00:30:35.123473168Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\"" Sep 9 00:30:35.124078 containerd[1582]: time="2025-09-09T00:30:35.124044901Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 9 00:30:36.667130 containerd[1582]: time="2025-09-09T00:30:36.667040719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:30:36.693669 containerd[1582]: time="2025-09-09T00:30:36.693623511Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=20153911" Sep 9 00:30:36.731012 containerd[1582]: time="2025-09-09T00:30:36.730933210Z" level=info msg="ImageCreate event name:\"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:30:36.774024 containerd[1582]: time="2025-09-09T00:30:36.773944892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:30:36.774909 containerd[1582]: time="2025-09-09T00:30:36.774878222Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"21782824\" in 1.650801402s" Sep 9 00:30:36.774996 containerd[1582]: time="2025-09-09T00:30:36.774911074Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\"" Sep 9 00:30:36.775617 containerd[1582]: 
time="2025-09-09T00:30:36.775408337Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 9 00:30:37.889177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4179246214.mount: Deactivated successfully. Sep 9 00:30:38.744814 containerd[1582]: time="2025-09-09T00:30:38.744741469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:30:38.745561 containerd[1582]: time="2025-09-09T00:30:38.745499370Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=31899626" Sep 9 00:30:38.746601 containerd[1582]: time="2025-09-09T00:30:38.746550021Z" level=info msg="ImageCreate event name:\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:30:38.751314 containerd[1582]: time="2025-09-09T00:30:38.751200943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:30:38.751877 containerd[1582]: time="2025-09-09T00:30:38.751809053Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"31898645\" in 1.976366984s" Sep 9 00:30:38.751877 containerd[1582]: time="2025-09-09T00:30:38.751864437Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\"" Sep 9 00:30:38.752509 containerd[1582]: time="2025-09-09T00:30:38.752452340Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 9 00:30:39.311503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3606106272.mount: Deactivated successfully. 
Sep 9 00:30:40.894368 containerd[1582]: time="2025-09-09T00:30:40.894275783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:30:40.895464 containerd[1582]: time="2025-09-09T00:30:40.895433835Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 9 00:30:40.897087 containerd[1582]: time="2025-09-09T00:30:40.897034747Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:30:40.899918 containerd[1582]: time="2025-09-09T00:30:40.899882959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:30:40.900966 containerd[1582]: time="2025-09-09T00:30:40.900930704Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.148430434s" Sep 9 00:30:40.901011 containerd[1582]: time="2025-09-09T00:30:40.900964928Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 9 00:30:40.901689 containerd[1582]: time="2025-09-09T00:30:40.901644683Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 00:30:41.732487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2307656882.mount: Deactivated successfully. 
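These image pulls go through containerd's CRI plugin rather than docker. Assuming crictl is installed and pointed at /run/containerd/containerd.sock, equivalent manual pulls would look like this (tags copied from the log):

    crictl pull registry.k8s.io/pause:3.10
    crictl images | grep -E 'pause|coredns'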
Sep 9 00:30:41.739658 containerd[1582]: time="2025-09-09T00:30:41.739603504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:30:41.740653 containerd[1582]: time="2025-09-09T00:30:41.740613108Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 9 00:30:41.742187 containerd[1582]: time="2025-09-09T00:30:41.742144139Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:30:41.744493 containerd[1582]: time="2025-09-09T00:30:41.744426299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:30:41.745076 containerd[1582]: time="2025-09-09T00:30:41.745027066Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 843.337529ms" Sep 9 00:30:41.745140 containerd[1582]: time="2025-09-09T00:30:41.745076278Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 9 00:30:41.745854 containerd[1582]: time="2025-09-09T00:30:41.745610971Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 9 00:30:43.457796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3149147568.mount: Deactivated successfully. Sep 9 00:30:44.414380 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 9 00:30:44.416278 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:30:44.616512 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:30:44.620632 (kubelet)[2177]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:30:45.512851 kubelet[2177]: E0909 00:30:45.452991 2177 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:30:45.457669 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:30:45.457883 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:30:45.458357 systemd[1]: kubelet.service: Consumed 276ms CPU time, 108.6M memory peak. 
Sep 9 00:30:49.892067 containerd[1582]: time="2025-09-09T00:30:49.891986773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:30:49.933273 containerd[1582]: time="2025-09-09T00:30:49.933186187Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58377871" Sep 9 00:30:49.979162 containerd[1582]: time="2025-09-09T00:30:49.979073812Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:30:50.054557 containerd[1582]: time="2025-09-09T00:30:50.054475858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:30:50.055945 containerd[1582]: time="2025-09-09T00:30:50.055884911Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 8.310201895s" Sep 9 00:30:50.055945 containerd[1582]: time="2025-09-09T00:30:50.055930145Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 9 00:30:54.618509 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:30:54.618703 systemd[1]: kubelet.service: Consumed 276ms CPU time, 108.6M memory peak. Sep 9 00:30:54.621347 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:30:54.650997 systemd[1]: Reload requested from client PID 2260 ('systemctl') (unit session-7.scope)... Sep 9 00:30:54.651018 systemd[1]: Reloading... Sep 9 00:30:54.757368 zram_generator::config[2306]: No configuration found. Sep 9 00:30:55.076552 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:30:55.199443 systemd[1]: Reloading finished in 547 ms. Sep 9 00:30:55.270301 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 00:30:55.270423 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 00:30:55.270737 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:30:55.270780 systemd[1]: kubelet.service: Consumed 178ms CPU time, 98.3M memory peak. Sep 9 00:30:55.272641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:30:55.463570 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:30:55.480743 (kubelet)[2351]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:30:55.542853 kubelet[2351]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:30:55.542853 kubelet[2351]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Sep 9 00:30:55.542853 kubelet[2351]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:30:55.542853 kubelet[2351]: I0909 00:30:55.542714 2351 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:30:55.823909 kubelet[2351]: I0909 00:30:55.823724 2351 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 00:30:55.823909 kubelet[2351]: I0909 00:30:55.823778 2351 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:30:55.824217 kubelet[2351]: I0909 00:30:55.824190 2351 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 00:30:55.863084 kubelet[2351]: E0909 00:30:55.862997 2351 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.142:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 9 00:30:55.863876 kubelet[2351]: I0909 00:30:55.863802 2351 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:30:55.871639 kubelet[2351]: I0909 00:30:55.871572 2351 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 00:30:55.879709 kubelet[2351]: I0909 00:30:55.879657 2351 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 00:30:55.880097 kubelet[2351]: I0909 00:30:55.880030 2351 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:30:55.880318 kubelet[2351]: I0909 00:30:55.880077 2351 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 00:30:55.880480 kubelet[2351]: I0909 00:30:55.880329 2351 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:30:55.880480 kubelet[2351]: I0909 00:30:55.880365 2351 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 00:30:55.881352 kubelet[2351]: I0909 00:30:55.881319 2351 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:30:55.883422 kubelet[2351]: I0909 00:30:55.883384 2351 kubelet.go:480] "Attempting to sync node with API server" Sep 9 00:30:55.883467 kubelet[2351]: I0909 00:30:55.883432 2351 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:30:55.883594 kubelet[2351]: I0909 00:30:55.883567 2351 kubelet.go:386] "Adding apiserver pod source" Sep 9 00:30:55.883653 kubelet[2351]: I0909 00:30:55.883634 2351 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:30:55.889709 kubelet[2351]: E0909 00:30:55.889562 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 9 00:30:55.889709 kubelet[2351]: E0909 00:30:55.889644 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 9 00:30:55.890073 
kubelet[2351]: I0909 00:30:55.890045 2351 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 9 00:30:55.890861 kubelet[2351]: I0909 00:30:55.890817 2351 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 00:30:55.892058 kubelet[2351]: W0909 00:30:55.892023 2351 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 00:30:55.895975 kubelet[2351]: I0909 00:30:55.895951 2351 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:30:55.896037 kubelet[2351]: I0909 00:30:55.896006 2351 server.go:1289] "Started kubelet" Sep 9 00:30:55.904388 kubelet[2351]: I0909 00:30:55.903683 2351 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:30:55.905301 kubelet[2351]: I0909 00:30:55.905246 2351 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:30:55.906029 kubelet[2351]: E0909 00:30:55.905968 2351 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:30:55.906188 kubelet[2351]: I0909 00:30:55.906074 2351 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:30:55.907439 kubelet[2351]: I0909 00:30:55.907330 2351 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:30:55.956323 kubelet[2351]: E0909 00:30:55.904958 2351 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.142:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.142:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186375d954d122d6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:30:55.895970518 +0000 UTC m=+0.392830522,LastTimestamp:2025-09-09 00:30:55.895970518 +0000 UTC m=+0.392830522,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 00:30:55.958565 kubelet[2351]: I0909 00:30:55.958298 2351 server.go:317] "Adding debug handlers to kubelet server" Sep 9 00:30:55.960049 kubelet[2351]: E0909 00:30:55.959942 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:30:55.960448 kubelet[2351]: E0909 00:30:55.960404 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="200ms" Sep 9 00:30:55.961153 kubelet[2351]: I0909 00:30:55.961126 2351 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:30:55.961608 kubelet[2351]: I0909 00:30:55.961574 2351 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:30:55.961809 kubelet[2351]: I0909 00:30:55.961786 2351 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:30:55.962324 kubelet[2351]: 
E0909 00:30:55.962290 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 00:30:55.963520 kubelet[2351]: I0909 00:30:55.963497 2351 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:30:55.964637 kubelet[2351]: I0909 00:30:55.964598 2351 factory.go:223] Registration of the containerd container factory successfully Sep 9 00:30:55.964637 kubelet[2351]: I0909 00:30:55.964627 2351 factory.go:223] Registration of the systemd container factory successfully Sep 9 00:30:55.964816 kubelet[2351]: I0909 00:30:55.964786 2351 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:30:55.989468 kubelet[2351]: I0909 00:30:55.989419 2351 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:30:55.989468 kubelet[2351]: I0909 00:30:55.989455 2351 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:30:55.989626 kubelet[2351]: I0909 00:30:55.989482 2351 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:30:55.992303 kubelet[2351]: I0909 00:30:55.992241 2351 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 00:30:55.993864 kubelet[2351]: I0909 00:30:55.993819 2351 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 9 00:30:55.993909 kubelet[2351]: I0909 00:30:55.993889 2351 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 00:30:55.993956 kubelet[2351]: I0909 00:30:55.993941 2351 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
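[Annotation] Every reflector, lease, CSR, and event failure in this stretch is the same symptom: nothing is listening on 10.0.0.142:6443 yet, because the kube-apiserver this kubelet is about to launch is itself one of the static pods created below. A self-contained probe showing what "dial tcp ... connect: connection refused" means at the socket level (address copied from the log; the 2s timeout is arbitrary):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "10.0.0.142:6443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not up yet:", err) // e.g. connect: connection refused
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }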
Sep 9 00:30:55.993985 kubelet[2351]: I0909 00:30:55.993961 2351 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 00:30:55.994166 kubelet[2351]: E0909 00:30:55.994041 2351 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:30:55.994666 kubelet[2351]: E0909 00:30:55.994623 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 00:30:56.060302 kubelet[2351]: E0909 00:30:56.060240 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:30:56.094914 kubelet[2351]: E0909 00:30:56.094738 2351 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 00:30:56.161153 kubelet[2351]: E0909 00:30:56.161083 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:30:56.161694 kubelet[2351]: E0909 00:30:56.161660 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="400ms" Sep 9 00:30:56.262168 kubelet[2351]: E0909 00:30:56.262099 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:30:56.295461 kubelet[2351]: E0909 00:30:56.295399 2351 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 00:30:56.363256 kubelet[2351]: E0909 00:30:56.363120 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:30:56.391964 kubelet[2351]: I0909 00:30:56.391895 2351 policy_none.go:49] "None policy: Start" Sep 9 00:30:56.391964 kubelet[2351]: I0909 00:30:56.391953 2351 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:30:56.391964 kubelet[2351]: I0909 00:30:56.391982 2351 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:30:56.400923 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 00:30:56.413819 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 00:30:56.417862 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
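[Annotation] The three slices just created mirror the pod QoS classes, and their nesting is encoded in the slice names themselves: in systemd each '-' adds one level, so kubepods-burstable.slice lives under kubepods.slice in the cgroup tree. A sketch of that name-to-path rule (a general systemd slice convention, not kubelet-specific code):

    package main

    import (
        "fmt"
        "strings"
    )

    // slicePath expands "kubepods-burstable.slice" to
    // "kubepods.slice/kubepods-burstable.slice", following systemd's
    // dash-separated slice hierarchy.
    func slicePath(slice string) string {
        name := strings.TrimSuffix(slice, ".slice")
        parts := strings.Split(name, "-")
        var path []string
        for i := range parts {
            path = append(path, strings.Join(parts[:i+1], "-")+".slice")
        }
        return strings.Join(path, "/")
    }

    func main() {
        fmt.Println(slicePath("kubepods-burstable.slice"))
        // kubepods.slice/kubepods-burstable.slice
    }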
Sep 9 00:30:56.440816 kubelet[2351]: E0909 00:30:56.440761 2351 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 00:30:56.441182 kubelet[2351]: I0909 00:30:56.441163 2351 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:30:56.441259 kubelet[2351]: I0909 00:30:56.441187 2351 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:30:56.441634 kubelet[2351]: I0909 00:30:56.441609 2351 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:30:56.442767 kubelet[2351]: E0909 00:30:56.442725 2351 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 00:30:56.442830 kubelet[2351]: E0909 00:30:56.442796 2351 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 00:30:56.543196 kubelet[2351]: I0909 00:30:56.543152 2351 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:30:56.544067 kubelet[2351]: E0909 00:30:56.544004 2351 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost" Sep 9 00:30:56.562822 kubelet[2351]: E0909 00:30:56.562776 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="800ms" Sep 9 00:30:56.715901 systemd[1]: Created slice kubepods-burstable-podbd75bd142ccdf0d541ed78b66f65767a.slice - libcontainer container kubepods-burstable-podbd75bd142ccdf0d541ed78b66f65767a.slice. Sep 9 00:30:56.729829 kubelet[2351]: E0909 00:30:56.729780 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:30:56.732505 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. Sep 9 00:30:56.746186 kubelet[2351]: I0909 00:30:56.746148 2351 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:30:56.746691 kubelet[2351]: E0909 00:30:56.746637 2351 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost" Sep 9 00:30:56.749548 kubelet[2351]: E0909 00:30:56.749508 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:30:56.752262 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. 
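[Annotation] The eviction manager starting its control loop above enforces the HardEvictionThresholds that were logged as JSON when the container manager was created: memory.available < 100Mi, nodefs.available < 10%, imagefs.available < 15%, inode signals at 5%. A sketch decoding a trimmed copy of that logged fragment; the struct names are local to this sketch:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type threshold struct {
        Signal   string
        Operator string
        Value    struct {
            Quantity   *string
            Percentage float64
        }
    }

    func main() {
        // Trimmed from the nodeConfig JSON in the log above.
        raw := `[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
                 {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
                 {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}}]`
        var ts []threshold
        if err := json.Unmarshal([]byte(raw), &ts); err != nil {
            panic(err)
        }
        for _, t := range ts {
            if t.Value.Quantity != nil {
                fmt.Printf("%s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
            } else {
                fmt.Printf("%s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
            }
        }
    }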
Sep 9 00:30:56.754833 kubelet[2351]: E0909 00:30:56.754787 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:30:56.766298 kubelet[2351]: I0909 00:30:56.766162 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bd75bd142ccdf0d541ed78b66f65767a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bd75bd142ccdf0d541ed78b66f65767a\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:30:56.766493 kubelet[2351]: I0909 00:30:56.766438 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bd75bd142ccdf0d541ed78b66f65767a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bd75bd142ccdf0d541ed78b66f65767a\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:30:56.766550 kubelet[2351]: I0909 00:30:56.766494 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bd75bd142ccdf0d541ed78b66f65767a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bd75bd142ccdf0d541ed78b66f65767a\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:30:56.766654 kubelet[2351]: I0909 00:30:56.766587 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:30:56.766729 kubelet[2351]: I0909 00:30:56.766698 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:30:56.766764 kubelet[2351]: I0909 00:30:56.766755 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:30:56.766789 kubelet[2351]: I0909 00:30:56.766773 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:30:56.766815 kubelet[2351]: I0909 00:30:56.766793 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:30:56.766855 kubelet[2351]: I0909 00:30:56.766812 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:30:56.790179 kubelet[2351]: E0909 00:30:56.790115 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 9 00:30:56.836331 kubelet[2351]: E0909 00:30:56.836232 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 00:30:57.031471 kubelet[2351]: E0909 00:30:57.031264 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:30:57.032548 containerd[1582]: time="2025-09-09T00:30:57.032472160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bd75bd142ccdf0d541ed78b66f65767a,Namespace:kube-system,Attempt:0,}" Sep 9 00:30:57.050814 kubelet[2351]: E0909 00:30:57.050745 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:30:57.051637 containerd[1582]: time="2025-09-09T00:30:57.051569124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 9 00:30:57.055895 kubelet[2351]: E0909 00:30:57.055859 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:30:57.056387 containerd[1582]: time="2025-09-09T00:30:57.056318850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 9 00:30:57.148749 kubelet[2351]: I0909 00:30:57.148657 2351 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:30:57.150481 kubelet[2351]: E0909 00:30:57.150444 2351 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost" Sep 9 00:30:57.157158 kubelet[2351]: E0909 00:30:57.157097 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 00:30:57.237133 kubelet[2351]: E0909 00:30:57.237063 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection 
refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 9 00:30:57.364471 kubelet[2351]: E0909 00:30:57.364219 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="1.6s" Sep 9 00:30:57.953384 kubelet[2351]: I0909 00:30:57.953322 2351 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:30:57.954020 kubelet[2351]: E0909 00:30:57.953960 2351 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost" Sep 9 00:30:57.990170 kubelet[2351]: E0909 00:30:57.990073 2351 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.142:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 9 00:30:58.932525 kubelet[2351]: E0909 00:30:58.932466 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 00:30:58.965609 kubelet[2351]: E0909 00:30:58.965554 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="3.2s" Sep 9 00:30:59.509997 kubelet[2351]: E0909 00:30:59.509898 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 00:30:59.536759 kubelet[2351]: E0909 00:30:59.536707 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.142:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 9 00:30:59.556156 kubelet[2351]: I0909 00:30:59.556125 2351 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:30:59.556639 kubelet[2351]: E0909 00:30:59.556602 2351 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost" Sep 9 00:30:59.681452 kubelet[2351]: E0909 00:30:59.681297 2351 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.142:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.142:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186375d954d122d6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:30:55.895970518 +0000 UTC m=+0.392830522,LastTimestamp:2025-09-09 00:30:55.895970518 +0000 UTC m=+0.392830522,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 00:31:00.330005 kubelet[2351]: E0909 00:31:00.329943 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 9 00:31:01.787468 containerd[1582]: time="2025-09-09T00:31:01.787414431Z" level=info msg="connecting to shim 06ceb856b319d08ef6ee1e0c388491d514af23dc673f922165bcd5ae45608909" address="unix:///run/containerd/s/fc1c416244a58c1a3d3cfb558d8ae8245bd05938b31f6818c71e4e0ac52609f7" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:31:01.894546 systemd[1]: Started cri-containerd-06ceb856b319d08ef6ee1e0c388491d514af23dc673f922165bcd5ae45608909.scope - libcontainer container 06ceb856b319d08ef6ee1e0c388491d514af23dc673f922165bcd5ae45608909. Sep 9 00:31:02.005530 kubelet[2351]: E0909 00:31:02.005465 2351 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.142:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 9 00:31:02.167079 kubelet[2351]: E0909 00:31:02.167023 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="6.4s" Sep 9 00:31:02.537386 containerd[1582]: time="2025-09-09T00:31:02.537259007Z" level=info msg="connecting to shim d4e2d55eaaa9242a2d49c69c634a8c3b4e6de6f22ec99c49c367158488faa35c" address="unix:///run/containerd/s/8ac056aa117a0012509fb7e18cdc0af406e455963b2811febc6fa3d63ebf136f" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:31:02.566488 systemd[1]: Started cri-containerd-d4e2d55eaaa9242a2d49c69c634a8c3b4e6de6f22ec99c49c367158488faa35c.scope - libcontainer container d4e2d55eaaa9242a2d49c69c634a8c3b4e6de6f22ec99c49c367158488faa35c. Sep 9 00:31:02.679140 containerd[1582]: time="2025-09-09T00:31:02.679081620Z" level=info msg="connecting to shim c5f917ba05d603ce967e12c17ed5784a261b26d992817b508d4fd6ad392c28eb" address="unix:///run/containerd/s/c84644d45ce48d0f942b8ce406c644a7345395ce861271458c4f4a873678bc2c" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:31:02.704501 systemd[1]: Started cri-containerd-c5f917ba05d603ce967e12c17ed5784a261b26d992817b508d4fd6ad392c28eb.scope - libcontainer container c5f917ba05d603ce967e12c17ed5784a261b26d992817b508d4fd6ad392c28eb. 
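[Annotation] Note the "Failed to ensure lease exists, will retry" intervals across this stretch: 200ms, 400ms, 800ms, 1.6s, 3.2s, and now 6.4s, i.e. a doubling backoff. A minimal reproduction of that schedule; the 200ms base is read directly off the log, while the 7s ceiling is an assumption matching client-go's usual node-lease backoff:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        interval := 200 * time.Millisecond  // first retry interval in the log
        const maxInterval = 7 * time.Second // assumed cap (client-go default)
        for i := 0; i < 7; i++ {
            fmt.Println(interval) // 200ms 400ms 800ms 1.6s 3.2s 6.4s 7s
            interval *= 2
            if interval > maxInterval {
                interval = maxInterval
            }
        }
    }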
Sep 9 00:31:02.726168 kubelet[2351]: E0909 00:31:02.726104 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.142:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 00:31:02.744102 containerd[1582]: time="2025-09-09T00:31:02.743497056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bd75bd142ccdf0d541ed78b66f65767a,Namespace:kube-system,Attempt:0,} returns sandbox id \"06ceb856b319d08ef6ee1e0c388491d514af23dc673f922165bcd5ae45608909\"" Sep 9 00:31:02.745464 kubelet[2351]: E0909 00:31:02.745420 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:02.763600 kubelet[2351]: I0909 00:31:02.758072 2351 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:31:02.763600 kubelet[2351]: E0909 00:31:02.758579 2351 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost" Sep 9 00:31:02.778138 containerd[1582]: time="2025-09-09T00:31:02.778048798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4e2d55eaaa9242a2d49c69c634a8c3b4e6de6f22ec99c49c367158488faa35c\"" Sep 9 00:31:02.779050 kubelet[2351]: E0909 00:31:02.779020 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:02.780715 containerd[1582]: time="2025-09-09T00:31:02.780669118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5f917ba05d603ce967e12c17ed5784a261b26d992817b508d4fd6ad392c28eb\"" Sep 9 00:31:02.780856 containerd[1582]: time="2025-09-09T00:31:02.780688033Z" level=info msg="CreateContainer within sandbox \"06ceb856b319d08ef6ee1e0c388491d514af23dc673f922165bcd5ae45608909\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 00:31:02.781573 kubelet[2351]: E0909 00:31:02.781548 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:02.785055 containerd[1582]: time="2025-09-09T00:31:02.785023950Z" level=info msg="CreateContainer within sandbox \"d4e2d55eaaa9242a2d49c69c634a8c3b4e6de6f22ec99c49c367158488faa35c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 00:31:02.789092 containerd[1582]: time="2025-09-09T00:31:02.788954785Z" level=info msg="CreateContainer within sandbox \"c5f917ba05d603ce967e12c17ed5784a261b26d992817b508d4fd6ad392c28eb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 00:31:02.800201 containerd[1582]: time="2025-09-09T00:31:02.800145795Z" level=info msg="Container f4f445386443e3a9aa7858b23d7fcdf14c2e23393efabc6209ab3396f2347929: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:31:02.804953 containerd[1582]: time="2025-09-09T00:31:02.804913462Z" level=info 
msg="Container 172b271d5d3eddd7a86056696d646ad54e5d80aa79f0b18d989aca6dad2d4021: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:31:02.813109 containerd[1582]: time="2025-09-09T00:31:02.812985944Z" level=info msg="Container 4491008fab3d6358edb2ab8c16ae1a5578da941dfe55c0056164a1e69fc1b4c3: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:31:02.817424 containerd[1582]: time="2025-09-09T00:31:02.817372667Z" level=info msg="CreateContainer within sandbox \"06ceb856b319d08ef6ee1e0c388491d514af23dc673f922165bcd5ae45608909\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f4f445386443e3a9aa7858b23d7fcdf14c2e23393efabc6209ab3396f2347929\"" Sep 9 00:31:02.818137 containerd[1582]: time="2025-09-09T00:31:02.818101745Z" level=info msg="StartContainer for \"f4f445386443e3a9aa7858b23d7fcdf14c2e23393efabc6209ab3396f2347929\"" Sep 9 00:31:02.819240 containerd[1582]: time="2025-09-09T00:31:02.819196900Z" level=info msg="connecting to shim f4f445386443e3a9aa7858b23d7fcdf14c2e23393efabc6209ab3396f2347929" address="unix:///run/containerd/s/fc1c416244a58c1a3d3cfb558d8ae8245bd05938b31f6818c71e4e0ac52609f7" protocol=ttrpc version=3 Sep 9 00:31:02.823165 containerd[1582]: time="2025-09-09T00:31:02.823110182Z" level=info msg="CreateContainer within sandbox \"d4e2d55eaaa9242a2d49c69c634a8c3b4e6de6f22ec99c49c367158488faa35c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"172b271d5d3eddd7a86056696d646ad54e5d80aa79f0b18d989aca6dad2d4021\"" Sep 9 00:31:02.823860 containerd[1582]: time="2025-09-09T00:31:02.823807920Z" level=info msg="StartContainer for \"172b271d5d3eddd7a86056696d646ad54e5d80aa79f0b18d989aca6dad2d4021\"" Sep 9 00:31:02.825016 containerd[1582]: time="2025-09-09T00:31:02.824987085Z" level=info msg="connecting to shim 172b271d5d3eddd7a86056696d646ad54e5d80aa79f0b18d989aca6dad2d4021" address="unix:///run/containerd/s/8ac056aa117a0012509fb7e18cdc0af406e455963b2811febc6fa3d63ebf136f" protocol=ttrpc version=3 Sep 9 00:31:02.839481 systemd[1]: Started cri-containerd-f4f445386443e3a9aa7858b23d7fcdf14c2e23393efabc6209ab3396f2347929.scope - libcontainer container f4f445386443e3a9aa7858b23d7fcdf14c2e23393efabc6209ab3396f2347929. Sep 9 00:31:02.844688 systemd[1]: Started cri-containerd-172b271d5d3eddd7a86056696d646ad54e5d80aa79f0b18d989aca6dad2d4021.scope - libcontainer container 172b271d5d3eddd7a86056696d646ad54e5d80aa79f0b18d989aca6dad2d4021. 
Sep 9 00:31:02.968283 containerd[1582]: time="2025-09-09T00:31:02.968211688Z" level=info msg="CreateContainer within sandbox \"c5f917ba05d603ce967e12c17ed5784a261b26d992817b508d4fd6ad392c28eb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4491008fab3d6358edb2ab8c16ae1a5578da941dfe55c0056164a1e69fc1b4c3\"" Sep 9 00:31:02.969937 containerd[1582]: time="2025-09-09T00:31:02.969813298Z" level=info msg="StartContainer for \"4491008fab3d6358edb2ab8c16ae1a5578da941dfe55c0056164a1e69fc1b4c3\"" Sep 9 00:31:02.971593 containerd[1582]: time="2025-09-09T00:31:02.971552649Z" level=info msg="StartContainer for \"172b271d5d3eddd7a86056696d646ad54e5d80aa79f0b18d989aca6dad2d4021\" returns successfully" Sep 9 00:31:02.971920 containerd[1582]: time="2025-09-09T00:31:02.971853452Z" level=info msg="StartContainer for \"f4f445386443e3a9aa7858b23d7fcdf14c2e23393efabc6209ab3396f2347929\" returns successfully" Sep 9 00:31:02.973095 containerd[1582]: time="2025-09-09T00:31:02.972444317Z" level=info msg="connecting to shim 4491008fab3d6358edb2ab8c16ae1a5578da941dfe55c0056164a1e69fc1b4c3" address="unix:///run/containerd/s/c84644d45ce48d0f942b8ce406c644a7345395ce861271458c4f4a873678bc2c" protocol=ttrpc version=3 Sep 9 00:31:03.001746 systemd[1]: Started cri-containerd-4491008fab3d6358edb2ab8c16ae1a5578da941dfe55c0056164a1e69fc1b4c3.scope - libcontainer container 4491008fab3d6358edb2ab8c16ae1a5578da941dfe55c0056164a1e69fc1b4c3. Sep 9 00:31:03.015813 kubelet[2351]: E0909 00:31:03.015772 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:31:03.016278 kubelet[2351]: E0909 00:31:03.016071 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:03.019355 kubelet[2351]: E0909 00:31:03.018675 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:31:03.019355 kubelet[2351]: E0909 00:31:03.018782 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:03.150646 containerd[1582]: time="2025-09-09T00:31:03.150033578Z" level=info msg="StartContainer for \"4491008fab3d6358edb2ab8c16ae1a5578da941dfe55c0056164a1e69fc1b4c3\" returns successfully" Sep 9 00:31:03.781524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1126042290.mount: Deactivated successfully. 
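[Annotation] The recurring "Nameserver limits exceeded" warning reflects glibc's three-nameserver resolver limit (MAXNS=3): kubelet truncates the host's resolver list and logs the line it actually applied ("1.1.1.1 1.0.0.1 8.8.8.8"). A sketch of that truncation rule; the fourth server in the example is hypothetical, since the omitted entry is not shown in the log:

    package main

    import (
        "fmt"
        "strings"
    )

    const maxNS = 3 // glibc resolver limit (MAXNS)

    func applyNameservers(ns []string) []string {
        if len(ns) > maxNS {
            return ns[:maxNS] // the remainder is "omitted", as the log says
        }
        return ns
    }

    func main() {
        // "9.9.9.9" stands in for the omitted entry, which isn't logged.
        got := applyNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
        fmt.Println("applied nameserver line is:", strings.Join(got, " "))
    }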
Sep 9 00:31:04.026510 kubelet[2351]: E0909 00:31:04.026475 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:31:04.026930 kubelet[2351]: E0909 00:31:04.026640 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:04.027783 kubelet[2351]: E0909 00:31:04.027760 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:31:04.027907 kubelet[2351]: E0909 00:31:04.027888 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:04.713259 update_engine[1562]: I20250909 00:31:04.713142 1562 update_attempter.cc:509] Updating boot flags... Sep 9 00:31:04.889166 kubelet[2351]: E0909 00:31:04.889124 2351 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 9 00:31:04.894398 kubelet[2351]: I0909 00:31:04.894371 2351 apiserver.go:52] "Watching apiserver" Sep 9 00:31:04.964663 kubelet[2351]: I0909 00:31:04.964544 2351 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 00:31:05.031447 kubelet[2351]: E0909 00:31:05.031291 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:31:05.035578 kubelet[2351]: E0909 00:31:05.035401 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:05.248634 kubelet[2351]: E0909 00:31:05.248496 2351 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 9 00:31:05.696560 kubelet[2351]: E0909 00:31:05.696512 2351 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 9 00:31:06.031791 kubelet[2351]: E0909 00:31:06.031661 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:31:06.032229 kubelet[2351]: E0909 00:31:06.031851 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:06.443512 kubelet[2351]: E0909 00:31:06.443479 2351 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 00:31:06.805650 kubelet[2351]: E0909 00:31:06.805480 2351 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 9 00:31:07.033392 kubelet[2351]: E0909 00:31:07.033322 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:31:07.033810 kubelet[2351]: E0909 00:31:07.033500 2351 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:07.398287 kubelet[2351]: E0909 00:31:07.398228 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:31:07.398560 kubelet[2351]: E0909 00:31:07.398427 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:07.859780 kubelet[2351]: E0909 00:31:07.859747 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:31:07.859947 kubelet[2351]: E0909 00:31:07.859896 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:08.571053 kubelet[2351]: E0909 00:31:08.570990 2351 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 9 00:31:09.160382 kubelet[2351]: I0909 00:31:09.160323 2351 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:31:09.167991 kubelet[2351]: I0909 00:31:09.167949 2351 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 00:31:09.260217 kubelet[2351]: I0909 00:31:09.260164 2351 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:31:09.269926 kubelet[2351]: I0909 00:31:09.269885 2351 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:31:09.271086 kubelet[2351]: E0909 00:31:09.270665 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:09.274250 kubelet[2351]: I0909 00:31:09.274215 2351 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:31:09.275795 kubelet[2351]: E0909 00:31:09.274522 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:09.278823 kubelet[2351]: E0909 00:31:09.278798 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:09.314714 systemd[1]: Reload requested from client PID 2651 ('systemctl') (unit session-7.scope)... Sep 9 00:31:09.314729 systemd[1]: Reloading... Sep 9 00:31:09.418402 zram_generator::config[2700]: No configuration found. Sep 9 00:31:09.510147 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:31:09.677447 systemd[1]: Reloading finished in 362 ms. Sep 9 00:31:09.711404 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:31:09.724934 systemd[1]: kubelet.service: Deactivated successfully. 
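[Annotation] The docker.socket warning during this reload is systemd flagging the legacy /var/run/ prefix; it can rewrite /var/run/docker.sock to /run/docker.sock on the fly because /var/run is a symlink into /run on systemd-based images like this one. A self-contained check of that layout (the exact link target is an assumption, typically "../run" or "/run"):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        target, err := os.Readlink("/var/run")
        if err != nil {
            fmt.Println("/var/run is not a symlink:", err)
            return
        }
        fmt.Println("/var/run ->", target) // typically "../run" or "/run"
    }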
Sep 9 00:31:09.725323 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:31:09.725409 systemd[1]: kubelet.service: Consumed 1.308s CPU time, 133.3M memory peak. Sep 9 00:31:09.730925 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:31:10.030365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:31:10.055643 (kubelet)[2739]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:31:10.207658 kubelet[2739]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:31:10.207658 kubelet[2739]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 00:31:10.210962 kubelet[2739]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:31:10.210962 kubelet[2739]: I0909 00:31:10.209881 2739 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:31:10.224410 kubelet[2739]: I0909 00:31:10.224318 2739 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 00:31:10.224410 kubelet[2739]: I0909 00:31:10.224375 2739 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:31:10.224766 kubelet[2739]: I0909 00:31:10.224694 2739 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 00:31:10.226606 kubelet[2739]: I0909 00:31:10.226439 2739 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 9 00:31:10.240909 kubelet[2739]: I0909 00:31:10.239606 2739 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:31:10.256053 kubelet[2739]: I0909 00:31:10.256015 2739 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 00:31:10.266575 kubelet[2739]: I0909 00:31:10.266512 2739 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 00:31:10.267638 kubelet[2739]: I0909 00:31:10.267588 2739 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:31:10.268375 kubelet[2739]: I0909 00:31:10.267631 2739 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 00:31:10.268375 kubelet[2739]: I0909 00:31:10.268057 2739 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:31:10.268375 kubelet[2739]: I0909 00:31:10.268071 2739 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 00:31:10.268375 kubelet[2739]: I0909 00:31:10.268133 2739 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:31:10.268577 kubelet[2739]: I0909 00:31:10.268452 2739 kubelet.go:480] "Attempting to sync node with API server" Sep 9 00:31:10.268577 kubelet[2739]: I0909 00:31:10.268469 2739 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:31:10.268577 kubelet[2739]: I0909 00:31:10.268513 2739 kubelet.go:386] "Adding apiserver pod source" Sep 9 00:31:10.268577 kubelet[2739]: I0909 00:31:10.268534 2739 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:31:10.280676 kubelet[2739]: I0909 00:31:10.278666 2739 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 9 00:31:10.280676 kubelet[2739]: I0909 00:31:10.280210 2739 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 00:31:10.292652 kubelet[2739]: I0909 00:31:10.290205 2739 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:31:10.292652 kubelet[2739]: I0909 00:31:10.290303 2739 server.go:1289] "Started kubelet" Sep 9 00:31:10.305895 kubelet[2739]: I0909 00:31:10.304802 2739 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:31:10.307374 kubelet[2739]: 
I0909 00:31:10.307299 2739 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:31:10.319285 kubelet[2739]: I0909 00:31:10.317940 2739 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:31:10.326457 kubelet[2739]: I0909 00:31:10.322313 2739 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:31:10.326457 kubelet[2739]: I0909 00:31:10.322429 2739 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:31:10.327489 kubelet[2739]: I0909 00:31:10.327467 2739 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:31:10.330854 kubelet[2739]: I0909 00:31:10.328266 2739 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:31:10.331370 kubelet[2739]: I0909 00:31:10.331332 2739 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:31:10.331914 kubelet[2739]: I0909 00:31:10.331890 2739 server.go:317] "Adding debug handlers to kubelet server" Sep 9 00:31:10.333068 kubelet[2739]: I0909 00:31:10.333048 2739 factory.go:223] Registration of the systemd container factory successfully Sep 9 00:31:10.333272 kubelet[2739]: I0909 00:31:10.333248 2739 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:31:10.339582 kubelet[2739]: E0909 00:31:10.339544 2739 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:31:10.344612 kubelet[2739]: I0909 00:31:10.343987 2739 factory.go:223] Registration of the containerd container factory successfully Sep 9 00:31:10.377377 kubelet[2739]: I0909 00:31:10.370378 2739 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 00:31:10.377377 kubelet[2739]: I0909 00:31:10.373136 2739 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 9 00:31:10.377377 kubelet[2739]: I0909 00:31:10.373166 2739 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 00:31:10.377377 kubelet[2739]: I0909 00:31:10.373194 2739 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
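[Annotation] The podresources endpoint above is started with qps=100 and burstTokens=10. A minimal token-bucket sketch using those two logged numbers; it is illustrative only, and kubelet's actual limiter implementation is not claimed here:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const qps, burst = 100, 10 // values from the ratelimit.go line above
        tokens := float64(burst)
        last := time.Now()
        allow := func() bool {
            now := time.Now()
            tokens += now.Sub(last).Seconds() * qps // refill at qps tokens/sec
            last = now
            if tokens > burst {
                tokens = burst // burst caps how far the bucket can fill
            }
            if tokens < 1 {
                return false
            }
            tokens--
            return true
        }
        granted := 0
        for i := 0; i < 1000; i++ {
            if allow() {
                granted++
            }
        }
        fmt.Println("granted in a tight loop:", granted) // ≈ burst (10)
    }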
Sep 9 00:31:10.377377 kubelet[2739]: I0909 00:31:10.373205 2739 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 00:31:10.377377 kubelet[2739]: E0909 00:31:10.373273 2739 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:31:10.467452 kubelet[2739]: I0909 00:31:10.467410 2739 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:31:10.467452 kubelet[2739]: I0909 00:31:10.467438 2739 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:31:10.467452 kubelet[2739]: I0909 00:31:10.467463 2739 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:31:10.467703 kubelet[2739]: I0909 00:31:10.467674 2739 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 00:31:10.467745 kubelet[2739]: I0909 00:31:10.467694 2739 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 00:31:10.467745 kubelet[2739]: I0909 00:31:10.467716 2739 policy_none.go:49] "None policy: Start" Sep 9 00:31:10.467745 kubelet[2739]: I0909 00:31:10.467728 2739 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:31:10.467745 kubelet[2739]: I0909 00:31:10.467741 2739 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:31:10.468964 kubelet[2739]: I0909 00:31:10.467892 2739 state_mem.go:75] "Updated machine memory state" Sep 9 00:31:10.473693 kubelet[2739]: E0909 00:31:10.473556 2739 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 00:31:10.479054 kubelet[2739]: E0909 00:31:10.478777 2739 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 00:31:10.480874 kubelet[2739]: I0909 00:31:10.479327 2739 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:31:10.480874 kubelet[2739]: I0909 00:31:10.479733 2739 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:31:10.480874 kubelet[2739]: I0909 00:31:10.480679 2739 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:31:10.483367 kubelet[2739]: E0909 00:31:10.482704 2739 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 9 00:31:10.596026 kubelet[2739]: I0909 00:31:10.595871 2739 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:31:10.612925 kubelet[2739]: I0909 00:31:10.611854 2739 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 9 00:31:10.612925 kubelet[2739]: I0909 00:31:10.611975 2739 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 00:31:10.680982 kubelet[2739]: I0909 00:31:10.679725 2739 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:31:10.680982 kubelet[2739]: I0909 00:31:10.679850 2739 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:31:10.680982 kubelet[2739]: I0909 00:31:10.680627 2739 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:31:10.711623 kubelet[2739]: E0909 00:31:10.710891 2739 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 00:31:10.718401 kubelet[2739]: E0909 00:31:10.717760 2739 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 00:31:10.718401 kubelet[2739]: E0909 00:31:10.717956 2739 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:31:10.732627 kubelet[2739]: I0909 00:31:10.732115 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bd75bd142ccdf0d541ed78b66f65767a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bd75bd142ccdf0d541ed78b66f65767a\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:31:10.732627 kubelet[2739]: I0909 00:31:10.732190 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bd75bd142ccdf0d541ed78b66f65767a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bd75bd142ccdf0d541ed78b66f65767a\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:31:10.732627 kubelet[2739]: I0909 00:31:10.732218 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:31:10.732627 kubelet[2739]: I0909 00:31:10.732252 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:31:10.732627 kubelet[2739]: I0909 00:31:10.732296 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 9 00:31:10.732939 kubelet[2739]: I0909 00:31:10.732325 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:31:10.732939 kubelet[2739]: I0909 00:31:10.732375 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bd75bd142ccdf0d541ed78b66f65767a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bd75bd142ccdf0d541ed78b66f65767a\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:31:10.732939 kubelet[2739]: I0909 00:31:10.732395 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:31:10.732939 kubelet[2739]: I0909 00:31:10.732414 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:31:11.011693 kubelet[2739]: E0909 00:31:11.011614 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:11.018765 kubelet[2739]: E0909 00:31:11.018527 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:11.018765 kubelet[2739]: E0909 00:31:11.018649 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:11.270135 kubelet[2739]: I0909 00:31:11.269855 2739 apiserver.go:52] "Watching apiserver" Sep 9 00:31:11.328415 kubelet[2739]: I0909 00:31:11.328361 2739 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 00:31:11.419432 kubelet[2739]: I0909 00:31:11.419049 2739 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:31:11.419432 kubelet[2739]: I0909 00:31:11.419381 2739 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:31:11.420160 kubelet[2739]: I0909 00:31:11.420139 2739 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:31:11.434855 kubelet[2739]: E0909 00:31:11.434715 2739 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 00:31:11.435275 kubelet[2739]: E0909 00:31:11.435214 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Sep 9 00:31:11.440460 kubelet[2739]: E0909 00:31:11.440411 2739 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:31:11.440650 kubelet[2739]: E0909 00:31:11.440602 2739 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 00:31:11.440819 kubelet[2739]: E0909 00:31:11.440781 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:11.441059 kubelet[2739]: E0909 00:31:11.440997 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:11.802387 kubelet[2739]: I0909 00:31:11.801936 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.801901358 podStartE2EDuration="2.801901358s" podCreationTimestamp="2025-09-09 00:31:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:31:11.47385016 +0000 UTC m=+1.396038344" watchObservedRunningTime="2025-09-09 00:31:11.801901358 +0000 UTC m=+1.724089542" Sep 9 00:31:11.802387 kubelet[2739]: I0909 00:31:11.802121 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.802109191 podStartE2EDuration="2.802109191s" podCreationTimestamp="2025-09-09 00:31:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:31:11.797743944 +0000 UTC m=+1.719932139" watchObservedRunningTime="2025-09-09 00:31:11.802109191 +0000 UTC m=+1.724297376" Sep 9 00:31:12.253299 kubelet[2739]: I0909 00:31:12.252961 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.252938107 podStartE2EDuration="3.252938107s" podCreationTimestamp="2025-09-09 00:31:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:31:11.952263765 +0000 UTC m=+1.874451949" watchObservedRunningTime="2025-09-09 00:31:12.252938107 +0000 UTC m=+2.175126291" Sep 9 00:31:12.422753 kubelet[2739]: E0909 00:31:12.422687 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:12.423731 kubelet[2739]: E0909 00:31:12.423314 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:12.423731 kubelet[2739]: E0909 00:31:12.423558 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:13.274890 kubelet[2739]: I0909 00:31:13.274816 2739 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 00:31:13.275163 containerd[1582]: time="2025-09-09T00:31:13.275121728Z" level=info 
msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 00:31:13.275624 kubelet[2739]: I0909 00:31:13.275267 2739 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 00:31:13.424205 kubelet[2739]: E0909 00:31:13.424152 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:13.879476 systemd[1]: Created slice kubepods-besteffort-pod6d76f35d_8c59_4a51_b351_26505f6c7e9c.slice - libcontainer container kubepods-besteffort-pod6d76f35d_8c59_4a51_b351_26505f6c7e9c.slice. Sep 9 00:31:13.962444 kubelet[2739]: I0909 00:31:13.962365 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d76f35d-8c59-4a51-b351-26505f6c7e9c-lib-modules\") pod \"kube-proxy-54764\" (UID: \"6d76f35d-8c59-4a51-b351-26505f6c7e9c\") " pod="kube-system/kube-proxy-54764" Sep 9 00:31:13.962444 kubelet[2739]: I0909 00:31:13.962437 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wfzg\" (UniqueName: \"kubernetes.io/projected/6d76f35d-8c59-4a51-b351-26505f6c7e9c-kube-api-access-8wfzg\") pod \"kube-proxy-54764\" (UID: \"6d76f35d-8c59-4a51-b351-26505f6c7e9c\") " pod="kube-system/kube-proxy-54764" Sep 9 00:31:13.962654 kubelet[2739]: I0909 00:31:13.962465 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6d76f35d-8c59-4a51-b351-26505f6c7e9c-kube-proxy\") pod \"kube-proxy-54764\" (UID: \"6d76f35d-8c59-4a51-b351-26505f6c7e9c\") " pod="kube-system/kube-proxy-54764" Sep 9 00:31:13.962654 kubelet[2739]: I0909 00:31:13.962483 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d76f35d-8c59-4a51-b351-26505f6c7e9c-xtables-lock\") pod \"kube-proxy-54764\" (UID: \"6d76f35d-8c59-4a51-b351-26505f6c7e9c\") " pod="kube-system/kube-proxy-54764" Sep 9 00:31:14.189683 kubelet[2739]: E0909 00:31:14.189641 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:14.190639 containerd[1582]: time="2025-09-09T00:31:14.190596446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-54764,Uid:6d76f35d-8c59-4a51-b351-26505f6c7e9c,Namespace:kube-system,Attempt:0,}" Sep 9 00:31:14.214541 containerd[1582]: time="2025-09-09T00:31:14.214487086Z" level=info msg="connecting to shim 4a9ff9059af0c97b01626bbe70d2f00d8a695b80636e543255c44cf2ac2bd532" address="unix:///run/containerd/s/018b4e66a2e2f9643d38058a40fdf3d346e1c3e9f5f39e511be902dc3bca9b8f" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:31:14.245505 systemd[1]: Started cri-containerd-4a9ff9059af0c97b01626bbe70d2f00d8a695b80636e543255c44cf2ac2bd532.scope - libcontainer container 4a9ff9059af0c97b01626bbe70d2f00d8a695b80636e543255c44cf2ac2bd532. 
Sep 9 00:31:14.278436 containerd[1582]: time="2025-09-09T00:31:14.278386449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-54764,Uid:6d76f35d-8c59-4a51-b351-26505f6c7e9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a9ff9059af0c97b01626bbe70d2f00d8a695b80636e543255c44cf2ac2bd532\"" Sep 9 00:31:14.279352 kubelet[2739]: E0909 00:31:14.279305 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:14.285692 containerd[1582]: time="2025-09-09T00:31:14.285631783Z" level=info msg="CreateContainer within sandbox \"4a9ff9059af0c97b01626bbe70d2f00d8a695b80636e543255c44cf2ac2bd532\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 00:31:14.296882 containerd[1582]: time="2025-09-09T00:31:14.296815765Z" level=info msg="Container 6147431ecbb6d2c00d877438856462fe2c21556bcdcdb23f0ea2218785690536: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:31:14.310073 containerd[1582]: time="2025-09-09T00:31:14.310009513Z" level=info msg="CreateContainer within sandbox \"4a9ff9059af0c97b01626bbe70d2f00d8a695b80636e543255c44cf2ac2bd532\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6147431ecbb6d2c00d877438856462fe2c21556bcdcdb23f0ea2218785690536\"" Sep 9 00:31:14.311387 containerd[1582]: time="2025-09-09T00:31:14.310678848Z" level=info msg="StartContainer for \"6147431ecbb6d2c00d877438856462fe2c21556bcdcdb23f0ea2218785690536\"" Sep 9 00:31:14.312348 containerd[1582]: time="2025-09-09T00:31:14.312313365Z" level=info msg="connecting to shim 6147431ecbb6d2c00d877438856462fe2c21556bcdcdb23f0ea2218785690536" address="unix:///run/containerd/s/018b4e66a2e2f9643d38058a40fdf3d346e1c3e9f5f39e511be902dc3bca9b8f" protocol=ttrpc version=3 Sep 9 00:31:14.338962 systemd[1]: Started cri-containerd-6147431ecbb6d2c00d877438856462fe2c21556bcdcdb23f0ea2218785690536.scope - libcontainer container 6147431ecbb6d2c00d877438856462fe2c21556bcdcdb23f0ea2218785690536. Sep 9 00:31:14.387846 systemd[1]: Created slice kubepods-besteffort-pod23cf6771_9542_4cfc_b325_4533c61c121a.slice - libcontainer container kubepods-besteffort-pod23cf6771_9542_4cfc_b325_4533c61c121a.slice. 
Sep 9 00:31:14.415130 containerd[1582]: time="2025-09-09T00:31:14.415015270Z" level=info msg="StartContainer for \"6147431ecbb6d2c00d877438856462fe2c21556bcdcdb23f0ea2218785690536\" returns successfully" Sep 9 00:31:14.429314 kubelet[2739]: E0909 00:31:14.429277 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:14.467828 kubelet[2739]: I0909 00:31:14.467134 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/23cf6771-9542-4cfc-b325-4533c61c121a-var-lib-calico\") pod \"tigera-operator-755d956888-wkxsq\" (UID: \"23cf6771-9542-4cfc-b325-4533c61c121a\") " pod="tigera-operator/tigera-operator-755d956888-wkxsq" Sep 9 00:31:14.467828 kubelet[2739]: I0909 00:31:14.467197 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqqvq\" (UniqueName: \"kubernetes.io/projected/23cf6771-9542-4cfc-b325-4533c61c121a-kube-api-access-xqqvq\") pod \"tigera-operator-755d956888-wkxsq\" (UID: \"23cf6771-9542-4cfc-b325-4533c61c121a\") " pod="tigera-operator/tigera-operator-755d956888-wkxsq" Sep 9 00:31:14.692616 containerd[1582]: time="2025-09-09T00:31:14.692565451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-wkxsq,Uid:23cf6771-9542-4cfc-b325-4533c61c121a,Namespace:tigera-operator,Attempt:0,}" Sep 9 00:31:14.720228 containerd[1582]: time="2025-09-09T00:31:14.720061561Z" level=info msg="connecting to shim bba9fe6f4937483c492bd67f54b4bbb319afdaa5b97bf43b654757a127400f3d" address="unix:///run/containerd/s/8c6eec9a192527dc35936f5adff74f1d2b93b4ad813fbfbfb8f22d568233a0db" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:31:14.751546 systemd[1]: Started cri-containerd-bba9fe6f4937483c492bd67f54b4bbb319afdaa5b97bf43b654757a127400f3d.scope - libcontainer container bba9fe6f4937483c492bd67f54b4bbb319afdaa5b97bf43b654757a127400f3d. Sep 9 00:31:14.798103 containerd[1582]: time="2025-09-09T00:31:14.798051054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-wkxsq,Uid:23cf6771-9542-4cfc-b325-4533c61c121a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bba9fe6f4937483c492bd67f54b4bbb319afdaa5b97bf43b654757a127400f3d\"" Sep 9 00:31:14.799713 containerd[1582]: time="2025-09-09T00:31:14.799684019Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 9 00:31:17.902800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4212067089.mount: Deactivated successfully. 
Sep 9 00:31:18.501123 kubelet[2739]: E0909 00:31:18.501048 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:18.845478 kubelet[2739]: I0909 00:31:18.843536 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-54764" podStartSLOduration=5.843512145 podStartE2EDuration="5.843512145s" podCreationTimestamp="2025-09-09 00:31:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:31:14.441725226 +0000 UTC m=+4.363913430" watchObservedRunningTime="2025-09-09 00:31:18.843512145 +0000 UTC m=+8.765700329" Sep 9 00:31:19.042026 containerd[1582]: time="2025-09-09T00:31:19.041976656Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:31:19.044575 containerd[1582]: time="2025-09-09T00:31:19.044525521Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 9 00:31:19.047323 containerd[1582]: time="2025-09-09T00:31:19.047277287Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:31:19.050869 containerd[1582]: time="2025-09-09T00:31:19.050821237Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:31:19.053824 containerd[1582]: time="2025-09-09T00:31:19.053762622Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 4.25404452s" Sep 9 00:31:19.053824 containerd[1582]: time="2025-09-09T00:31:19.053803429Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 9 00:31:19.059542 containerd[1582]: time="2025-09-09T00:31:19.059487564Z" level=info msg="CreateContainer within sandbox \"bba9fe6f4937483c492bd67f54b4bbb319afdaa5b97bf43b654757a127400f3d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 9 00:31:19.078621 containerd[1582]: time="2025-09-09T00:31:19.078566848Z" level=info msg="Container 8fc6ef229f39c47da529418ac264e160d923463774a990c2e66d5fe95894cd48: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:31:19.085161 containerd[1582]: time="2025-09-09T00:31:19.085121573Z" level=info msg="CreateContainer within sandbox \"bba9fe6f4937483c492bd67f54b4bbb319afdaa5b97bf43b654757a127400f3d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8fc6ef229f39c47da529418ac264e160d923463774a990c2e66d5fe95894cd48\"" Sep 9 00:31:19.086356 containerd[1582]: time="2025-09-09T00:31:19.085634049Z" level=info msg="StartContainer for \"8fc6ef229f39c47da529418ac264e160d923463774a990c2e66d5fe95894cd48\"" Sep 9 00:31:19.086692 containerd[1582]: time="2025-09-09T00:31:19.086654994Z" level=info msg="connecting to shim 
8fc6ef229f39c47da529418ac264e160d923463774a990c2e66d5fe95894cd48" address="unix:///run/containerd/s/8c6eec9a192527dc35936f5adff74f1d2b93b4ad813fbfbfb8f22d568233a0db" protocol=ttrpc version=3 Sep 9 00:31:19.143522 systemd[1]: Started cri-containerd-8fc6ef229f39c47da529418ac264e160d923463774a990c2e66d5fe95894cd48.scope - libcontainer container 8fc6ef229f39c47da529418ac264e160d923463774a990c2e66d5fe95894cd48. Sep 9 00:31:19.179184 containerd[1582]: time="2025-09-09T00:31:19.179121269Z" level=info msg="StartContainer for \"8fc6ef229f39c47da529418ac264e160d923463774a990c2e66d5fe95894cd48\" returns successfully" Sep 9 00:31:19.439545 kubelet[2739]: E0909 00:31:19.439484 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:19.476098 kubelet[2739]: I0909 00:31:19.475274 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-wkxsq" podStartSLOduration=1.220116159 podStartE2EDuration="5.475254061s" podCreationTimestamp="2025-09-09 00:31:14 +0000 UTC" firstStartedPulling="2025-09-09 00:31:14.799388911 +0000 UTC m=+4.721577095" lastFinishedPulling="2025-09-09 00:31:19.054526822 +0000 UTC m=+8.976714997" observedRunningTime="2025-09-09 00:31:19.475122844 +0000 UTC m=+9.397311028" watchObservedRunningTime="2025-09-09 00:31:19.475254061 +0000 UTC m=+9.397442245" Sep 9 00:31:20.717411 kubelet[2739]: E0909 00:31:20.717373 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:21.443452 kubelet[2739]: E0909 00:31:21.443359 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:26.372103 systemd[1]: cri-containerd-8fc6ef229f39c47da529418ac264e160d923463774a990c2e66d5fe95894cd48.scope: Deactivated successfully. Sep 9 00:31:26.375029 containerd[1582]: time="2025-09-09T00:31:26.374972154Z" level=info msg="received exit event container_id:\"8fc6ef229f39c47da529418ac264e160d923463774a990c2e66d5fe95894cd48\" id:\"8fc6ef229f39c47da529418ac264e160d923463774a990c2e66d5fe95894cd48\" pid:3074 exit_status:1 exited_at:{seconds:1757377886 nanos:374421087}" Sep 9 00:31:26.375821 containerd[1582]: time="2025-09-09T00:31:26.375780545Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8fc6ef229f39c47da529418ac264e160d923463774a990c2e66d5fe95894cd48\" id:\"8fc6ef229f39c47da529418ac264e160d923463774a990c2e66d5fe95894cd48\" pid:3074 exit_status:1 exited_at:{seconds:1757377886 nanos:374421087}" Sep 9 00:31:26.410640 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fc6ef229f39c47da529418ac264e160d923463774a990c2e66d5fe95894cd48-rootfs.mount: Deactivated successfully. 
Sep 9 00:31:27.459875 kubelet[2739]: I0909 00:31:27.459384 2739 scope.go:117] "RemoveContainer" containerID="8fc6ef229f39c47da529418ac264e160d923463774a990c2e66d5fe95894cd48" Sep 9 00:31:27.462310 containerd[1582]: time="2025-09-09T00:31:27.462225840Z" level=info msg="CreateContainer within sandbox \"bba9fe6f4937483c492bd67f54b4bbb319afdaa5b97bf43b654757a127400f3d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Sep 9 00:31:27.476069 containerd[1582]: time="2025-09-09T00:31:27.476000271Z" level=info msg="Container f807df5e0672e0f0eeefb1c98f544f2e91772e8e4cd073033bf26a802b5f71c3: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:31:27.488013 containerd[1582]: time="2025-09-09T00:31:27.487812931Z" level=info msg="CreateContainer within sandbox \"bba9fe6f4937483c492bd67f54b4bbb319afdaa5b97bf43b654757a127400f3d\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"f807df5e0672e0f0eeefb1c98f544f2e91772e8e4cd073033bf26a802b5f71c3\"" Sep 9 00:31:27.490364 containerd[1582]: time="2025-09-09T00:31:27.488676115Z" level=info msg="StartContainer for \"f807df5e0672e0f0eeefb1c98f544f2e91772e8e4cd073033bf26a802b5f71c3\"" Sep 9 00:31:27.491709 containerd[1582]: time="2025-09-09T00:31:27.491662513Z" level=info msg="connecting to shim f807df5e0672e0f0eeefb1c98f544f2e91772e8e4cd073033bf26a802b5f71c3" address="unix:///run/containerd/s/8c6eec9a192527dc35936f5adff74f1d2b93b4ad813fbfbfb8f22d568233a0db" protocol=ttrpc version=3 Sep 9 00:31:27.528662 systemd[1]: Started cri-containerd-f807df5e0672e0f0eeefb1c98f544f2e91772e8e4cd073033bf26a802b5f71c3.scope - libcontainer container f807df5e0672e0f0eeefb1c98f544f2e91772e8e4cd073033bf26a802b5f71c3. Sep 9 00:31:27.572813 containerd[1582]: time="2025-09-09T00:31:27.572734106Z" level=info msg="StartContainer for \"f807df5e0672e0f0eeefb1c98f544f2e91772e8e4cd073033bf26a802b5f71c3\" returns successfully" Sep 9 00:31:29.965157 sudo[1784]: pam_unix(sudo:session): session closed for user root Sep 9 00:31:29.967041 sshd[1783]: Connection closed by 10.0.0.1 port 40590 Sep 9 00:31:29.967817 sshd-session[1781]: pam_unix(sshd:session): session closed for user core Sep 9 00:31:29.973120 systemd[1]: sshd@6-10.0.0.142:22-10.0.0.1:40590.service: Deactivated successfully. Sep 9 00:31:29.975797 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 00:31:29.976079 systemd[1]: session-7.scope: Consumed 8.001s CPU time, 222.7M memory peak. Sep 9 00:31:29.977874 systemd-logind[1554]: Session 7 logged out. Waiting for processes to exit. Sep 9 00:31:29.979329 systemd-logind[1554]: Removed session 7. Sep 9 00:31:33.372846 systemd[1]: Created slice kubepods-besteffort-poda64285de_5e8b_4871_b7a8_c6f97eef1e9f.slice - libcontainer container kubepods-besteffort-poda64285de_5e8b_4871_b7a8_c6f97eef1e9f.slice. 
Sep 9 00:31:33.394204 kubelet[2739]: I0909 00:31:33.394133 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbrzr\" (UniqueName: \"kubernetes.io/projected/a64285de-5e8b-4871-b7a8-c6f97eef1e9f-kube-api-access-nbrzr\") pod \"calico-typha-6757d95b86-swgn6\" (UID: \"a64285de-5e8b-4871-b7a8-c6f97eef1e9f\") " pod="calico-system/calico-typha-6757d95b86-swgn6" Sep 9 00:31:33.394691 kubelet[2739]: I0909 00:31:33.394212 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a64285de-5e8b-4871-b7a8-c6f97eef1e9f-tigera-ca-bundle\") pod \"calico-typha-6757d95b86-swgn6\" (UID: \"a64285de-5e8b-4871-b7a8-c6f97eef1e9f\") " pod="calico-system/calico-typha-6757d95b86-swgn6" Sep 9 00:31:33.394691 kubelet[2739]: I0909 00:31:33.394243 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a64285de-5e8b-4871-b7a8-c6f97eef1e9f-typha-certs\") pod \"calico-typha-6757d95b86-swgn6\" (UID: \"a64285de-5e8b-4871-b7a8-c6f97eef1e9f\") " pod="calico-system/calico-typha-6757d95b86-swgn6" Sep 9 00:31:33.678877 kubelet[2739]: E0909 00:31:33.678822 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:33.679684 containerd[1582]: time="2025-09-09T00:31:33.679589354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6757d95b86-swgn6,Uid:a64285de-5e8b-4871-b7a8-c6f97eef1e9f,Namespace:calico-system,Attempt:0,}" Sep 9 00:31:34.601555 systemd[1]: Created slice kubepods-besteffort-poda2813d2d_9c78_4ef4_a707_da6f91655f4c.slice - libcontainer container kubepods-besteffort-poda2813d2d_9c78_4ef4_a707_da6f91655f4c.slice. Sep 9 00:31:34.613770 containerd[1582]: time="2025-09-09T00:31:34.613693482Z" level=info msg="connecting to shim f85feeeeeb7069fe1f35e4b471bedf715686d0be203c8df86d282e259024a949" address="unix:///run/containerd/s/7535ce9d433bb9e31abd97f33049604b77fdd7910c46347dae6e79c1a1332b2a" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:31:34.657938 systemd[1]: Started cri-containerd-f85feeeeeb7069fe1f35e4b471bedf715686d0be203c8df86d282e259024a949.scope - libcontainer container f85feeeeeb7069fe1f35e4b471bedf715686d0be203c8df86d282e259024a949. 
Sep 9 00:31:34.683369 kubelet[2739]: E0909 00:31:34.682878 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5ll8l" podUID="fd2d00d8-c926-49b1-9a33-424da0e8137a" Sep 9 00:31:34.703624 kubelet[2739]: I0909 00:31:34.703543 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a2813d2d-9c78-4ef4-a707-da6f91655f4c-var-lib-calico\") pod \"calico-node-z62td\" (UID: \"a2813d2d-9c78-4ef4-a707-da6f91655f4c\") " pod="calico-system/calico-node-z62td" Sep 9 00:31:34.704039 kubelet[2739]: I0909 00:31:34.703889 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a2813d2d-9c78-4ef4-a707-da6f91655f4c-var-run-calico\") pod \"calico-node-z62td\" (UID: \"a2813d2d-9c78-4ef4-a707-da6f91655f4c\") " pod="calico-system/calico-node-z62td" Sep 9 00:31:34.704039 kubelet[2739]: I0909 00:31:34.703944 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a2813d2d-9c78-4ef4-a707-da6f91655f4c-cni-bin-dir\") pod \"calico-node-z62td\" (UID: \"a2813d2d-9c78-4ef4-a707-da6f91655f4c\") " pod="calico-system/calico-node-z62td" Sep 9 00:31:34.704039 kubelet[2739]: I0909 00:31:34.703970 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a2813d2d-9c78-4ef4-a707-da6f91655f4c-cni-net-dir\") pod \"calico-node-z62td\" (UID: \"a2813d2d-9c78-4ef4-a707-da6f91655f4c\") " pod="calico-system/calico-node-z62td" Sep 9 00:31:34.704413 kubelet[2739]: I0909 00:31:34.704376 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2813d2d-9c78-4ef4-a707-da6f91655f4c-xtables-lock\") pod \"calico-node-z62td\" (UID: \"a2813d2d-9c78-4ef4-a707-da6f91655f4c\") " pod="calico-system/calico-node-z62td" Sep 9 00:31:34.704465 kubelet[2739]: I0909 00:31:34.704427 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsfjv\" (UniqueName: \"kubernetes.io/projected/a2813d2d-9c78-4ef4-a707-da6f91655f4c-kube-api-access-dsfjv\") pod \"calico-node-z62td\" (UID: \"a2813d2d-9c78-4ef4-a707-da6f91655f4c\") " pod="calico-system/calico-node-z62td" Sep 9 00:31:34.704512 kubelet[2739]: I0909 00:31:34.704478 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2813d2d-9c78-4ef4-a707-da6f91655f4c-tigera-ca-bundle\") pod \"calico-node-z62td\" (UID: \"a2813d2d-9c78-4ef4-a707-da6f91655f4c\") " pod="calico-system/calico-node-z62td" Sep 9 00:31:34.704512 kubelet[2739]: I0909 00:31:34.704501 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2813d2d-9c78-4ef4-a707-da6f91655f4c-lib-modules\") pod \"calico-node-z62td\" (UID: \"a2813d2d-9c78-4ef4-a707-da6f91655f4c\") " pod="calico-system/calico-node-z62td" Sep 9 00:31:34.704658 kubelet[2739]: I0909 00:31:34.704630 2739 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a2813d2d-9c78-4ef4-a707-da6f91655f4c-policysync\") pod \"calico-node-z62td\" (UID: \"a2813d2d-9c78-4ef4-a707-da6f91655f4c\") " pod="calico-system/calico-node-z62td" Sep 9 00:31:34.707027 kubelet[2739]: I0909 00:31:34.704808 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a2813d2d-9c78-4ef4-a707-da6f91655f4c-cni-log-dir\") pod \"calico-node-z62td\" (UID: \"a2813d2d-9c78-4ef4-a707-da6f91655f4c\") " pod="calico-system/calico-node-z62td" Sep 9 00:31:34.707027 kubelet[2739]: I0909 00:31:34.704853 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a2813d2d-9c78-4ef4-a707-da6f91655f4c-flexvol-driver-host\") pod \"calico-node-z62td\" (UID: \"a2813d2d-9c78-4ef4-a707-da6f91655f4c\") " pod="calico-system/calico-node-z62td" Sep 9 00:31:34.707027 kubelet[2739]: I0909 00:31:34.705813 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a2813d2d-9c78-4ef4-a707-da6f91655f4c-node-certs\") pod \"calico-node-z62td\" (UID: \"a2813d2d-9c78-4ef4-a707-da6f91655f4c\") " pod="calico-system/calico-node-z62td" Sep 9 00:31:34.763726 containerd[1582]: time="2025-09-09T00:31:34.763649450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6757d95b86-swgn6,Uid:a64285de-5e8b-4871-b7a8-c6f97eef1e9f,Namespace:calico-system,Attempt:0,} returns sandbox id \"f85feeeeeb7069fe1f35e4b471bedf715686d0be203c8df86d282e259024a949\"" Sep 9 00:31:34.768205 kubelet[2739]: E0909 00:31:34.768180 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:34.772467 containerd[1582]: time="2025-09-09T00:31:34.772435649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 9 00:31:34.806493 kubelet[2739]: I0909 00:31:34.806433 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd2d00d8-c926-49b1-9a33-424da0e8137a-kubelet-dir\") pod \"csi-node-driver-5ll8l\" (UID: \"fd2d00d8-c926-49b1-9a33-424da0e8137a\") " pod="calico-system/csi-node-driver-5ll8l" Sep 9 00:31:34.806706 kubelet[2739]: I0909 00:31:34.806546 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fd2d00d8-c926-49b1-9a33-424da0e8137a-socket-dir\") pod \"csi-node-driver-5ll8l\" (UID: \"fd2d00d8-c926-49b1-9a33-424da0e8137a\") " pod="calico-system/csi-node-driver-5ll8l" Sep 9 00:31:34.807002 kubelet[2739]: I0909 00:31:34.806649 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fd2d00d8-c926-49b1-9a33-424da0e8137a-registration-dir\") pod \"csi-node-driver-5ll8l\" (UID: \"fd2d00d8-c926-49b1-9a33-424da0e8137a\") " pod="calico-system/csi-node-driver-5ll8l" Sep 9 00:31:34.807044 kubelet[2739]: I0909 00:31:34.807020 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq4wp\" (UniqueName: 
\"kubernetes.io/projected/fd2d00d8-c926-49b1-9a33-424da0e8137a-kube-api-access-vq4wp\") pod \"csi-node-driver-5ll8l\" (UID: \"fd2d00d8-c926-49b1-9a33-424da0e8137a\") " pod="calico-system/csi-node-driver-5ll8l" Sep 9 00:31:34.807234 kubelet[2739]: I0909 00:31:34.807116 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fd2d00d8-c926-49b1-9a33-424da0e8137a-varrun\") pod \"csi-node-driver-5ll8l\" (UID: \"fd2d00d8-c926-49b1-9a33-424da0e8137a\") " pod="calico-system/csi-node-driver-5ll8l" Sep 9 00:31:34.819721 kubelet[2739]: E0909 00:31:34.819643 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:34.819973 kubelet[2739]: W0909 00:31:34.819683 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:34.820125 kubelet[2739]: E0909 00:31:34.820035 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:34.822863 kubelet[2739]: E0909 00:31:34.822732 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:34.822863 kubelet[2739]: W0909 00:31:34.822759 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:34.822863 kubelet[2739]: E0909 00:31:34.822793 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:34.908563 kubelet[2739]: E0909 00:31:34.908418 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:34.908563 kubelet[2739]: W0909 00:31:34.908450 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:34.908563 kubelet[2739]: E0909 00:31:34.908480 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:34.909578 kubelet[2739]: E0909 00:31:34.909441 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:34.909636 kubelet[2739]: W0909 00:31:34.909587 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:34.909685 containerd[1582]: time="2025-09-09T00:31:34.909595354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z62td,Uid:a2813d2d-9c78-4ef4-a707-da6f91655f4c,Namespace:calico-system,Attempt:0,}" Sep 9 00:31:34.909725 kubelet[2739]: E0909 00:31:34.909651 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:31:34.910191 kubelet[2739]: E0909 00:31:34.910173 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:34.910230 kubelet[2739]: W0909 00:31:34.910190 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:34.910230 kubelet[2739]: E0909 00:31:34.910204 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:34.910666 kubelet[2739]: E0909 00:31:34.910651 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:34.910666 kubelet[2739]: W0909 00:31:34.910664 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:34.910739 kubelet[2739]: E0909 00:31:34.910675 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:34.910922 kubelet[2739]: E0909 00:31:34.910898 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:34.911001 kubelet[2739]: W0909 00:31:34.910945 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:34.911001 kubelet[2739]: E0909 00:31:34.910958 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:34.911238 kubelet[2739]: E0909 00:31:34.911220 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:34.911238 kubelet[2739]: W0909 00:31:34.911235 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:34.911354 kubelet[2739]: E0909 00:31:34.911248 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:34.911567 kubelet[2739]: E0909 00:31:34.911542 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:34.911638 kubelet[2739]: W0909 00:31:34.911589 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:34.911638 kubelet[2739]: E0909 00:31:34.911622 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:31:34.913308 kubelet[2739]: E0909 00:31:34.913277 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:34.913308 kubelet[2739]: W0909 00:31:34.913290 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:34.913308 kubelet[2739]: E0909 00:31:34.913304 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:34.913609 kubelet[2739]: E0909 00:31:34.913593 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:34.913609 kubelet[2739]: W0909 00:31:34.913605 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:34.913695 kubelet[2739]: E0909 00:31:34.913618 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:34.913892 kubelet[2739]: E0909 00:31:34.913877 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:34.913892 kubelet[2739]: W0909 00:31:34.913887 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:34.913995 kubelet[2739]: E0909 00:31:34.913896 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:34.914104 kubelet[2739]: E0909 00:31:34.914089 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:34.914104 kubelet[2739]: W0909 00:31:34.914099 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:34.914187 kubelet[2739]: E0909 00:31:34.914109 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:34.914297 kubelet[2739]: E0909 00:31:34.914284 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:34.914297 kubelet[2739]: W0909 00:31:34.914293 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:34.914435 kubelet[2739]: E0909 00:31:34.914301 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:31:34.914530 kubelet[2739]: E0909 00:31:34.914513 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:34.914530 kubelet[2739]: W0909 00:31:34.914527 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:34.914619 kubelet[2739]: E0909 00:31:34.914539 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:34.914741 kubelet[2739]: E0909 00:31:34.914724 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:34.914741 kubelet[2739]: W0909 00:31:34.914734 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:34.914825 kubelet[2739]: E0909 00:31:34.914745 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:34.914927 kubelet[2739]: E0909 00:31:34.914912 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:34.914927 kubelet[2739]: W0909 00:31:34.914923 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:34.915014 kubelet[2739]: E0909 00:31:34.914932 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:34.915178 kubelet[2739]: E0909 00:31:34.915161 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:34.915178 kubelet[2739]: W0909 00:31:34.915173 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:34.915280 kubelet[2739]: E0909 00:31:34.915183 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:34.915413 kubelet[2739]: E0909 00:31:34.915398 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:34.915413 kubelet[2739]: W0909 00:31:34.915409 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:34.915501 kubelet[2739]: E0909 00:31:34.915421 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:31:34.915612 kubelet[2739]: E0909 00:31:34.915595 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:34.915612 kubelet[2739]: W0909 00:31:34.915606 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:34.915697 kubelet[2739]: E0909 00:31:34.915616 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:34.915812 kubelet[2739]: E0909 00:31:34.915796 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:34.915812 kubelet[2739]: W0909 00:31:34.915807 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:34.915891 kubelet[2739]: E0909 00:31:34.915817 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:34.916058 kubelet[2739]: E0909 00:31:34.916033 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:34.916058 kubelet[2739]: W0909 00:31:34.916044 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:34.916058 kubelet[2739]: E0909 00:31:34.916055 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:34.916249 kubelet[2739]: E0909 00:31:34.916232 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:34.916249 kubelet[2739]: W0909 00:31:34.916243 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:34.916362 kubelet[2739]: E0909 00:31:34.916252 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:34.916534 kubelet[2739]: E0909 00:31:34.916502 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:34.916534 kubelet[2739]: W0909 00:31:34.916515 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:34.916534 kubelet[2739]: E0909 00:31:34.916525 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:31:34.916807 kubelet[2739]: E0909 00:31:34.916772 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:34.916807 kubelet[2739]: W0909 00:31:34.916784 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:34.916807 kubelet[2739]: E0909 00:31:34.916794 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:34.917134 kubelet[2739]: E0909 00:31:34.917039 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:34.917134 kubelet[2739]: W0909 00:31:34.917101 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:34.917134 kubelet[2739]: E0909 00:31:34.917117 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:34.917616 kubelet[2739]: E0909 00:31:34.917595 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:34.917616 kubelet[2739]: W0909 00:31:34.917611 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:34.917731 kubelet[2739]: E0909 00:31:34.917624 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:34.928129 kubelet[2739]: E0909 00:31:34.928085 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:34.928129 kubelet[2739]: W0909 00:31:34.928111 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:34.928129 kubelet[2739]: E0909 00:31:34.928134 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:35.018921 containerd[1582]: time="2025-09-09T00:31:35.018872874Z" level=info msg="connecting to shim 840d07d972d5a43ab64d10e520882284ea970c7d8f9dcbdc3ca3299ea2cf696f" address="unix:///run/containerd/s/97634dbd63703fc548de1c18806adced51907f57e8532cfb9bdbdf2cd965d7f3" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:31:35.046590 systemd[1]: Started cri-containerd-840d07d972d5a43ab64d10e520882284ea970c7d8f9dcbdc3ca3299ea2cf696f.scope - libcontainer container 840d07d972d5a43ab64d10e520882284ea970c7d8f9dcbdc3ca3299ea2cf696f. 
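[Annotation] The repeated unmarshal failures above are kubelet's FlexVolume probe loop: on each scan of the plugin directory it execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and tries to decode stdout as a JSON driver-status object. The binary is absent, so stdout is empty and the JSON decode fails before the $PATH error is even surfaced; the probe retries continuously, hence the spam. A minimal sketch of a driver that would satisfy the init handshake, assuming the standard FlexVolume calling convention (illustrative only, not the actual nodeagent~uds driver):

package main

import (
	"encoding/json"
	"os"
)

// driverStatus mirrors the JSON object kubelet's FlexVolume shim expects on stdout.
type driverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Answer the probe: initialization succeeded, no attach/detach support needed.
		json.NewEncoder(os.Stdout).Encode(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		return
	}
	// Verbs this driver does not implement must still reply with JSON.
	json.NewEncoder(os.Stdout).Encode(driverStatus{Status: "Not supported"})
	os.Exit(1)
}

Any executable at that path answering init with well-formed JSON would silence this class of message; removing the stale plugin directory works just as well.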
Sep 9 00:31:35.082857 containerd[1582]: time="2025-09-09T00:31:35.082792675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z62td,Uid:a2813d2d-9c78-4ef4-a707-da6f91655f4c,Namespace:calico-system,Attempt:0,} returns sandbox id \"840d07d972d5a43ab64d10e520882284ea970c7d8f9dcbdc3ca3299ea2cf696f\"" Sep 9 00:31:36.290329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3755355800.mount: Deactivated successfully. Sep 9 00:31:36.377368 kubelet[2739]: E0909 00:31:36.377268 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5ll8l" podUID="fd2d00d8-c926-49b1-9a33-424da0e8137a" Sep 9 00:31:36.794172 containerd[1582]: time="2025-09-09T00:31:36.794104193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:31:36.794876 containerd[1582]: time="2025-09-09T00:31:36.794850295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 9 00:31:36.796019 containerd[1582]: time="2025-09-09T00:31:36.795980718Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:31:36.798139 containerd[1582]: time="2025-09-09T00:31:36.798101402Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:31:36.798851 containerd[1582]: time="2025-09-09T00:31:36.798798151Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.02615426s" Sep 9 00:31:36.798888 containerd[1582]: time="2025-09-09T00:31:36.798853055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 9 00:31:36.800163 containerd[1582]: time="2025-09-09T00:31:36.800121487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 9 00:31:36.815183 containerd[1582]: time="2025-09-09T00:31:36.815137987Z" level=info msg="CreateContainer within sandbox \"f85feeeeeb7069fe1f35e4b471bedf715686d0be203c8df86d282e259024a949\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 9 00:31:36.825203 containerd[1582]: time="2025-09-09T00:31:36.825137702Z" level=info msg="Container dfb810e0a9dabd7dbcdbb230f86829fb6ed2a9b75b44e1b660b9d90433fe7a97: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:31:36.834994 containerd[1582]: time="2025-09-09T00:31:36.834928655Z" level=info msg="CreateContainer within sandbox \"f85feeeeeb7069fe1f35e4b471bedf715686d0be203c8df86d282e259024a949\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"dfb810e0a9dabd7dbcdbb230f86829fb6ed2a9b75b44e1b660b9d90433fe7a97\"" Sep 9 00:31:36.835675 containerd[1582]: time="2025-09-09T00:31:36.835624262Z" level=info msg="StartContainer for 
\"dfb810e0a9dabd7dbcdbb230f86829fb6ed2a9b75b44e1b660b9d90433fe7a97\"" Sep 9 00:31:36.837141 containerd[1582]: time="2025-09-09T00:31:36.837107809Z" level=info msg="connecting to shim dfb810e0a9dabd7dbcdbb230f86829fb6ed2a9b75b44e1b660b9d90433fe7a97" address="unix:///run/containerd/s/7535ce9d433bb9e31abd97f33049604b77fdd7910c46347dae6e79c1a1332b2a" protocol=ttrpc version=3 Sep 9 00:31:36.861483 systemd[1]: Started cri-containerd-dfb810e0a9dabd7dbcdbb230f86829fb6ed2a9b75b44e1b660b9d90433fe7a97.scope - libcontainer container dfb810e0a9dabd7dbcdbb230f86829fb6ed2a9b75b44e1b660b9d90433fe7a97. Sep 9 00:31:36.917164 containerd[1582]: time="2025-09-09T00:31:36.917111150Z" level=info msg="StartContainer for \"dfb810e0a9dabd7dbcdbb230f86829fb6ed2a9b75b44e1b660b9d90433fe7a97\" returns successfully" Sep 9 00:31:37.487988 kubelet[2739]: E0909 00:31:37.487949 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:37.491535 kubelet[2739]: E0909 00:31:37.491494 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.491535 kubelet[2739]: W0909 00:31:37.491520 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.491535 kubelet[2739]: E0909 00:31:37.491546 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:37.491977 kubelet[2739]: E0909 00:31:37.491909 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.491977 kubelet[2739]: W0909 00:31:37.491927 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.491977 kubelet[2739]: E0909 00:31:37.491939 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:37.492500 kubelet[2739]: E0909 00:31:37.492325 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.492500 kubelet[2739]: W0909 00:31:37.492366 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.492500 kubelet[2739]: E0909 00:31:37.492395 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:31:37.492742 kubelet[2739]: E0909 00:31:37.492725 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.492742 kubelet[2739]: W0909 00:31:37.492736 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.492816 kubelet[2739]: E0909 00:31:37.492746 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:37.493051 kubelet[2739]: E0909 00:31:37.493026 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.493051 kubelet[2739]: W0909 00:31:37.493043 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.493051 kubelet[2739]: E0909 00:31:37.493053 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:37.493363 kubelet[2739]: E0909 00:31:37.493319 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.493363 kubelet[2739]: W0909 00:31:37.493357 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.493448 kubelet[2739]: E0909 00:31:37.493374 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:37.493684 kubelet[2739]: E0909 00:31:37.493638 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.493684 kubelet[2739]: W0909 00:31:37.493664 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.493846 kubelet[2739]: E0909 00:31:37.493695 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:37.493975 kubelet[2739]: E0909 00:31:37.493942 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.493975 kubelet[2739]: W0909 00:31:37.493955 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.493975 kubelet[2739]: E0909 00:31:37.493964 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:31:37.494281 kubelet[2739]: E0909 00:31:37.494257 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.494281 kubelet[2739]: W0909 00:31:37.494273 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.494400 kubelet[2739]: E0909 00:31:37.494288 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:37.494573 kubelet[2739]: E0909 00:31:37.494556 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.494573 kubelet[2739]: W0909 00:31:37.494569 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.494640 kubelet[2739]: E0909 00:31:37.494580 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:37.494822 kubelet[2739]: E0909 00:31:37.494806 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.494822 kubelet[2739]: W0909 00:31:37.494818 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.494890 kubelet[2739]: E0909 00:31:37.494830 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:37.495110 kubelet[2739]: E0909 00:31:37.495069 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.495110 kubelet[2739]: W0909 00:31:37.495084 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.495110 kubelet[2739]: E0909 00:31:37.495094 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:37.495424 kubelet[2739]: E0909 00:31:37.495400 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.495424 kubelet[2739]: W0909 00:31:37.495415 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.495562 kubelet[2739]: E0909 00:31:37.495452 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:31:37.495834 kubelet[2739]: E0909 00:31:37.495761 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.495834 kubelet[2739]: W0909 00:31:37.495779 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.495834 kubelet[2739]: E0909 00:31:37.495793 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:37.496163 kubelet[2739]: E0909 00:31:37.496137 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.496163 kubelet[2739]: W0909 00:31:37.496156 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.496641 kubelet[2739]: E0909 00:31:37.496169 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:37.533162 kubelet[2739]: E0909 00:31:37.533118 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.533162 kubelet[2739]: W0909 00:31:37.533148 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.533162 kubelet[2739]: E0909 00:31:37.533174 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:37.533634 kubelet[2739]: E0909 00:31:37.533604 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.533634 kubelet[2739]: W0909 00:31:37.533617 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.533634 kubelet[2739]: E0909 00:31:37.533628 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:37.533923 kubelet[2739]: E0909 00:31:37.533902 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.533923 kubelet[2739]: W0909 00:31:37.533921 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.534002 kubelet[2739]: E0909 00:31:37.533933 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:31:37.534305 kubelet[2739]: E0909 00:31:37.534256 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.534305 kubelet[2739]: W0909 00:31:37.534296 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.534407 kubelet[2739]: E0909 00:31:37.534358 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:37.534683 kubelet[2739]: E0909 00:31:37.534649 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.534683 kubelet[2739]: W0909 00:31:37.534663 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.534683 kubelet[2739]: E0909 00:31:37.534674 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:37.534964 kubelet[2739]: E0909 00:31:37.534946 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.534964 kubelet[2739]: W0909 00:31:37.534958 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.535040 kubelet[2739]: E0909 00:31:37.534969 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:37.535240 kubelet[2739]: E0909 00:31:37.535221 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.535240 kubelet[2739]: W0909 00:31:37.535234 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.535393 kubelet[2739]: E0909 00:31:37.535245 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:37.535553 kubelet[2739]: E0909 00:31:37.535530 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.535553 kubelet[2739]: W0909 00:31:37.535545 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.535553 kubelet[2739]: E0909 00:31:37.535556 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:31:37.535817 kubelet[2739]: E0909 00:31:37.535799 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.535817 kubelet[2739]: W0909 00:31:37.535812 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.535888 kubelet[2739]: E0909 00:31:37.535822 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:37.536060 kubelet[2739]: E0909 00:31:37.536045 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.536060 kubelet[2739]: W0909 00:31:37.536056 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.536122 kubelet[2739]: E0909 00:31:37.536066 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:37.536327 kubelet[2739]: E0909 00:31:37.536308 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.536327 kubelet[2739]: W0909 00:31:37.536321 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.536419 kubelet[2739]: E0909 00:31:37.536331 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:37.536676 kubelet[2739]: E0909 00:31:37.536639 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.536676 kubelet[2739]: W0909 00:31:37.536655 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.536676 kubelet[2739]: E0909 00:31:37.536666 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:37.536906 kubelet[2739]: E0909 00:31:37.536828 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.536906 kubelet[2739]: W0909 00:31:37.536835 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.536906 kubelet[2739]: E0909 00:31:37.536843 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:31:37.537011 kubelet[2739]: E0909 00:31:37.536986 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.537011 kubelet[2739]: W0909 00:31:37.536993 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.537011 kubelet[2739]: E0909 00:31:37.537000 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:37.537156 kubelet[2739]: E0909 00:31:37.537140 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.537156 kubelet[2739]: W0909 00:31:37.537149 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.537156 kubelet[2739]: E0909 00:31:37.537159 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:37.537406 kubelet[2739]: E0909 00:31:37.537386 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.537406 kubelet[2739]: W0909 00:31:37.537400 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.537495 kubelet[2739]: E0909 00:31:37.537411 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:37.537624 kubelet[2739]: E0909 00:31:37.537607 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.537624 kubelet[2739]: W0909 00:31:37.537619 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.537703 kubelet[2739]: E0909 00:31:37.537629 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:31:37.538040 kubelet[2739]: E0909 00:31:37.538009 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:31:37.538040 kubelet[2739]: W0909 00:31:37.538024 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:31:37.538040 kubelet[2739]: E0909 00:31:37.538035 2739 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:31:38.265750 containerd[1582]: time="2025-09-09T00:31:38.265706091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:31:38.266488 containerd[1582]: time="2025-09-09T00:31:38.266450230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 9 00:31:38.267656 containerd[1582]: time="2025-09-09T00:31:38.267631258Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:31:38.269483 containerd[1582]: time="2025-09-09T00:31:38.269437240Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:31:38.269981 containerd[1582]: time="2025-09-09T00:31:38.269948369Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.469795713s" Sep 9 00:31:38.270023 containerd[1582]: time="2025-09-09T00:31:38.269976533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 9 00:31:38.274295 containerd[1582]: time="2025-09-09T00:31:38.274262212Z" level=info msg="CreateContainer within sandbox \"840d07d972d5a43ab64d10e520882284ea970c7d8f9dcbdc3ca3299ea2cf696f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 9 00:31:38.281537 containerd[1582]: time="2025-09-09T00:31:38.281502490Z" level=info msg="Container 6498b816f6765bba44f1c7547112aa5ecce5cc3aa7dd4a515fda0916cd3461b2: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:31:38.291488 containerd[1582]: time="2025-09-09T00:31:38.291447499Z" level=info msg="CreateContainer within sandbox \"840d07d972d5a43ab64d10e520882284ea970c7d8f9dcbdc3ca3299ea2cf696f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6498b816f6765bba44f1c7547112aa5ecce5cc3aa7dd4a515fda0916cd3461b2\"" Sep 9 00:31:38.291929 containerd[1582]: time="2025-09-09T00:31:38.291902062Z" level=info msg="StartContainer for \"6498b816f6765bba44f1c7547112aa5ecce5cc3aa7dd4a515fda0916cd3461b2\"" Sep 9 00:31:38.296255 containerd[1582]: time="2025-09-09T00:31:38.295894501Z" level=info msg="connecting to shim 6498b816f6765bba44f1c7547112aa5ecce5cc3aa7dd4a515fda0916cd3461b2" address="unix:///run/containerd/s/97634dbd63703fc548de1c18806adced51907f57e8532cfb9bdbdf2cd965d7f3" protocol=ttrpc version=3 Sep 9 00:31:38.324560 systemd[1]: Started cri-containerd-6498b816f6765bba44f1c7547112aa5ecce5cc3aa7dd4a515fda0916cd3461b2.scope - libcontainer container 6498b816f6765bba44f1c7547112aa5ecce5cc3aa7dd4a515fda0916cd3461b2. 
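[Annotation] Each successful pull above ends with a containerd record of the form Pulled image \"<ref>\" ... in <duration> (2.02615426s for typha, 1.469795713s for pod2daemon-flexvol). When auditing a slow boot, those durations can be scraped straight out of the journal; a rough stdlib-only sketch, where the regex is my own approximation of the message shape, not a containerd API:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Rough match for containerd's `Pulled image \"<ref>\" ... in <duration>` records.
var pulled = regexp.MustCompile(`Pulled image \\"([^"\\]+)\\".* in ([0-9.]+m?s)`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines here are very long
	var total time.Duration
	for sc.Scan() {
		m := pulled.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		d, err := time.ParseDuration(m[2])
		if err != nil {
			continue
		}
		total += d
		fmt.Printf("%-55s %v\n", m[1], d)
	}
	fmt.Printf("total pull time: %v\n", total)
}

Piping this boot's journal through it would list the two Calico pulls above plus the cni pull that follows.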
Sep 9 00:31:38.369501 containerd[1582]: time="2025-09-09T00:31:38.369413493Z" level=info msg="StartContainer for \"6498b816f6765bba44f1c7547112aa5ecce5cc3aa7dd4a515fda0916cd3461b2\" returns successfully" Sep 9 00:31:38.373790 kubelet[2739]: E0909 00:31:38.373736 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5ll8l" podUID="fd2d00d8-c926-49b1-9a33-424da0e8137a" Sep 9 00:31:38.379324 systemd[1]: cri-containerd-6498b816f6765bba44f1c7547112aa5ecce5cc3aa7dd4a515fda0916cd3461b2.scope: Deactivated successfully. Sep 9 00:31:38.380921 containerd[1582]: time="2025-09-09T00:31:38.380851526Z" level=info msg="received exit event container_id:\"6498b816f6765bba44f1c7547112aa5ecce5cc3aa7dd4a515fda0916cd3461b2\" id:\"6498b816f6765bba44f1c7547112aa5ecce5cc3aa7dd4a515fda0916cd3461b2\" pid:3432 exited_at:{seconds:1757377898 nanos:380556151}" Sep 9 00:31:38.381006 containerd[1582]: time="2025-09-09T00:31:38.380853159Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6498b816f6765bba44f1c7547112aa5ecce5cc3aa7dd4a515fda0916cd3461b2\" id:\"6498b816f6765bba44f1c7547112aa5ecce5cc3aa7dd4a515fda0916cd3461b2\" pid:3432 exited_at:{seconds:1757377898 nanos:380556151}" Sep 9 00:31:38.402492 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6498b816f6765bba44f1c7547112aa5ecce5cc3aa7dd4a515fda0916cd3461b2-rootfs.mount: Deactivated successfully. Sep 9 00:31:38.492252 kubelet[2739]: I0909 00:31:38.492182 2739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:31:38.492836 kubelet[2739]: E0909 00:31:38.492669 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:38.728382 kubelet[2739]: I0909 00:31:38.727875 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6757d95b86-swgn6" podStartSLOduration=3.697193841 podStartE2EDuration="5.727852124s" podCreationTimestamp="2025-09-09 00:31:33 +0000 UTC" firstStartedPulling="2025-09-09 00:31:34.769196964 +0000 UTC m=+24.691385148" lastFinishedPulling="2025-09-09 00:31:36.799855237 +0000 UTC m=+26.722043431" observedRunningTime="2025-09-09 00:31:37.500897752 +0000 UTC m=+27.423085936" watchObservedRunningTime="2025-09-09 00:31:38.727852124 +0000 UTC m=+28.650040308" Sep 9 00:31:39.497632 containerd[1582]: time="2025-09-09T00:31:39.497578279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 9 00:31:40.374268 kubelet[2739]: E0909 00:31:40.373875 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5ll8l" podUID="fd2d00d8-c926-49b1-9a33-424da0e8137a" Sep 9 00:31:42.101656 containerd[1582]: time="2025-09-09T00:31:42.101574956Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:31:42.102410 containerd[1582]: time="2025-09-09T00:31:42.102377913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 9 00:31:42.103841 containerd[1582]: 
time="2025-09-09T00:31:42.103806966Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:31:42.106356 containerd[1582]: time="2025-09-09T00:31:42.106315006Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:31:42.106963 containerd[1582]: time="2025-09-09T00:31:42.106928798Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 2.609306826s" Sep 9 00:31:42.106963 containerd[1582]: time="2025-09-09T00:31:42.106954867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 9 00:31:42.112006 containerd[1582]: time="2025-09-09T00:31:42.111958141Z" level=info msg="CreateContainer within sandbox \"840d07d972d5a43ab64d10e520882284ea970c7d8f9dcbdc3ca3299ea2cf696f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 9 00:31:42.121802 containerd[1582]: time="2025-09-09T00:31:42.121744646Z" level=info msg="Container 99bcc4ba53751d17238a7da792d9f8daade6ad1ca2ede87313e45d19f0c55c31: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:31:42.136890 containerd[1582]: time="2025-09-09T00:31:42.136836521Z" level=info msg="CreateContainer within sandbox \"840d07d972d5a43ab64d10e520882284ea970c7d8f9dcbdc3ca3299ea2cf696f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"99bcc4ba53751d17238a7da792d9f8daade6ad1ca2ede87313e45d19f0c55c31\"" Sep 9 00:31:42.137458 containerd[1582]: time="2025-09-09T00:31:42.137429835Z" level=info msg="StartContainer for \"99bcc4ba53751d17238a7da792d9f8daade6ad1ca2ede87313e45d19f0c55c31\"" Sep 9 00:31:42.138806 containerd[1582]: time="2025-09-09T00:31:42.138785009Z" level=info msg="connecting to shim 99bcc4ba53751d17238a7da792d9f8daade6ad1ca2ede87313e45d19f0c55c31" address="unix:///run/containerd/s/97634dbd63703fc548de1c18806adced51907f57e8532cfb9bdbdf2cd965d7f3" protocol=ttrpc version=3 Sep 9 00:31:42.166673 systemd[1]: Started cri-containerd-99bcc4ba53751d17238a7da792d9f8daade6ad1ca2ede87313e45d19f0c55c31.scope - libcontainer container 99bcc4ba53751d17238a7da792d9f8daade6ad1ca2ede87313e45d19f0c55c31. Sep 9 00:31:42.212568 containerd[1582]: time="2025-09-09T00:31:42.212532583Z" level=info msg="StartContainer for \"99bcc4ba53751d17238a7da792d9f8daade6ad1ca2ede87313e45d19f0c55c31\" returns successfully" Sep 9 00:31:42.381142 kubelet[2739]: E0909 00:31:42.374604 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5ll8l" podUID="fd2d00d8-c926-49b1-9a33-424da0e8137a" Sep 9 00:31:43.269653 systemd[1]: cri-containerd-99bcc4ba53751d17238a7da792d9f8daade6ad1ca2ede87313e45d19f0c55c31.scope: Deactivated successfully. 
Sep 9 00:31:43.270006 systemd[1]: cri-containerd-99bcc4ba53751d17238a7da792d9f8daade6ad1ca2ede87313e45d19f0c55c31.scope: Consumed 563ms CPU time, 177.5M memory peak, 2.2M read from disk, 171.3M written to disk. Sep 9 00:31:43.272273 containerd[1582]: time="2025-09-09T00:31:43.271985055Z" level=info msg="received exit event container_id:\"99bcc4ba53751d17238a7da792d9f8daade6ad1ca2ede87313e45d19f0c55c31\" id:\"99bcc4ba53751d17238a7da792d9f8daade6ad1ca2ede87313e45d19f0c55c31\" pid:3491 exited_at:{seconds:1757377903 nanos:271781955}" Sep 9 00:31:43.272273 containerd[1582]: time="2025-09-09T00:31:43.272069574Z" level=info msg="TaskExit event in podsandbox handler container_id:\"99bcc4ba53751d17238a7da792d9f8daade6ad1ca2ede87313e45d19f0c55c31\" id:\"99bcc4ba53751d17238a7da792d9f8daade6ad1ca2ede87313e45d19f0c55c31\" pid:3491 exited_at:{seconds:1757377903 nanos:271781955}" Sep 9 00:31:43.300020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99bcc4ba53751d17238a7da792d9f8daade6ad1ca2ede87313e45d19f0c55c31-rootfs.mount: Deactivated successfully. Sep 9 00:31:43.367979 kubelet[2739]: I0909 00:31:43.367379 2739 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 00:31:43.714105 systemd[1]: Created slice kubepods-burstable-pod10bfdd1d_9f60_4b7d_87e4_fd70f1d75d76.slice - libcontainer container kubepods-burstable-pod10bfdd1d_9f60_4b7d_87e4_fd70f1d75d76.slice. Sep 9 00:31:43.723880 systemd[1]: Created slice kubepods-besteffort-pod3d599c83_8c9e_4335_b7dd_600759b6c019.slice - libcontainer container kubepods-besteffort-pod3d599c83_8c9e_4335_b7dd_600759b6c019.slice. Sep 9 00:31:43.732225 systemd[1]: Created slice kubepods-burstable-pod2dffbe49_c6eb_4f7e_b045_f374fec43167.slice - libcontainer container kubepods-burstable-pod2dffbe49_c6eb_4f7e_b045_f374fec43167.slice. Sep 9 00:31:43.740675 systemd[1]: Created slice kubepods-besteffort-pod28ab3406_05ce_48cf_8ad2_a98587542055.slice - libcontainer container kubepods-besteffort-pod28ab3406_05ce_48cf_8ad2_a98587542055.slice. Sep 9 00:31:43.749574 systemd[1]: Created slice kubepods-besteffort-pod2ea61d08_0343_4cf1_a232_a76b31169db1.slice - libcontainer container kubepods-besteffort-pod2ea61d08_0343_4cf1_a232_a76b31169db1.slice. Sep 9 00:31:43.754977 systemd[1]: Created slice kubepods-besteffort-pod0ff81a40_8102_48fe_98ba_e9315dead66d.slice - libcontainer container kubepods-besteffort-pod0ff81a40_8102_48fe_98ba_e9315dead66d.slice. Sep 9 00:31:43.761925 systemd[1]: Created slice kubepods-besteffort-pod5a95b016_00c9_4c01_bbca_50da884e7b1c.slice - libcontainer container kubepods-besteffort-pod5a95b016_00c9_4c01_bbca_50da884e7b1c.slice. 
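[Annotation] Both the "Consumed 563ms CPU time, 177.5M memory peak" accounting above and the kubepods-*.slice units created here are cgroup v2 artifacts: systemd reads the counters out of the unit's cgroup before tearing it down, and kubelet's systemd cgroup driver names pod slices kubepods-<qos>-pod<uid>.slice with the dashes of the pod UID turned into underscores (compare the UIDs in the volume records below). The same counters can be read directly under /sys/fs/cgroup while a unit exists; a small sketch, where the path is a placeholder to point at a live scope or slice:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// Placeholder: substitute a live unit's cgroup, e.g. a cri-containerd-<id>.scope
// under /sys/fs/cgroup/system.slice/ or a pod slice under kubepods.slice/.
var cgroup = "/sys/fs/cgroup/system.slice"

func read(name string) string {
	b, err := os.ReadFile(filepath.Join(cgroup, name))
	if err != nil {
		return "unavailable (" + err.Error() + ")"
	}
	return strings.TrimSpace(string(b))
}

func main() {
	// memory.peak (Linux 5.19+) is the high-water mark systemd reports as "memory peak".
	fmt.Println("memory.peak:", read("memory.peak"))
	// cpu.stat's usage_usec is the total CPU time behind "Consumed ... CPU time".
	for _, line := range strings.Split(read("cpu.stat"), "\n") {
		if strings.HasPrefix(line, "usage_usec") {
			fmt.Println("cpu.stat:", line)
		}
	}
	// io.stat carries the per-device rbytes=/wbytes= read and write totals.
	fmt.Println("io.stat:", read("io.stat"))
}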
Sep 9 00:31:43.772689 kubelet[2739]: I0909 00:31:43.772598 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7vfk\" (UniqueName: \"kubernetes.io/projected/3d599c83-8c9e-4335-b7dd-600759b6c019-kube-api-access-p7vfk\") pod \"whisker-8498bd8b7-x262s\" (UID: \"3d599c83-8c9e-4335-b7dd-600759b6c019\") " pod="calico-system/whisker-8498bd8b7-x262s" Sep 9 00:31:43.772689 kubelet[2739]: I0909 00:31:43.772633 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5a95b016-00c9-4c01-bbca-50da884e7b1c-goldmane-key-pair\") pod \"goldmane-54d579b49d-dndfz\" (UID: \"5a95b016-00c9-4c01-bbca-50da884e7b1c\") " pod="calico-system/goldmane-54d579b49d-dndfz" Sep 9 00:31:43.772689 kubelet[2739]: I0909 00:31:43.772653 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcggk\" (UniqueName: \"kubernetes.io/projected/2ea61d08-0343-4cf1-a232-a76b31169db1-kube-api-access-vcggk\") pod \"calico-kube-controllers-5985c77b98-2wnmf\" (UID: \"2ea61d08-0343-4cf1-a232-a76b31169db1\") " pod="calico-system/calico-kube-controllers-5985c77b98-2wnmf" Sep 9 00:31:43.772689 kubelet[2739]: I0909 00:31:43.772673 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10bfdd1d-9f60-4b7d-87e4-fd70f1d75d76-config-volume\") pod \"coredns-674b8bbfcf-lmmvz\" (UID: \"10bfdd1d-9f60-4b7d-87e4-fd70f1d75d76\") " pod="kube-system/coredns-674b8bbfcf-lmmvz" Sep 9 00:31:43.772689 kubelet[2739]: I0909 00:31:43.772692 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dffbe49-c6eb-4f7e-b045-f374fec43167-config-volume\") pod \"coredns-674b8bbfcf-rx7rx\" (UID: \"2dffbe49-c6eb-4f7e-b045-f374fec43167\") " pod="kube-system/coredns-674b8bbfcf-rx7rx" Sep 9 00:31:43.773480 kubelet[2739]: I0909 00:31:43.772711 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4k5w\" (UniqueName: \"kubernetes.io/projected/5a95b016-00c9-4c01-bbca-50da884e7b1c-kube-api-access-g4k5w\") pod \"goldmane-54d579b49d-dndfz\" (UID: \"5a95b016-00c9-4c01-bbca-50da884e7b1c\") " pod="calico-system/goldmane-54d579b49d-dndfz" Sep 9 00:31:43.773480 kubelet[2739]: I0909 00:31:43.772739 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ea61d08-0343-4cf1-a232-a76b31169db1-tigera-ca-bundle\") pod \"calico-kube-controllers-5985c77b98-2wnmf\" (UID: \"2ea61d08-0343-4cf1-a232-a76b31169db1\") " pod="calico-system/calico-kube-controllers-5985c77b98-2wnmf" Sep 9 00:31:43.773480 kubelet[2739]: I0909 00:31:43.772758 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0ff81a40-8102-48fe-98ba-e9315dead66d-calico-apiserver-certs\") pod \"calico-apiserver-6ccf697dfd-99755\" (UID: \"0ff81a40-8102-48fe-98ba-e9315dead66d\") " pod="calico-apiserver/calico-apiserver-6ccf697dfd-99755" Sep 9 00:31:43.773480 kubelet[2739]: I0909 00:31:43.772776 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/5a95b016-00c9-4c01-bbca-50da884e7b1c-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-dndfz\" (UID: \"5a95b016-00c9-4c01-bbca-50da884e7b1c\") " pod="calico-system/goldmane-54d579b49d-dndfz" Sep 9 00:31:43.773480 kubelet[2739]: I0909 00:31:43.772795 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88d9k\" (UniqueName: \"kubernetes.io/projected/10bfdd1d-9f60-4b7d-87e4-fd70f1d75d76-kube-api-access-88d9k\") pod \"coredns-674b8bbfcf-lmmvz\" (UID: \"10bfdd1d-9f60-4b7d-87e4-fd70f1d75d76\") " pod="kube-system/coredns-674b8bbfcf-lmmvz" Sep 9 00:31:43.773641 kubelet[2739]: I0909 00:31:43.772812 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/28ab3406-05ce-48cf-8ad2-a98587542055-calico-apiserver-certs\") pod \"calico-apiserver-6ccf697dfd-qb7sn\" (UID: \"28ab3406-05ce-48cf-8ad2-a98587542055\") " pod="calico-apiserver/calico-apiserver-6ccf697dfd-qb7sn" Sep 9 00:31:43.773641 kubelet[2739]: I0909 00:31:43.772831 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7zbd\" (UniqueName: \"kubernetes.io/projected/28ab3406-05ce-48cf-8ad2-a98587542055-kube-api-access-c7zbd\") pod \"calico-apiserver-6ccf697dfd-qb7sn\" (UID: \"28ab3406-05ce-48cf-8ad2-a98587542055\") " pod="calico-apiserver/calico-apiserver-6ccf697dfd-qb7sn" Sep 9 00:31:43.773641 kubelet[2739]: I0909 00:31:43.772876 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hkxq\" (UniqueName: \"kubernetes.io/projected/0ff81a40-8102-48fe-98ba-e9315dead66d-kube-api-access-6hkxq\") pod \"calico-apiserver-6ccf697dfd-99755\" (UID: \"0ff81a40-8102-48fe-98ba-e9315dead66d\") " pod="calico-apiserver/calico-apiserver-6ccf697dfd-99755" Sep 9 00:31:43.773641 kubelet[2739]: I0909 00:31:43.772899 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3d599c83-8c9e-4335-b7dd-600759b6c019-whisker-backend-key-pair\") pod \"whisker-8498bd8b7-x262s\" (UID: \"3d599c83-8c9e-4335-b7dd-600759b6c019\") " pod="calico-system/whisker-8498bd8b7-x262s" Sep 9 00:31:43.773641 kubelet[2739]: I0909 00:31:43.772916 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a95b016-00c9-4c01-bbca-50da884e7b1c-config\") pod \"goldmane-54d579b49d-dndfz\" (UID: \"5a95b016-00c9-4c01-bbca-50da884e7b1c\") " pod="calico-system/goldmane-54d579b49d-dndfz" Sep 9 00:31:43.773833 kubelet[2739]: I0909 00:31:43.773020 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d599c83-8c9e-4335-b7dd-600759b6c019-whisker-ca-bundle\") pod \"whisker-8498bd8b7-x262s\" (UID: \"3d599c83-8c9e-4335-b7dd-600759b6c019\") " pod="calico-system/whisker-8498bd8b7-x262s" Sep 9 00:31:43.773833 kubelet[2739]: I0909 00:31:43.773040 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh9gh\" (UniqueName: \"kubernetes.io/projected/2dffbe49-c6eb-4f7e-b045-f374fec43167-kube-api-access-xh9gh\") pod \"coredns-674b8bbfcf-rx7rx\" (UID: \"2dffbe49-c6eb-4f7e-b045-f374fec43167\") " pod="kube-system/coredns-674b8bbfcf-rx7rx" Sep 9 
00:31:44.020177 kubelet[2739]: E0909 00:31:44.020009 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:44.020980 containerd[1582]: time="2025-09-09T00:31:44.020912941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lmmvz,Uid:10bfdd1d-9f60-4b7d-87e4-fd70f1d75d76,Namespace:kube-system,Attempt:0,}" Sep 9 00:31:44.029206 containerd[1582]: time="2025-09-09T00:31:44.029156426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8498bd8b7-x262s,Uid:3d599c83-8c9e-4335-b7dd-600759b6c019,Namespace:calico-system,Attempt:0,}" Sep 9 00:31:44.035823 kubelet[2739]: E0909 00:31:44.035779 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:44.046549 containerd[1582]: time="2025-09-09T00:31:44.046482472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rx7rx,Uid:2dffbe49-c6eb-4f7e-b045-f374fec43167,Namespace:kube-system,Attempt:0,}" Sep 9 00:31:44.046848 containerd[1582]: time="2025-09-09T00:31:44.046483143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ccf697dfd-qb7sn,Uid:28ab3406-05ce-48cf-8ad2-a98587542055,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:31:44.054378 containerd[1582]: time="2025-09-09T00:31:44.054296320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5985c77b98-2wnmf,Uid:2ea61d08-0343-4cf1-a232-a76b31169db1,Namespace:calico-system,Attempt:0,}" Sep 9 00:31:44.059952 containerd[1582]: time="2025-09-09T00:31:44.059878981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ccf697dfd-99755,Uid:0ff81a40-8102-48fe-98ba-e9315dead66d,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:31:44.066902 containerd[1582]: time="2025-09-09T00:31:44.066831522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-dndfz,Uid:5a95b016-00c9-4c01-bbca-50da884e7b1c,Namespace:calico-system,Attempt:0,}" Sep 9 00:31:44.175311 containerd[1582]: time="2025-09-09T00:31:44.175253911Z" level=error msg="Failed to destroy network for sandbox \"469c5fb27eb1a51e8f4f0c3a04598308538f0b18a7e209f1d3fa25f17ee08aea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:31:44.177448 containerd[1582]: time="2025-09-09T00:31:44.177376986Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lmmvz,Uid:10bfdd1d-9f60-4b7d-87e4-fd70f1d75d76,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"469c5fb27eb1a51e8f4f0c3a04598308538f0b18a7e209f1d3fa25f17ee08aea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:31:44.184450 containerd[1582]: time="2025-09-09T00:31:44.184408737Z" level=error msg="Failed to destroy network for sandbox \"53bccfa10a0a755554b3774d4f4c7e7875f8cdc9371b7a44ab6918031336b746\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 
00:31:44.206308 containerd[1582]: time="2025-09-09T00:31:44.206131477Z" level=error msg="Failed to destroy network for sandbox \"86edf1cd97d201775d6077fdc9c7a624c93f32f3e45a3bdf0211383d57b6da2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:31:44.210658 containerd[1582]: time="2025-09-09T00:31:44.210611627Z" level=error msg="Failed to destroy network for sandbox \"602460fc3991db5eaefb5255f70b8075bca810c16ab2aa82c3eb4f2e7c97ef2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:31:44.213956 kubelet[2739]: E0909 00:31:44.213889 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"469c5fb27eb1a51e8f4f0c3a04598308538f0b18a7e209f1d3fa25f17ee08aea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:31:44.214083 kubelet[2739]: E0909 00:31:44.214002 2739 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"469c5fb27eb1a51e8f4f0c3a04598308538f0b18a7e209f1d3fa25f17ee08aea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lmmvz" Sep 9 00:31:44.214083 kubelet[2739]: E0909 00:31:44.214036 2739 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"469c5fb27eb1a51e8f4f0c3a04598308538f0b18a7e209f1d3fa25f17ee08aea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lmmvz" Sep 9 00:31:44.214173 kubelet[2739]: E0909 00:31:44.214119 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-lmmvz_kube-system(10bfdd1d-9f60-4b7d-87e4-fd70f1d75d76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-lmmvz_kube-system(10bfdd1d-9f60-4b7d-87e4-fd70f1d75d76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"469c5fb27eb1a51e8f4f0c3a04598308538f0b18a7e209f1d3fa25f17ee08aea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-lmmvz" podUID="10bfdd1d-9f60-4b7d-87e4-fd70f1d75d76" Sep 9 00:31:44.214326 containerd[1582]: time="2025-09-09T00:31:44.213906553Z" level=error msg="Failed to destroy network for sandbox \"2ac3b7f92df68772ea68d2e368b9e8e1fdc04049910b02bcc5117ece05f32b44\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:31:44.218193 containerd[1582]: time="2025-09-09T00:31:44.218165738Z" level=error msg="Failed to destroy network for sandbox 
\"a96ca61bf3986cef3168b2f6606ed02226c00ea7cfc19d46db5e2ed09f4e6d90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:31:44.226825 containerd[1582]: time="2025-09-09T00:31:44.226772014Z" level=error msg="Failed to destroy network for sandbox \"7a20027fd0361ea77c1f9f32c55a013baf682065119982a55e9d287123c4c6bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:31:44.227508 containerd[1582]: time="2025-09-09T00:31:44.227449216Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ccf697dfd-99755,Uid:0ff81a40-8102-48fe-98ba-e9315dead66d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"53bccfa10a0a755554b3774d4f4c7e7875f8cdc9371b7a44ab6918031336b746\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:31:44.227729 kubelet[2739]: E0909 00:31:44.227693 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53bccfa10a0a755554b3774d4f4c7e7875f8cdc9371b7a44ab6918031336b746\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:31:44.227807 kubelet[2739]: E0909 00:31:44.227762 2739 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53bccfa10a0a755554b3774d4f4c7e7875f8cdc9371b7a44ab6918031336b746\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ccf697dfd-99755" Sep 9 00:31:44.227807 kubelet[2739]: E0909 00:31:44.227794 2739 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53bccfa10a0a755554b3774d4f4c7e7875f8cdc9371b7a44ab6918031336b746\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ccf697dfd-99755" Sep 9 00:31:44.227892 kubelet[2739]: E0909 00:31:44.227864 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6ccf697dfd-99755_calico-apiserver(0ff81a40-8102-48fe-98ba-e9315dead66d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6ccf697dfd-99755_calico-apiserver(0ff81a40-8102-48fe-98ba-e9315dead66d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53bccfa10a0a755554b3774d4f4c7e7875f8cdc9371b7a44ab6918031336b746\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ccf697dfd-99755" podUID="0ff81a40-8102-48fe-98ba-e9315dead66d" Sep 9 00:31:44.228811 
containerd[1582]: time="2025-09-09T00:31:44.228736903Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ccf697dfd-qb7sn,Uid:28ab3406-05ce-48cf-8ad2-a98587542055,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"86edf1cd97d201775d6077fdc9c7a624c93f32f3e45a3bdf0211383d57b6da2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:31:44.229057 kubelet[2739]: E0909 00:31:44.229008 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86edf1cd97d201775d6077fdc9c7a624c93f32f3e45a3bdf0211383d57b6da2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:31:44.229139 kubelet[2739]: E0909 00:31:44.229107 2739 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86edf1cd97d201775d6077fdc9c7a624c93f32f3e45a3bdf0211383d57b6da2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ccf697dfd-qb7sn" Sep 9 00:31:44.229187 kubelet[2739]: E0909 00:31:44.229137 2739 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86edf1cd97d201775d6077fdc9c7a624c93f32f3e45a3bdf0211383d57b6da2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ccf697dfd-qb7sn" Sep 9 00:31:44.229224 kubelet[2739]: E0909 00:31:44.229190 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6ccf697dfd-qb7sn_calico-apiserver(28ab3406-05ce-48cf-8ad2-a98587542055)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6ccf697dfd-qb7sn_calico-apiserver(28ab3406-05ce-48cf-8ad2-a98587542055)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86edf1cd97d201775d6077fdc9c7a624c93f32f3e45a3bdf0211383d57b6da2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ccf697dfd-qb7sn" podUID="28ab3406-05ce-48cf-8ad2-a98587542055" Sep 9 00:31:44.230198 containerd[1582]: time="2025-09-09T00:31:44.230127744Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8498bd8b7-x262s,Uid:3d599c83-8c9e-4335-b7dd-600759b6c019,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"602460fc3991db5eaefb5255f70b8075bca810c16ab2aa82c3eb4f2e7c97ef2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:31:44.230514 kubelet[2739]: E0909 00:31:44.230447 2739 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"602460fc3991db5eaefb5255f70b8075bca810c16ab2aa82c3eb4f2e7c97ef2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:31:44.230576 kubelet[2739]: E0909 00:31:44.230535 2739 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"602460fc3991db5eaefb5255f70b8075bca810c16ab2aa82c3eb4f2e7c97ef2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8498bd8b7-x262s" Sep 9 00:31:44.230576 kubelet[2739]: E0909 00:31:44.230562 2739 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"602460fc3991db5eaefb5255f70b8075bca810c16ab2aa82c3eb4f2e7c97ef2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8498bd8b7-x262s" Sep 9 00:31:44.230651 kubelet[2739]: E0909 00:31:44.230624 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8498bd8b7-x262s_calico-system(3d599c83-8c9e-4335-b7dd-600759b6c019)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-8498bd8b7-x262s_calico-system(3d599c83-8c9e-4335-b7dd-600759b6c019)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"602460fc3991db5eaefb5255f70b8075bca810c16ab2aa82c3eb4f2e7c97ef2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8498bd8b7-x262s" podUID="3d599c83-8c9e-4335-b7dd-600759b6c019" Sep 9 00:31:44.231593 containerd[1582]: time="2025-09-09T00:31:44.231516701Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5985c77b98-2wnmf,Uid:2ea61d08-0343-4cf1-a232-a76b31169db1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ac3b7f92df68772ea68d2e368b9e8e1fdc04049910b02bcc5117ece05f32b44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:31:44.231929 kubelet[2739]: E0909 00:31:44.231879 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ac3b7f92df68772ea68d2e368b9e8e1fdc04049910b02bcc5117ece05f32b44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:31:44.231929 kubelet[2739]: E0909 00:31:44.231933 2739 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ac3b7f92df68772ea68d2e368b9e8e1fdc04049910b02bcc5117ece05f32b44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5985c77b98-2wnmf" Sep 9 00:31:44.232144 kubelet[2739]: E0909 00:31:44.231957 2739 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ac3b7f92df68772ea68d2e368b9e8e1fdc04049910b02bcc5117ece05f32b44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5985c77b98-2wnmf" Sep 9 00:31:44.232144 kubelet[2739]: E0909 00:31:44.232006 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5985c77b98-2wnmf_calico-system(2ea61d08-0343-4cf1-a232-a76b31169db1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5985c77b98-2wnmf_calico-system(2ea61d08-0343-4cf1-a232-a76b31169db1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ac3b7f92df68772ea68d2e368b9e8e1fdc04049910b02bcc5117ece05f32b44\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5985c77b98-2wnmf" podUID="2ea61d08-0343-4cf1-a232-a76b31169db1" Sep 9 00:31:44.233287 containerd[1582]: time="2025-09-09T00:31:44.233226341Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rx7rx,Uid:2dffbe49-c6eb-4f7e-b045-f374fec43167,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a96ca61bf3986cef3168b2f6606ed02226c00ea7cfc19d46db5e2ed09f4e6d90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:31:44.233448 kubelet[2739]: E0909 00:31:44.233398 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a96ca61bf3986cef3168b2f6606ed02226c00ea7cfc19d46db5e2ed09f4e6d90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:31:44.233448 kubelet[2739]: E0909 00:31:44.233437 2739 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a96ca61bf3986cef3168b2f6606ed02226c00ea7cfc19d46db5e2ed09f4e6d90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rx7rx" Sep 9 00:31:44.233547 kubelet[2739]: E0909 00:31:44.233459 2739 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a96ca61bf3986cef3168b2f6606ed02226c00ea7cfc19d46db5e2ed09f4e6d90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rx7rx" Sep 9 00:31:44.233547 kubelet[2739]: E0909 00:31:44.233506 2739 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-rx7rx_kube-system(2dffbe49-c6eb-4f7e-b045-f374fec43167)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-rx7rx_kube-system(2dffbe49-c6eb-4f7e-b045-f374fec43167)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a96ca61bf3986cef3168b2f6606ed02226c00ea7cfc19d46db5e2ed09f4e6d90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-rx7rx" podUID="2dffbe49-c6eb-4f7e-b045-f374fec43167" Sep 9 00:31:44.235844 containerd[1582]: time="2025-09-09T00:31:44.235769294Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-dndfz,Uid:5a95b016-00c9-4c01-bbca-50da884e7b1c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a20027fd0361ea77c1f9f32c55a013baf682065119982a55e9d287123c4c6bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:31:44.236031 kubelet[2739]: E0909 00:31:44.235990 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a20027fd0361ea77c1f9f32c55a013baf682065119982a55e9d287123c4c6bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:31:44.236092 kubelet[2739]: E0909 00:31:44.236030 2739 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a20027fd0361ea77c1f9f32c55a013baf682065119982a55e9d287123c4c6bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-dndfz" Sep 9 00:31:44.236092 kubelet[2739]: E0909 00:31:44.236051 2739 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a20027fd0361ea77c1f9f32c55a013baf682065119982a55e9d287123c4c6bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-dndfz" Sep 9 00:31:44.236156 kubelet[2739]: E0909 00:31:44.236104 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-dndfz_calico-system(5a95b016-00c9-4c01-bbca-50da884e7b1c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-dndfz_calico-system(5a95b016-00c9-4c01-bbca-50da884e7b1c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a20027fd0361ea77c1f9f32c55a013baf682065119982a55e9d287123c4c6bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-dndfz" podUID="5a95b016-00c9-4c01-bbca-50da884e7b1c" Sep 9 00:31:44.379998 systemd[1]: Created slice 
kubepods-besteffort-podfd2d00d8_c926_49b1_9a33_424da0e8137a.slice - libcontainer container kubepods-besteffort-podfd2d00d8_c926_49b1_9a33_424da0e8137a.slice. Sep 9 00:31:44.384048 containerd[1582]: time="2025-09-09T00:31:44.384002399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5ll8l,Uid:fd2d00d8-c926-49b1-9a33-424da0e8137a,Namespace:calico-system,Attempt:0,}" Sep 9 00:31:44.434134 containerd[1582]: time="2025-09-09T00:31:44.434041746Z" level=error msg="Failed to destroy network for sandbox \"fc8249953637920bea3a5356c4227a32f77b9ba49327bb4f41b7d376ac4637fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:31:44.436025 containerd[1582]: time="2025-09-09T00:31:44.435986097Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5ll8l,Uid:fd2d00d8-c926-49b1-9a33-424da0e8137a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc8249953637920bea3a5356c4227a32f77b9ba49327bb4f41b7d376ac4637fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:31:44.436386 kubelet[2739]: E0909 00:31:44.436316 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc8249953637920bea3a5356c4227a32f77b9ba49327bb4f41b7d376ac4637fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:31:44.436469 kubelet[2739]: E0909 00:31:44.436422 2739 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc8249953637920bea3a5356c4227a32f77b9ba49327bb4f41b7d376ac4637fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5ll8l" Sep 9 00:31:44.436469 kubelet[2739]: E0909 00:31:44.436450 2739 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc8249953637920bea3a5356c4227a32f77b9ba49327bb4f41b7d376ac4637fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5ll8l" Sep 9 00:31:44.436672 kubelet[2739]: E0909 00:31:44.436562 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5ll8l_calico-system(fd2d00d8-c926-49b1-9a33-424da0e8137a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5ll8l_calico-system(fd2d00d8-c926-49b1-9a33-424da0e8137a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc8249953637920bea3a5356c4227a32f77b9ba49327bb4f41b7d376ac4637fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5ll8l" 
podUID="fd2d00d8-c926-49b1-9a33-424da0e8137a" Sep 9 00:31:44.436764 systemd[1]: run-netns-cni\x2d77abf201\x2dd1ed\x2d1532\x2d0d43\x2d9f077ba111b1.mount: Deactivated successfully. Sep 9 00:31:44.513469 containerd[1582]: time="2025-09-09T00:31:44.513416875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 9 00:31:52.230860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1450793060.mount: Deactivated successfully. Sep 9 00:31:53.940684 kubelet[2739]: E0909 00:31:53.940638 2739 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.567s" Sep 9 00:31:53.943911 containerd[1582]: time="2025-09-09T00:31:53.943867417Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:31:53.945243 containerd[1582]: time="2025-09-09T00:31:53.945201341Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 9 00:31:53.946730 containerd[1582]: time="2025-09-09T00:31:53.946675757Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:31:53.949255 containerd[1582]: time="2025-09-09T00:31:53.949204983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:31:53.949991 containerd[1582]: time="2025-09-09T00:31:53.949921127Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 9.436452546s" Sep 9 00:31:53.949991 containerd[1582]: time="2025-09-09T00:31:53.949977312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 9 00:31:54.072449 containerd[1582]: time="2025-09-09T00:31:54.072398822Z" level=info msg="CreateContainer within sandbox \"840d07d972d5a43ab64d10e520882284ea970c7d8f9dcbdc3ca3299ea2cf696f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 00:31:54.085085 containerd[1582]: time="2025-09-09T00:31:54.084783069Z" level=info msg="Container f07414bf926a638b6e16ee10b93ab7fa162b9ff93933ce4294e4af4f3de0117f: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:31:54.098267 containerd[1582]: time="2025-09-09T00:31:54.098224460Z" level=info msg="CreateContainer within sandbox \"840d07d972d5a43ab64d10e520882284ea970c7d8f9dcbdc3ca3299ea2cf696f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f07414bf926a638b6e16ee10b93ab7fa162b9ff93933ce4294e4af4f3de0117f\"" Sep 9 00:31:54.100865 containerd[1582]: time="2025-09-09T00:31:54.098803827Z" level=info msg="StartContainer for \"f07414bf926a638b6e16ee10b93ab7fa162b9ff93933ce4294e4af4f3de0117f\"" Sep 9 00:31:54.100865 containerd[1582]: time="2025-09-09T00:31:54.100165422Z" level=info msg="connecting to shim f07414bf926a638b6e16ee10b93ab7fa162b9ff93933ce4294e4af4f3de0117f" address="unix:///run/containerd/s/97634dbd63703fc548de1c18806adced51907f57e8532cfb9bdbdf2cd965d7f3" protocol=ttrpc 
version=3 Sep 9 00:31:54.134536 systemd[1]: Started cri-containerd-f07414bf926a638b6e16ee10b93ab7fa162b9ff93933ce4294e4af4f3de0117f.scope - libcontainer container f07414bf926a638b6e16ee10b93ab7fa162b9ff93933ce4294e4af4f3de0117f. Sep 9 00:31:54.188664 containerd[1582]: time="2025-09-09T00:31:54.188497145Z" level=info msg="StartContainer for \"f07414bf926a638b6e16ee10b93ab7fa162b9ff93933ce4294e4af4f3de0117f\" returns successfully" Sep 9 00:31:54.265745 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 00:31:54.266911 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 9 00:31:54.434591 kubelet[2739]: I0909 00:31:54.433869 2739 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3d599c83-8c9e-4335-b7dd-600759b6c019-whisker-backend-key-pair\") pod \"3d599c83-8c9e-4335-b7dd-600759b6c019\" (UID: \"3d599c83-8c9e-4335-b7dd-600759b6c019\") " Sep 9 00:31:54.435068 kubelet[2739]: I0909 00:31:54.434803 2739 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7vfk\" (UniqueName: \"kubernetes.io/projected/3d599c83-8c9e-4335-b7dd-600759b6c019-kube-api-access-p7vfk\") pod \"3d599c83-8c9e-4335-b7dd-600759b6c019\" (UID: \"3d599c83-8c9e-4335-b7dd-600759b6c019\") " Sep 9 00:31:54.435721 kubelet[2739]: I0909 00:31:54.435694 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d599c83-8c9e-4335-b7dd-600759b6c019-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3d599c83-8c9e-4335-b7dd-600759b6c019" (UID: "3d599c83-8c9e-4335-b7dd-600759b6c019"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 00:31:54.435904 kubelet[2739]: I0909 00:31:54.435885 2739 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d599c83-8c9e-4335-b7dd-600759b6c019-whisker-ca-bundle\") pod \"3d599c83-8c9e-4335-b7dd-600759b6c019\" (UID: \"3d599c83-8c9e-4335-b7dd-600759b6c019\") " Sep 9 00:31:54.439615 kubelet[2739]: I0909 00:31:54.439558 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d599c83-8c9e-4335-b7dd-600759b6c019-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3d599c83-8c9e-4335-b7dd-600759b6c019" (UID: "3d599c83-8c9e-4335-b7dd-600759b6c019"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 00:31:54.439873 kubelet[2739]: I0909 00:31:54.439847 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d599c83-8c9e-4335-b7dd-600759b6c019-kube-api-access-p7vfk" (OuterVolumeSpecName: "kube-api-access-p7vfk") pod "3d599c83-8c9e-4335-b7dd-600759b6c019" (UID: "3d599c83-8c9e-4335-b7dd-600759b6c019"). InnerVolumeSpecName "kube-api-access-p7vfk".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:31:54.536822 kubelet[2739]: I0909 00:31:54.536762 2739 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d599c83-8c9e-4335-b7dd-600759b6c019-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 9 00:31:54.536822 kubelet[2739]: I0909 00:31:54.536811 2739 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3d599c83-8c9e-4335-b7dd-600759b6c019-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 9 00:31:54.536822 kubelet[2739]: I0909 00:31:54.536827 2739 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p7vfk\" (UniqueName: \"kubernetes.io/projected/3d599c83-8c9e-4335-b7dd-600759b6c019-kube-api-access-p7vfk\") on node \"localhost\" DevicePath \"\"" Sep 9 00:31:54.954291 systemd[1]: Removed slice kubepods-besteffort-pod3d599c83_8c9e_4335_b7dd_600759b6c019.slice - libcontainer container kubepods-besteffort-pod3d599c83_8c9e_4335_b7dd_600759b6c019.slice. Sep 9 00:31:54.969053 kubelet[2739]: I0909 00:31:54.968983 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-z62td" podStartSLOduration=2.098927317 podStartE2EDuration="20.968965405s" podCreationTimestamp="2025-09-09 00:31:34 +0000 UTC" firstStartedPulling="2025-09-09 00:31:35.084020041 +0000 UTC m=+25.006208225" lastFinishedPulling="2025-09-09 00:31:53.954058129 +0000 UTC m=+43.876246313" observedRunningTime="2025-09-09 00:31:54.968417496 +0000 UTC m=+44.890605680" watchObservedRunningTime="2025-09-09 00:31:54.968965405 +0000 UTC m=+44.891153599" Sep 9 00:31:54.975950 systemd[1]: var-lib-kubelet-pods-3d599c83\x2d8c9e\x2d4335\x2db7dd\x2d600759b6c019-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp7vfk.mount: Deactivated successfully. Sep 9 00:31:54.976115 systemd[1]: var-lib-kubelet-pods-3d599c83\x2d8c9e\x2d4335\x2db7dd\x2d600759b6c019-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 9 00:31:55.031735 systemd[1]: Created slice kubepods-besteffort-pod6c240194_abba_4587_aa33_c4b69c9430a2.slice - libcontainer container kubepods-besteffort-pod6c240194_abba_4587_aa33_c4b69c9430a2.slice. 
Sep 9 00:31:55.040776 kubelet[2739]: I0909 00:31:55.040695 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwrkr\" (UniqueName: \"kubernetes.io/projected/6c240194-abba-4587-aa33-c4b69c9430a2-kube-api-access-gwrkr\") pod \"whisker-6557f4c94b-2gvbb\" (UID: \"6c240194-abba-4587-aa33-c4b69c9430a2\") " pod="calico-system/whisker-6557f4c94b-2gvbb" Sep 9 00:31:55.040913 kubelet[2739]: I0909 00:31:55.040836 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c240194-abba-4587-aa33-c4b69c9430a2-whisker-ca-bundle\") pod \"whisker-6557f4c94b-2gvbb\" (UID: \"6c240194-abba-4587-aa33-c4b69c9430a2\") " pod="calico-system/whisker-6557f4c94b-2gvbb" Sep 9 00:31:55.041058 kubelet[2739]: I0909 00:31:55.040911 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6c240194-abba-4587-aa33-c4b69c9430a2-whisker-backend-key-pair\") pod \"whisker-6557f4c94b-2gvbb\" (UID: \"6c240194-abba-4587-aa33-c4b69c9430a2\") " pod="calico-system/whisker-6557f4c94b-2gvbb" Sep 9 00:31:55.337513 containerd[1582]: time="2025-09-09T00:31:55.337319959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6557f4c94b-2gvbb,Uid:6c240194-abba-4587-aa33-c4b69c9430a2,Namespace:calico-system,Attempt:0,}" Sep 9 00:31:55.374491 kubelet[2739]: E0909 00:31:55.374454 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:55.374943 containerd[1582]: time="2025-09-09T00:31:55.374796737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rx7rx,Uid:2dffbe49-c6eb-4f7e-b045-f374fec43167,Namespace:kube-system,Attempt:0,}" Sep 9 00:31:55.375101 containerd[1582]: time="2025-09-09T00:31:55.374968779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5ll8l,Uid:fd2d00d8-c926-49b1-9a33-424da0e8137a,Namespace:calico-system,Attempt:0,}" Sep 9 00:31:56.166774 systemd-networkd[1506]: cali06741bea4d3: Link UP Sep 9 00:31:56.167419 systemd-networkd[1506]: cali06741bea4d3: Gained carrier Sep 9 00:31:56.197521 containerd[1582]: 2025-09-09 00:31:55.629 [INFO][3953] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:31:56.197521 containerd[1582]: 2025-09-09 00:31:55.723 [INFO][3953] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--rx7rx-eth0 coredns-674b8bbfcf- kube-system 2dffbe49-c6eb-4f7e-b045-f374fec43167 846 0 2025-09-09 00:31:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-rx7rx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali06741bea4d3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e30bcd8cba10b1b1f211f52069c068024642604154a0b473f9678627b8d3b085" Namespace="kube-system" Pod="coredns-674b8bbfcf-rx7rx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rx7rx-" Sep 9 00:31:56.197521 containerd[1582]: 2025-09-09 00:31:55.727 [INFO][3953] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="e30bcd8cba10b1b1f211f52069c068024642604154a0b473f9678627b8d3b085" Namespace="kube-system" Pod="coredns-674b8bbfcf-rx7rx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rx7rx-eth0" Sep 9 00:31:56.197521 containerd[1582]: 2025-09-09 00:31:56.116 [INFO][3979] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e30bcd8cba10b1b1f211f52069c068024642604154a0b473f9678627b8d3b085" HandleID="k8s-pod-network.e30bcd8cba10b1b1f211f52069c068024642604154a0b473f9678627b8d3b085" Workload="localhost-k8s-coredns--674b8bbfcf--rx7rx-eth0" Sep 9 00:31:56.197840 containerd[1582]: 2025-09-09 00:31:56.117 [INFO][3979] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e30bcd8cba10b1b1f211f52069c068024642604154a0b473f9678627b8d3b085" HandleID="k8s-pod-network.e30bcd8cba10b1b1f211f52069c068024642604154a0b473f9678627b8d3b085" Workload="localhost-k8s-coredns--674b8bbfcf--rx7rx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f370), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-rx7rx", "timestamp":"2025-09-09 00:31:56.11630343 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:31:56.197840 containerd[1582]: 2025-09-09 00:31:56.117 [INFO][3979] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:31:56.197840 containerd[1582]: 2025-09-09 00:31:56.117 [INFO][3979] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:31:56.197840 containerd[1582]: 2025-09-09 00:31:56.117 [INFO][3979] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:31:56.197840 containerd[1582]: 2025-09-09 00:31:56.127 [INFO][3979] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e30bcd8cba10b1b1f211f52069c068024642604154a0b473f9678627b8d3b085" host="localhost" Sep 9 00:31:56.197840 containerd[1582]: 2025-09-09 00:31:56.136 [INFO][3979] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:31:56.197840 containerd[1582]: 2025-09-09 00:31:56.141 [INFO][3979] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:31:56.197840 containerd[1582]: 2025-09-09 00:31:56.143 [INFO][3979] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:31:56.197840 containerd[1582]: 2025-09-09 00:31:56.145 [INFO][3979] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:31:56.197840 containerd[1582]: 2025-09-09 00:31:56.145 [INFO][3979] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e30bcd8cba10b1b1f211f52069c068024642604154a0b473f9678627b8d3b085" host="localhost" Sep 9 00:31:56.198058 containerd[1582]: 2025-09-09 00:31:56.146 [INFO][3979] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e30bcd8cba10b1b1f211f52069c068024642604154a0b473f9678627b8d3b085 Sep 9 00:31:56.198058 containerd[1582]: 2025-09-09 00:31:56.150 [INFO][3979] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e30bcd8cba10b1b1f211f52069c068024642604154a0b473f9678627b8d3b085" host="localhost" Sep 9 00:31:56.198058 containerd[1582]: 2025-09-09 00:31:56.155 [INFO][3979] ipam/ipam.go 1256: Successfully claimed IPs: 
[192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.e30bcd8cba10b1b1f211f52069c068024642604154a0b473f9678627b8d3b085" host="localhost" Sep 9 00:31:56.198058 containerd[1582]: 2025-09-09 00:31:56.155 [INFO][3979] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e30bcd8cba10b1b1f211f52069c068024642604154a0b473f9678627b8d3b085" host="localhost" Sep 9 00:31:56.198058 containerd[1582]: 2025-09-09 00:31:56.155 [INFO][3979] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:31:56.198058 containerd[1582]: 2025-09-09 00:31:56.155 [INFO][3979] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e30bcd8cba10b1b1f211f52069c068024642604154a0b473f9678627b8d3b085" HandleID="k8s-pod-network.e30bcd8cba10b1b1f211f52069c068024642604154a0b473f9678627b8d3b085" Workload="localhost-k8s-coredns--674b8bbfcf--rx7rx-eth0" Sep 9 00:31:56.198177 containerd[1582]: 2025-09-09 00:31:56.159 [INFO][3953] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e30bcd8cba10b1b1f211f52069c068024642604154a0b473f9678627b8d3b085" Namespace="kube-system" Pod="coredns-674b8bbfcf-rx7rx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rx7rx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--rx7rx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2dffbe49-c6eb-4f7e-b045-f374fec43167", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 31, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-rx7rx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali06741bea4d3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:31:56.198253 containerd[1582]: 2025-09-09 00:31:56.159 [INFO][3953] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e30bcd8cba10b1b1f211f52069c068024642604154a0b473f9678627b8d3b085" Namespace="kube-system" Pod="coredns-674b8bbfcf-rx7rx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rx7rx-eth0" Sep 9 00:31:56.198253 containerd[1582]: 2025-09-09 00:31:56.159 [INFO][3953] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali06741bea4d3
ContainerID="e30bcd8cba10b1b1f211f52069c068024642604154a0b473f9678627b8d3b085" Namespace="kube-system" Pod="coredns-674b8bbfcf-rx7rx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rx7rx-eth0" Sep 9 00:31:56.198253 containerd[1582]: 2025-09-09 00:31:56.167 [INFO][3953] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e30bcd8cba10b1b1f211f52069c068024642604154a0b473f9678627b8d3b085" Namespace="kube-system" Pod="coredns-674b8bbfcf-rx7rx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rx7rx-eth0" Sep 9 00:31:56.198322 containerd[1582]: 2025-09-09 00:31:56.167 [INFO][3953] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e30bcd8cba10b1b1f211f52069c068024642604154a0b473f9678627b8d3b085" Namespace="kube-system" Pod="coredns-674b8bbfcf-rx7rx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rx7rx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--rx7rx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2dffbe49-c6eb-4f7e-b045-f374fec43167", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 31, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e30bcd8cba10b1b1f211f52069c068024642604154a0b473f9678627b8d3b085", Pod:"coredns-674b8bbfcf-rx7rx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali06741bea4d3", MAC:"26:1e:f0:d3:33:9a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:31:56.198322 containerd[1582]: 2025-09-09 00:31:56.192 [INFO][3953] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e30bcd8cba10b1b1f211f52069c068024642604154a0b473f9678627b8d3b085" Namespace="kube-system" Pod="coredns-674b8bbfcf-rx7rx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rx7rx-eth0" Sep 9 00:31:56.312626 systemd-networkd[1506]: cali0335eec66f6: Link UP Sep 9 00:31:56.313680 systemd-networkd[1506]: cali0335eec66f6: Gained carrier Sep 9 00:31:56.325998 containerd[1582]: 2025-09-09 00:31:56.225 [INFO][3995] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:31:56.325998 containerd[1582]: 2025-09-09 00:31:56.237 [INFO][3995] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-whisker--6557f4c94b--2gvbb-eth0 whisker-6557f4c94b- calico-system 6c240194-abba-4587-aa33-c4b69c9430a2 921 0 2025-09-09 00:31:55 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6557f4c94b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6557f4c94b-2gvbb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0335eec66f6 [] [] }} ContainerID="6314c9ff0cb363c835c97fbc44f8a00cfbd87485006ba1c528222996fbcc4e49" Namespace="calico-system" Pod="whisker-6557f4c94b-2gvbb" WorkloadEndpoint="localhost-k8s-whisker--6557f4c94b--2gvbb-" Sep 9 00:31:56.325998 containerd[1582]: 2025-09-09 00:31:56.237 [INFO][3995] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6314c9ff0cb363c835c97fbc44f8a00cfbd87485006ba1c528222996fbcc4e49" Namespace="calico-system" Pod="whisker-6557f4c94b-2gvbb" WorkloadEndpoint="localhost-k8s-whisker--6557f4c94b--2gvbb-eth0" Sep 9 00:31:56.325998 containerd[1582]: 2025-09-09 00:31:56.269 [INFO][4032] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6314c9ff0cb363c835c97fbc44f8a00cfbd87485006ba1c528222996fbcc4e49" HandleID="k8s-pod-network.6314c9ff0cb363c835c97fbc44f8a00cfbd87485006ba1c528222996fbcc4e49" Workload="localhost-k8s-whisker--6557f4c94b--2gvbb-eth0" Sep 9 00:31:56.325998 containerd[1582]: 2025-09-09 00:31:56.269 [INFO][4032] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6314c9ff0cb363c835c97fbc44f8a00cfbd87485006ba1c528222996fbcc4e49" HandleID="k8s-pod-network.6314c9ff0cb363c835c97fbc44f8a00cfbd87485006ba1c528222996fbcc4e49" Workload="localhost-k8s-whisker--6557f4c94b--2gvbb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d98f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6557f4c94b-2gvbb", "timestamp":"2025-09-09 00:31:56.26952111 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:31:56.325998 containerd[1582]: 2025-09-09 00:31:56.269 [INFO][4032] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:31:56.325998 containerd[1582]: 2025-09-09 00:31:56.269 [INFO][4032] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:31:56.325998 containerd[1582]: 2025-09-09 00:31:56.269 [INFO][4032] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:31:56.325998 containerd[1582]: 2025-09-09 00:31:56.276 [INFO][4032] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6314c9ff0cb363c835c97fbc44f8a00cfbd87485006ba1c528222996fbcc4e49" host="localhost" Sep 9 00:31:56.325998 containerd[1582]: 2025-09-09 00:31:56.282 [INFO][4032] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:31:56.325998 containerd[1582]: 2025-09-09 00:31:56.289 [INFO][4032] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:31:56.325998 containerd[1582]: 2025-09-09 00:31:56.291 [INFO][4032] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:31:56.325998 containerd[1582]: 2025-09-09 00:31:56.292 [INFO][4032] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:31:56.325998 containerd[1582]: 2025-09-09 00:31:56.292 [INFO][4032] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6314c9ff0cb363c835c97fbc44f8a00cfbd87485006ba1c528222996fbcc4e49" host="localhost" Sep 9 00:31:56.325998 containerd[1582]: 2025-09-09 00:31:56.294 [INFO][4032] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6314c9ff0cb363c835c97fbc44f8a00cfbd87485006ba1c528222996fbcc4e49 Sep 9 00:31:56.325998 containerd[1582]: 2025-09-09 00:31:56.297 [INFO][4032] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6314c9ff0cb363c835c97fbc44f8a00cfbd87485006ba1c528222996fbcc4e49" host="localhost" Sep 9 00:31:56.325998 containerd[1582]: 2025-09-09 00:31:56.304 [INFO][4032] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.6314c9ff0cb363c835c97fbc44f8a00cfbd87485006ba1c528222996fbcc4e49" host="localhost" Sep 9 00:31:56.325998 containerd[1582]: 2025-09-09 00:31:56.304 [INFO][4032] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.6314c9ff0cb363c835c97fbc44f8a00cfbd87485006ba1c528222996fbcc4e49" host="localhost" Sep 9 00:31:56.325998 containerd[1582]: 2025-09-09 00:31:56.304 [INFO][4032] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
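The IPAM exchange above has a fixed shape: take the host-wide lock, confirm this host's affinity for the 192.168.88.128/26 block, assign the next free address, release the lock. Consecutive pods draw consecutive addresses: 192.168.88.129 went to coredns-674b8bbfcf-rx7rx and 192.168.88.130 to whisker-6557f4c94b-2gvbb (the csi-node-driver pod draws .131 the same way further below). A small self-contained Go sketch of what membership in that block means; a /26 holds 64 addresses, .128 through .191:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // The block this host holds an affinity for, per the IPAM entries above.
        block := netip.MustParsePrefix("192.168.88.128/26")
        assigned := map[string]string{
            "192.168.88.129": "coredns-674b8bbfcf-rx7rx",
            "192.168.88.130": "whisker-6557f4c94b-2gvbb",
        }
        for ip, pod := range assigned {
            addr := netip.MustParseAddr(ip)
            fmt.Printf("%s (%s) in %s: %v\n", addr, pod, block, block.Contains(addr))
        }
    }

Block affinity is why the lock is host-wide rather than cluster-wide: only this node assigns out of its own /26, so the serialization seen in the log is purely local.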
Sep 9 00:31:56.325998 containerd[1582]: 2025-09-09 00:31:56.304 [INFO][4032] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="6314c9ff0cb363c835c97fbc44f8a00cfbd87485006ba1c528222996fbcc4e49" HandleID="k8s-pod-network.6314c9ff0cb363c835c97fbc44f8a00cfbd87485006ba1c528222996fbcc4e49" Workload="localhost-k8s-whisker--6557f4c94b--2gvbb-eth0" Sep 9 00:31:56.326654 containerd[1582]: 2025-09-09 00:31:56.308 [INFO][3995] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6314c9ff0cb363c835c97fbc44f8a00cfbd87485006ba1c528222996fbcc4e49" Namespace="calico-system" Pod="whisker-6557f4c94b-2gvbb" WorkloadEndpoint="localhost-k8s-whisker--6557f4c94b--2gvbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6557f4c94b--2gvbb-eth0", GenerateName:"whisker-6557f4c94b-", Namespace:"calico-system", SelfLink:"", UID:"6c240194-abba-4587-aa33-c4b69c9430a2", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 31, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6557f4c94b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6557f4c94b-2gvbb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0335eec66f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:31:56.326654 containerd[1582]: 2025-09-09 00:31:56.308 [INFO][3995] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="6314c9ff0cb363c835c97fbc44f8a00cfbd87485006ba1c528222996fbcc4e49" Namespace="calico-system" Pod="whisker-6557f4c94b-2gvbb" WorkloadEndpoint="localhost-k8s-whisker--6557f4c94b--2gvbb-eth0" Sep 9 00:31:56.326654 containerd[1582]: 2025-09-09 00:31:56.308 [INFO][3995] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0335eec66f6 ContainerID="6314c9ff0cb363c835c97fbc44f8a00cfbd87485006ba1c528222996fbcc4e49" Namespace="calico-system" Pod="whisker-6557f4c94b-2gvbb" WorkloadEndpoint="localhost-k8s-whisker--6557f4c94b--2gvbb-eth0" Sep 9 00:31:56.326654 containerd[1582]: 2025-09-09 00:31:56.313 [INFO][3995] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6314c9ff0cb363c835c97fbc44f8a00cfbd87485006ba1c528222996fbcc4e49" Namespace="calico-system" Pod="whisker-6557f4c94b-2gvbb" WorkloadEndpoint="localhost-k8s-whisker--6557f4c94b--2gvbb-eth0" Sep 9 00:31:56.326654 containerd[1582]: 2025-09-09 00:31:56.314 [INFO][3995] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6314c9ff0cb363c835c97fbc44f8a00cfbd87485006ba1c528222996fbcc4e49" Namespace="calico-system" Pod="whisker-6557f4c94b-2gvbb" WorkloadEndpoint="localhost-k8s-whisker--6557f4c94b--2gvbb-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6557f4c94b--2gvbb-eth0", GenerateName:"whisker-6557f4c94b-", Namespace:"calico-system", SelfLink:"", UID:"6c240194-abba-4587-aa33-c4b69c9430a2", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 31, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6557f4c94b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6314c9ff0cb363c835c97fbc44f8a00cfbd87485006ba1c528222996fbcc4e49", Pod:"whisker-6557f4c94b-2gvbb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0335eec66f6", MAC:"76:05:61:90:9f:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:31:56.326654 containerd[1582]: 2025-09-09 00:31:56.323 [INFO][3995] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6314c9ff0cb363c835c97fbc44f8a00cfbd87485006ba1c528222996fbcc4e49" Namespace="calico-system" Pod="whisker-6557f4c94b-2gvbb" WorkloadEndpoint="localhost-k8s-whisker--6557f4c94b--2gvbb-eth0" Sep 9 00:31:56.335738 containerd[1582]: time="2025-09-09T00:31:56.335692790Z" level=info msg="connecting to shim e30bcd8cba10b1b1f211f52069c068024642604154a0b473f9678627b8d3b085" address="unix:///run/containerd/s/2273ad14870f5123d6c1083545e8fee7481814fe2c7fc511c7126b7c7879247f" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:31:56.359971 containerd[1582]: time="2025-09-09T00:31:56.359912424Z" level=info msg="connecting to shim 6314c9ff0cb363c835c97fbc44f8a00cfbd87485006ba1c528222996fbcc4e49" address="unix:///run/containerd/s/4f6801caf1d95d8cfa06c26348c9306c0b137c57a3dfd9acdaa9fe44d0d702c5" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:31:56.368563 systemd[1]: Started cri-containerd-e30bcd8cba10b1b1f211f52069c068024642604154a0b473f9678627b8d3b085.scope - libcontainer container e30bcd8cba10b1b1f211f52069c068024642604154a0b473f9678627b8d3b085.
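The "connecting to shim" entries above show how containerd reaches each sandbox's runtime shim: a per-sandbox unix socket under /run/containerd/s/, speaking ttrpc version 3. A hedged sketch of dialing one of those sockets (the path is copied from the log; a real caller would wrap the connection in a ttrpc client, which is omitted here):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Socket path copied verbatim from the "connecting to shim" entry above.
        const addr = "/run/containerd/s/2273ad14870f5123d6c1083545e8fee7481814fe2c7fc511c7126b7c7879247f"
        conn, err := net.DialTimeout("unix", addr, time.Second)
        if err != nil {
            fmt.Println("dial failed (expected anywhere but on this host):", err)
            return
        }
        defer conn.Close()
        fmt.Println("connected; containerd speaks ttrpc v3 over this socket")
    }

The matching "Started cri-containerd-<id>.scope" systemd entries are the other half of the handshake: the shim's container processes live in a dedicated cgroup slice managed by systemd.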
Sep 9 00:31:56.382397 kubelet[2739]: I0909 00:31:56.381565 2739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d599c83-8c9e-4335-b7dd-600759b6c019" path="/var/lib/kubelet/pods/3d599c83-8c9e-4335-b7dd-600759b6c019/volumes" Sep 9 00:31:56.386218 containerd[1582]: time="2025-09-09T00:31:56.386144475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5985c77b98-2wnmf,Uid:2ea61d08-0343-4cf1-a232-a76b31169db1,Namespace:calico-system,Attempt:0,}" Sep 9 00:31:56.386545 containerd[1582]: time="2025-09-09T00:31:56.386517468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ccf697dfd-99755,Uid:0ff81a40-8102-48fe-98ba-e9315dead66d,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:31:56.396468 systemd[1]: Started cri-containerd-6314c9ff0cb363c835c97fbc44f8a00cfbd87485006ba1c528222996fbcc4e49.scope - libcontainer container 6314c9ff0cb363c835c97fbc44f8a00cfbd87485006ba1c528222996fbcc4e49. Sep 9 00:31:56.410155 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:31:56.442607 systemd-networkd[1506]: cali325e665f19e: Link UP Sep 9 00:31:56.444918 systemd-networkd[1506]: cali325e665f19e: Gained carrier Sep 9 00:31:56.446438 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:31:56.466434 containerd[1582]: time="2025-09-09T00:31:56.466024084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rx7rx,Uid:2dffbe49-c6eb-4f7e-b045-f374fec43167,Namespace:kube-system,Attempt:0,} returns sandbox id \"e30bcd8cba10b1b1f211f52069c068024642604154a0b473f9678627b8d3b085\"" Sep 9 00:31:56.467021 kubelet[2739]: E0909 00:31:56.466973 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:56.482323 containerd[1582]: time="2025-09-09T00:31:56.482280180Z" level=info msg="CreateContainer within sandbox \"e30bcd8cba10b1b1f211f52069c068024642604154a0b473f9678627b8d3b085\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:31:56.485853 containerd[1582]: 2025-09-09 00:31:56.224 [INFO][3996] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:31:56.485853 containerd[1582]: 2025-09-09 00:31:56.235 [INFO][3996] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--5ll8l-eth0 csi-node-driver- calico-system fd2d00d8-c926-49b1-9a33-424da0e8137a 726 0 2025-09-09 00:31:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-5ll8l eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali325e665f19e [] [] }} ContainerID="189a5841a1be5786297720d03c04912561e544a75899c3389f02ac322545bc84" Namespace="calico-system" Pod="csi-node-driver-5ll8l" WorkloadEndpoint="localhost-k8s-csi--node--driver--5ll8l-" Sep 9 00:31:56.485853 containerd[1582]: 2025-09-09 00:31:56.235 [INFO][3996] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="189a5841a1be5786297720d03c04912561e544a75899c3389f02ac322545bc84" 
Namespace="calico-system" Pod="csi-node-driver-5ll8l" WorkloadEndpoint="localhost-k8s-csi--node--driver--5ll8l-eth0" Sep 9 00:31:56.485853 containerd[1582]: 2025-09-09 00:31:56.270 [INFO][4026] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="189a5841a1be5786297720d03c04912561e544a75899c3389f02ac322545bc84" HandleID="k8s-pod-network.189a5841a1be5786297720d03c04912561e544a75899c3389f02ac322545bc84" Workload="localhost-k8s-csi--node--driver--5ll8l-eth0" Sep 9 00:31:56.485853 containerd[1582]: 2025-09-09 00:31:56.271 [INFO][4026] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="189a5841a1be5786297720d03c04912561e544a75899c3389f02ac322545bc84" HandleID="k8s-pod-network.189a5841a1be5786297720d03c04912561e544a75899c3389f02ac322545bc84" Workload="localhost-k8s-csi--node--driver--5ll8l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-5ll8l", "timestamp":"2025-09-09 00:31:56.270791376 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:31:56.485853 containerd[1582]: 2025-09-09 00:31:56.271 [INFO][4026] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:31:56.485853 containerd[1582]: 2025-09-09 00:31:56.304 [INFO][4026] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:31:56.485853 containerd[1582]: 2025-09-09 00:31:56.304 [INFO][4026] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:31:56.485853 containerd[1582]: 2025-09-09 00:31:56.379 [INFO][4026] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.189a5841a1be5786297720d03c04912561e544a75899c3389f02ac322545bc84" host="localhost" Sep 9 00:31:56.485853 containerd[1582]: 2025-09-09 00:31:56.387 [INFO][4026] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:31:56.485853 containerd[1582]: 2025-09-09 00:31:56.396 [INFO][4026] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:31:56.485853 containerd[1582]: 2025-09-09 00:31:56.399 [INFO][4026] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:31:56.485853 containerd[1582]: 2025-09-09 00:31:56.403 [INFO][4026] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:31:56.485853 containerd[1582]: 2025-09-09 00:31:56.403 [INFO][4026] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.189a5841a1be5786297720d03c04912561e544a75899c3389f02ac322545bc84" host="localhost" Sep 9 00:31:56.485853 containerd[1582]: 2025-09-09 00:31:56.406 [INFO][4026] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.189a5841a1be5786297720d03c04912561e544a75899c3389f02ac322545bc84 Sep 9 00:31:56.485853 containerd[1582]: 2025-09-09 00:31:56.415 [INFO][4026] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.189a5841a1be5786297720d03c04912561e544a75899c3389f02ac322545bc84" host="localhost" Sep 9 00:31:56.485853 containerd[1582]: 2025-09-09 00:31:56.425 [INFO][4026] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.189a5841a1be5786297720d03c04912561e544a75899c3389f02ac322545bc84" host="localhost" Sep 9 00:31:56.485853 containerd[1582]: 2025-09-09 00:31:56.425 [INFO][4026] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.189a5841a1be5786297720d03c04912561e544a75899c3389f02ac322545bc84" host="localhost" Sep 9 00:31:56.485853 containerd[1582]: 2025-09-09 00:31:56.425 [INFO][4026] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:31:56.485853 containerd[1582]: 2025-09-09 00:31:56.425 [INFO][4026] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="189a5841a1be5786297720d03c04912561e544a75899c3389f02ac322545bc84" HandleID="k8s-pod-network.189a5841a1be5786297720d03c04912561e544a75899c3389f02ac322545bc84" Workload="localhost-k8s-csi--node--driver--5ll8l-eth0" Sep 9 00:31:56.486538 containerd[1582]: 2025-09-09 00:31:56.435 [INFO][3996] cni-plugin/k8s.go 418: Populated endpoint ContainerID="189a5841a1be5786297720d03c04912561e544a75899c3389f02ac322545bc84" Namespace="calico-system" Pod="csi-node-driver-5ll8l" WorkloadEndpoint="localhost-k8s-csi--node--driver--5ll8l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5ll8l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fd2d00d8-c926-49b1-9a33-424da0e8137a", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 31, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-5ll8l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali325e665f19e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:31:56.486538 containerd[1582]: 2025-09-09 00:31:56.435 [INFO][3996] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="189a5841a1be5786297720d03c04912561e544a75899c3389f02ac322545bc84" Namespace="calico-system" Pod="csi-node-driver-5ll8l" WorkloadEndpoint="localhost-k8s-csi--node--driver--5ll8l-eth0" Sep 9 00:31:56.486538 containerd[1582]: 2025-09-09 00:31:56.436 [INFO][3996] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali325e665f19e ContainerID="189a5841a1be5786297720d03c04912561e544a75899c3389f02ac322545bc84" Namespace="calico-system" Pod="csi-node-driver-5ll8l" WorkloadEndpoint="localhost-k8s-csi--node--driver--5ll8l-eth0" Sep 9 00:31:56.486538 containerd[1582]: 2025-09-09 00:31:56.445 [INFO][3996] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="189a5841a1be5786297720d03c04912561e544a75899c3389f02ac322545bc84" Namespace="calico-system" Pod="csi-node-driver-5ll8l" WorkloadEndpoint="localhost-k8s-csi--node--driver--5ll8l-eth0" Sep 9 00:31:56.486538 containerd[1582]: 2025-09-09 00:31:56.448 [INFO][3996] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="189a5841a1be5786297720d03c04912561e544a75899c3389f02ac322545bc84" Namespace="calico-system" Pod="csi-node-driver-5ll8l" WorkloadEndpoint="localhost-k8s-csi--node--driver--5ll8l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5ll8l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fd2d00d8-c926-49b1-9a33-424da0e8137a", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 31, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"189a5841a1be5786297720d03c04912561e544a75899c3389f02ac322545bc84", Pod:"csi-node-driver-5ll8l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali325e665f19e", MAC:"7e:1e:08:d6:67:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:31:56.486538 containerd[1582]: 2025-09-09 00:31:56.477 [INFO][3996] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="189a5841a1be5786297720d03c04912561e544a75899c3389f02ac322545bc84" Namespace="calico-system" Pod="csi-node-driver-5ll8l" WorkloadEndpoint="localhost-k8s-csi--node--driver--5ll8l-eth0" Sep 9 00:31:56.688721 containerd[1582]: time="2025-09-09T00:31:56.688659265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6557f4c94b-2gvbb,Uid:6c240194-abba-4587-aa33-c4b69c9430a2,Namespace:calico-system,Attempt:0,} returns sandbox id \"6314c9ff0cb363c835c97fbc44f8a00cfbd87485006ba1c528222996fbcc4e49\"" Sep 9 00:31:56.690873 containerd[1582]: time="2025-09-09T00:31:56.690845298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 9 00:31:56.721857 systemd-networkd[1506]: cali52f83816fb3: Link UP Sep 9 00:31:56.722713 systemd-networkd[1506]: cali52f83816fb3: Gained carrier Sep 9 00:31:56.943806 containerd[1582]: time="2025-09-09T00:31:56.943556081Z" level=info msg="Container d4c52fcf6c0eabc8ddcb6f5fa8ac3f768a03eeac90f23192585537ef9b6d3c37: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:31:57.096876 containerd[1582]: 2025-09-09 00:31:56.439 [INFO][4144] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:31:57.096876 containerd[1582]: 2025-09-09 00:31:56.459 [INFO][4144] cni-plugin/plugin.go 340: Calico CNI found 
existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6ccf697dfd--99755-eth0 calico-apiserver-6ccf697dfd- calico-apiserver 0ff81a40-8102-48fe-98ba-e9315dead66d 848 0 2025-09-09 00:31:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6ccf697dfd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6ccf697dfd-99755 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali52f83816fb3 [] [] }} ContainerID="a73d2efeec50af47df8e00aeca28a735ac762770381dba160f333204fdd4a601" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf697dfd-99755" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ccf697dfd--99755-" Sep 9 00:31:57.096876 containerd[1582]: 2025-09-09 00:31:56.459 [INFO][4144] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a73d2efeec50af47df8e00aeca28a735ac762770381dba160f333204fdd4a601" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf697dfd-99755" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ccf697dfd--99755-eth0" Sep 9 00:31:57.096876 containerd[1582]: 2025-09-09 00:31:56.502 [INFO][4167] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a73d2efeec50af47df8e00aeca28a735ac762770381dba160f333204fdd4a601" HandleID="k8s-pod-network.a73d2efeec50af47df8e00aeca28a735ac762770381dba160f333204fdd4a601" Workload="localhost-k8s-calico--apiserver--6ccf697dfd--99755-eth0" Sep 9 00:31:57.096876 containerd[1582]: 2025-09-09 00:31:56.502 [INFO][4167] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a73d2efeec50af47df8e00aeca28a735ac762770381dba160f333204fdd4a601" HandleID="k8s-pod-network.a73d2efeec50af47df8e00aeca28a735ac762770381dba160f333204fdd4a601" Workload="localhost-k8s-calico--apiserver--6ccf697dfd--99755-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001395f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6ccf697dfd-99755", "timestamp":"2025-09-09 00:31:56.502283218 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:31:57.096876 containerd[1582]: 2025-09-09 00:31:56.502 [INFO][4167] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:31:57.096876 containerd[1582]: 2025-09-09 00:31:56.502 [INFO][4167] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:31:57.096876 containerd[1582]: 2025-09-09 00:31:56.502 [INFO][4167] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:31:57.096876 containerd[1582]: 2025-09-09 00:31:56.509 [INFO][4167] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a73d2efeec50af47df8e00aeca28a735ac762770381dba160f333204fdd4a601" host="localhost" Sep 9 00:31:57.096876 containerd[1582]: 2025-09-09 00:31:56.512 [INFO][4167] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:31:57.096876 containerd[1582]: 2025-09-09 00:31:56.516 [INFO][4167] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:31:57.096876 containerd[1582]: 2025-09-09 00:31:56.519 [INFO][4167] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:31:57.096876 containerd[1582]: 2025-09-09 00:31:56.521 [INFO][4167] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:31:57.096876 containerd[1582]: 2025-09-09 00:31:56.521 [INFO][4167] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a73d2efeec50af47df8e00aeca28a735ac762770381dba160f333204fdd4a601" host="localhost" Sep 9 00:31:57.096876 containerd[1582]: 2025-09-09 00:31:56.522 [INFO][4167] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a73d2efeec50af47df8e00aeca28a735ac762770381dba160f333204fdd4a601 Sep 9 00:31:57.096876 containerd[1582]: 2025-09-09 00:31:56.647 [INFO][4167] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a73d2efeec50af47df8e00aeca28a735ac762770381dba160f333204fdd4a601" host="localhost" Sep 9 00:31:57.096876 containerd[1582]: 2025-09-09 00:31:56.715 [INFO][4167] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a73d2efeec50af47df8e00aeca28a735ac762770381dba160f333204fdd4a601" host="localhost" Sep 9 00:31:57.096876 containerd[1582]: 2025-09-09 00:31:56.715 [INFO][4167] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a73d2efeec50af47df8e00aeca28a735ac762770381dba160f333204fdd4a601" host="localhost" Sep 9 00:31:57.096876 containerd[1582]: 2025-09-09 00:31:56.715 [INFO][4167] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:31:57.096876 containerd[1582]: 2025-09-09 00:31:56.715 [INFO][4167] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a73d2efeec50af47df8e00aeca28a735ac762770381dba160f333204fdd4a601" HandleID="k8s-pod-network.a73d2efeec50af47df8e00aeca28a735ac762770381dba160f333204fdd4a601" Workload="localhost-k8s-calico--apiserver--6ccf697dfd--99755-eth0" Sep 9 00:31:57.097535 containerd[1582]: 2025-09-09 00:31:56.719 [INFO][4144] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a73d2efeec50af47df8e00aeca28a735ac762770381dba160f333204fdd4a601" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf697dfd-99755" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ccf697dfd--99755-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6ccf697dfd--99755-eth0", GenerateName:"calico-apiserver-6ccf697dfd-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ff81a40-8102-48fe-98ba-e9315dead66d", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 31, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ccf697dfd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6ccf697dfd-99755", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali52f83816fb3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:31:57.097535 containerd[1582]: 2025-09-09 00:31:56.719 [INFO][4144] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a73d2efeec50af47df8e00aeca28a735ac762770381dba160f333204fdd4a601" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf697dfd-99755" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ccf697dfd--99755-eth0" Sep 9 00:31:57.097535 containerd[1582]: 2025-09-09 00:31:56.719 [INFO][4144] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali52f83816fb3 ContainerID="a73d2efeec50af47df8e00aeca28a735ac762770381dba160f333204fdd4a601" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf697dfd-99755" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ccf697dfd--99755-eth0" Sep 9 00:31:57.097535 containerd[1582]: 2025-09-09 00:31:56.721 [INFO][4144] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a73d2efeec50af47df8e00aeca28a735ac762770381dba160f333204fdd4a601" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf697dfd-99755" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ccf697dfd--99755-eth0" Sep 9 00:31:57.097535 containerd[1582]: 2025-09-09 00:31:56.724 [INFO][4144] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a73d2efeec50af47df8e00aeca28a735ac762770381dba160f333204fdd4a601" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf697dfd-99755" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ccf697dfd--99755-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6ccf697dfd--99755-eth0", GenerateName:"calico-apiserver-6ccf697dfd-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ff81a40-8102-48fe-98ba-e9315dead66d", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 31, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ccf697dfd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a73d2efeec50af47df8e00aeca28a735ac762770381dba160f333204fdd4a601", Pod:"calico-apiserver-6ccf697dfd-99755", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali52f83816fb3", MAC:"56:0b:2a:2d:7b:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:31:57.097535 containerd[1582]: 2025-09-09 00:31:57.094 [INFO][4144] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a73d2efeec50af47df8e00aeca28a735ac762770381dba160f333204fdd4a601" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf697dfd-99755" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ccf697dfd--99755-eth0" Sep 9 00:31:57.358263 systemd-networkd[1506]: cali6c2f18e8926: Link UP Sep 9 00:31:57.360084 systemd-networkd[1506]: cali6c2f18e8926: Gained carrier Sep 9 00:31:57.374552 kubelet[2739]: E0909 00:31:57.374506 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:57.375035 containerd[1582]: time="2025-09-09T00:31:57.374882199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-dndfz,Uid:5a95b016-00c9-4c01-bbca-50da884e7b1c,Namespace:calico-system,Attempt:0,}" Sep 9 00:31:57.375035 containerd[1582]: time="2025-09-09T00:31:57.374883812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lmmvz,Uid:10bfdd1d-9f60-4b7d-87e4-fd70f1d75d76,Namespace:kube-system,Attempt:0,}" Sep 9 00:31:57.442697 containerd[1582]: 2025-09-09 00:31:56.465 [INFO][4119] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:31:57.442697 containerd[1582]: 2025-09-09 00:31:56.486 [INFO][4119] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5985c77b98--2wnmf-eth0 calico-kube-controllers-5985c77b98- calico-system 2ea61d08-0343-4cf1-a232-a76b31169db1 849 0 2025-09-09 00:31:34 +0000 UTC 
map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5985c77b98 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5985c77b98-2wnmf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6c2f18e8926 [] [] }} ContainerID="eb4a5a814a59cc4852c2d53d517b1e6c93dab6c2e2f5dc73997ff0a0995bdafc" Namespace="calico-system" Pod="calico-kube-controllers-5985c77b98-2wnmf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5985c77b98--2wnmf-" Sep 9 00:31:57.442697 containerd[1582]: 2025-09-09 00:31:56.486 [INFO][4119] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb4a5a814a59cc4852c2d53d517b1e6c93dab6c2e2f5dc73997ff0a0995bdafc" Namespace="calico-system" Pod="calico-kube-controllers-5985c77b98-2wnmf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5985c77b98--2wnmf-eth0" Sep 9 00:31:57.442697 containerd[1582]: 2025-09-09 00:31:56.524 [INFO][4186] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb4a5a814a59cc4852c2d53d517b1e6c93dab6c2e2f5dc73997ff0a0995bdafc" HandleID="k8s-pod-network.eb4a5a814a59cc4852c2d53d517b1e6c93dab6c2e2f5dc73997ff0a0995bdafc" Workload="localhost-k8s-calico--kube--controllers--5985c77b98--2wnmf-eth0" Sep 9 00:31:57.442697 containerd[1582]: 2025-09-09 00:31:56.524 [INFO][4186] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eb4a5a814a59cc4852c2d53d517b1e6c93dab6c2e2f5dc73997ff0a0995bdafc" HandleID="k8s-pod-network.eb4a5a814a59cc4852c2d53d517b1e6c93dab6c2e2f5dc73997ff0a0995bdafc" Workload="localhost-k8s-calico--kube--controllers--5985c77b98--2wnmf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139780), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5985c77b98-2wnmf", "timestamp":"2025-09-09 00:31:56.524539261 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:31:57.442697 containerd[1582]: 2025-09-09 00:31:56.524 [INFO][4186] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:31:57.442697 containerd[1582]: 2025-09-09 00:31:56.715 [INFO][4186] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:31:57.442697 containerd[1582]: 2025-09-09 00:31:56.715 [INFO][4186] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:31:57.442697 containerd[1582]: 2025-09-09 00:31:57.076 [INFO][4186] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eb4a5a814a59cc4852c2d53d517b1e6c93dab6c2e2f5dc73997ff0a0995bdafc" host="localhost" Sep 9 00:31:57.442697 containerd[1582]: 2025-09-09 00:31:57.112 [INFO][4186] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:31:57.442697 containerd[1582]: 2025-09-09 00:31:57.118 [INFO][4186] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:31:57.442697 containerd[1582]: 2025-09-09 00:31:57.120 [INFO][4186] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:31:57.442697 containerd[1582]: 2025-09-09 00:31:57.123 [INFO][4186] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:31:57.442697 containerd[1582]: 2025-09-09 00:31:57.123 [INFO][4186] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eb4a5a814a59cc4852c2d53d517b1e6c93dab6c2e2f5dc73997ff0a0995bdafc" host="localhost" Sep 9 00:31:57.442697 containerd[1582]: 2025-09-09 00:31:57.125 [INFO][4186] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.eb4a5a814a59cc4852c2d53d517b1e6c93dab6c2e2f5dc73997ff0a0995bdafc Sep 9 00:31:57.442697 containerd[1582]: 2025-09-09 00:31:57.146 [INFO][4186] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eb4a5a814a59cc4852c2d53d517b1e6c93dab6c2e2f5dc73997ff0a0995bdafc" host="localhost" Sep 9 00:31:57.442697 containerd[1582]: 2025-09-09 00:31:57.351 [INFO][4186] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.eb4a5a814a59cc4852c2d53d517b1e6c93dab6c2e2f5dc73997ff0a0995bdafc" host="localhost" Sep 9 00:31:57.442697 containerd[1582]: 2025-09-09 00:31:57.351 [INFO][4186] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.eb4a5a814a59cc4852c2d53d517b1e6c93dab6c2e2f5dc73997ff0a0995bdafc" host="localhost" Sep 9 00:31:57.442697 containerd[1582]: 2025-09-09 00:31:57.351 [INFO][4186] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:31:57.442697 containerd[1582]: 2025-09-09 00:31:57.351 [INFO][4186] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="eb4a5a814a59cc4852c2d53d517b1e6c93dab6c2e2f5dc73997ff0a0995bdafc" HandleID="k8s-pod-network.eb4a5a814a59cc4852c2d53d517b1e6c93dab6c2e2f5dc73997ff0a0995bdafc" Workload="localhost-k8s-calico--kube--controllers--5985c77b98--2wnmf-eth0" Sep 9 00:31:57.443352 containerd[1582]: 2025-09-09 00:31:57.354 [INFO][4119] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb4a5a814a59cc4852c2d53d517b1e6c93dab6c2e2f5dc73997ff0a0995bdafc" Namespace="calico-system" Pod="calico-kube-controllers-5985c77b98-2wnmf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5985c77b98--2wnmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5985c77b98--2wnmf-eth0", GenerateName:"calico-kube-controllers-5985c77b98-", Namespace:"calico-system", SelfLink:"", UID:"2ea61d08-0343-4cf1-a232-a76b31169db1", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 31, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5985c77b98", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5985c77b98-2wnmf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6c2f18e8926", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:31:57.443352 containerd[1582]: 2025-09-09 00:31:57.354 [INFO][4119] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="eb4a5a814a59cc4852c2d53d517b1e6c93dab6c2e2f5dc73997ff0a0995bdafc" Namespace="calico-system" Pod="calico-kube-controllers-5985c77b98-2wnmf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5985c77b98--2wnmf-eth0" Sep 9 00:31:57.443352 containerd[1582]: 2025-09-09 00:31:57.354 [INFO][4119] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c2f18e8926 ContainerID="eb4a5a814a59cc4852c2d53d517b1e6c93dab6c2e2f5dc73997ff0a0995bdafc" Namespace="calico-system" Pod="calico-kube-controllers-5985c77b98-2wnmf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5985c77b98--2wnmf-eth0" Sep 9 00:31:57.443352 containerd[1582]: 2025-09-09 00:31:57.360 [INFO][4119] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb4a5a814a59cc4852c2d53d517b1e6c93dab6c2e2f5dc73997ff0a0995bdafc" Namespace="calico-system" Pod="calico-kube-controllers-5985c77b98-2wnmf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5985c77b98--2wnmf-eth0" Sep 9 00:31:57.443352 containerd[1582]: 2025-09-09 00:31:57.361 [INFO][4119] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="eb4a5a814a59cc4852c2d53d517b1e6c93dab6c2e2f5dc73997ff0a0995bdafc" Namespace="calico-system" Pod="calico-kube-controllers-5985c77b98-2wnmf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5985c77b98--2wnmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5985c77b98--2wnmf-eth0", GenerateName:"calico-kube-controllers-5985c77b98-", Namespace:"calico-system", SelfLink:"", UID:"2ea61d08-0343-4cf1-a232-a76b31169db1", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 31, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5985c77b98", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb4a5a814a59cc4852c2d53d517b1e6c93dab6c2e2f5dc73997ff0a0995bdafc", Pod:"calico-kube-controllers-5985c77b98-2wnmf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6c2f18e8926", MAC:"ca:47:cf:d7:55:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:31:57.443352 containerd[1582]: 2025-09-09 00:31:57.438 [INFO][4119] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eb4a5a814a59cc4852c2d53d517b1e6c93dab6c2e2f5dc73997ff0a0995bdafc" Namespace="calico-system" Pod="calico-kube-controllers-5985c77b98-2wnmf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5985c77b98--2wnmf-eth0" Sep 9 00:31:57.573657 containerd[1582]: time="2025-09-09T00:31:57.573611922Z" level=info msg="CreateContainer within sandbox \"e30bcd8cba10b1b1f211f52069c068024642604154a0b473f9678627b8d3b085\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d4c52fcf6c0eabc8ddcb6f5fa8ac3f768a03eeac90f23192585537ef9b6d3c37\"" Sep 9 00:31:57.574294 containerd[1582]: time="2025-09-09T00:31:57.574260571Z" level=info msg="StartContainer for \"d4c52fcf6c0eabc8ddcb6f5fa8ac3f768a03eeac90f23192585537ef9b6d3c37\"" Sep 9 00:31:57.575268 containerd[1582]: time="2025-09-09T00:31:57.575228378Z" level=info msg="connecting to shim d4c52fcf6c0eabc8ddcb6f5fa8ac3f768a03eeac90f23192585537ef9b6d3c37" address="unix:///run/containerd/s/2273ad14870f5123d6c1083545e8fee7481814fe2c7fc511c7126b7c7879247f" protocol=ttrpc version=3 Sep 9 00:31:57.602637 systemd[1]: Started cri-containerd-d4c52fcf6c0eabc8ddcb6f5fa8ac3f768a03eeac90f23192585537ef9b6d3c37.scope - libcontainer container d4c52fcf6c0eabc8ddcb6f5fa8ac3f768a03eeac90f23192585537ef9b6d3c37. 
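[Note] The RunPodSandbox → CreateContainer → StartContainer sequence above is kubelet driving containerd through the CRI gRPC API: the sandbox id returned by RunPodSandbox (e30bcd8c…) becomes the parent for the coredns container (d4c52fcf…), which is then started. A hedged sketch of querying that same API directly follows; the socket path is assumed to be containerd's default CRI endpoint, and field names are per the cri-api v1 package.

package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd's built-in CRI plugin listens on the main containerd socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// List pod sandboxes -- the same objects the RunPodSandbox entries above
	// report as "returns sandbox id".
	pods, err := rt.ListPodSandbox(context.Background(), &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("sandbox %s -> pod %s/%s\n",
			p.Id, p.Metadata.Namespace, p.Metadata.Name)
	}
}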
Sep 9 00:31:57.641746 containerd[1582]: time="2025-09-09T00:31:57.641594414Z" level=info msg="connecting to shim 189a5841a1be5786297720d03c04912561e544a75899c3389f02ac322545bc84" address="unix:///run/containerd/s/aaa21026df4e31a026a19a09755431c8904eebbcce9a7cfea449ab16d8216116" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:31:57.652043 containerd[1582]: time="2025-09-09T00:31:57.651970867Z" level=info msg="connecting to shim a73d2efeec50af47df8e00aeca28a735ac762770381dba160f333204fdd4a601" address="unix:///run/containerd/s/e0b1496a0b64f516c44480817e8933dbcf1fe2e803f59aea6b6cf166e05debd1" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:31:57.687993 containerd[1582]: time="2025-09-09T00:31:57.687768169Z" level=info msg="connecting to shim eb4a5a814a59cc4852c2d53d517b1e6c93dab6c2e2f5dc73997ff0a0995bdafc" address="unix:///run/containerd/s/6002e98e9af04e450a330fa9fe26fbd293c9536a26f99a171c1404a25a99bb6b" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:31:57.694493 systemd-networkd[1506]: cali06741bea4d3: Gained IPv6LL Sep 9 00:31:57.708722 systemd[1]: Started cri-containerd-a73d2efeec50af47df8e00aeca28a735ac762770381dba160f333204fdd4a601.scope - libcontainer container a73d2efeec50af47df8e00aeca28a735ac762770381dba160f333204fdd4a601. Sep 9 00:31:57.713042 containerd[1582]: time="2025-09-09T00:31:57.713003870Z" level=info msg="StartContainer for \"d4c52fcf6c0eabc8ddcb6f5fa8ac3f768a03eeac90f23192585537ef9b6d3c37\" returns successfully" Sep 9 00:31:57.717747 systemd[1]: Started cri-containerd-189a5841a1be5786297720d03c04912561e544a75899c3389f02ac322545bc84.scope - libcontainer container 189a5841a1be5786297720d03c04912561e544a75899c3389f02ac322545bc84. Sep 9 00:31:57.739256 systemd[1]: Started cri-containerd-eb4a5a814a59cc4852c2d53d517b1e6c93dab6c2e2f5dc73997ff0a0995bdafc.scope - libcontainer container eb4a5a814a59cc4852c2d53d517b1e6c93dab6c2e2f5dc73997ff0a0995bdafc. 
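[Note] Each "systemd-networkd[1506]: caliXXX: Link UP" / "Gained carrier" pair in this log marks the host side of a pod's veth coming up after Calico moves the peer end into the pod's network namespace. For illustration only, the operational state of one of these interfaces (the name is taken from the log; the netlink library choice is an assumption) can be read like so:

package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// Host-side veth created for the csi-node-driver pod in the traces above.
	link, err := netlink.LinkByName("cali325e665f19e")
	if err != nil {
		log.Fatal(err)
	}
	attrs := link.Attrs()
	// OperState should read "up" once networkd has logged "Gained carrier".
	fmt.Printf("%s: type=%s state=%s mtu=%d mac=%s\n",
		attrs.Name, link.Type(), attrs.OperState, attrs.MTU, attrs.HardwareAddr)
}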
Sep 9 00:31:57.753304 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:31:57.788671 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:31:57.812418 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:31:57.818565 containerd[1582]: time="2025-09-09T00:31:57.818523518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5ll8l,Uid:fd2d00d8-c926-49b1-9a33-424da0e8137a,Namespace:calico-system,Attempt:0,} returns sandbox id \"189a5841a1be5786297720d03c04912561e544a75899c3389f02ac322545bc84\"" Sep 9 00:31:57.849309 systemd-networkd[1506]: cali5522355929e: Link UP Sep 9 00:31:57.852177 systemd-networkd[1506]: cali5522355929e: Gained carrier Sep 9 00:31:57.881647 containerd[1582]: 2025-09-09 00:31:57.644 [INFO][4249] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:31:57.881647 containerd[1582]: 2025-09-09 00:31:57.672 [INFO][4249] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--dndfz-eth0 goldmane-54d579b49d- calico-system 5a95b016-00c9-4c01-bbca-50da884e7b1c 847 0 2025-09-09 00:31:33 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-dndfz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5522355929e [] [] }} ContainerID="141a15ca7c8b140a04c8445a9754377106045e6e3eb6dfc4f87de652148e2112" Namespace="calico-system" Pod="goldmane-54d579b49d-dndfz" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--dndfz-" Sep 9 00:31:57.881647 containerd[1582]: 2025-09-09 00:31:57.672 [INFO][4249] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="141a15ca7c8b140a04c8445a9754377106045e6e3eb6dfc4f87de652148e2112" Namespace="calico-system" Pod="goldmane-54d579b49d-dndfz" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--dndfz-eth0" Sep 9 00:31:57.881647 containerd[1582]: 2025-09-09 00:31:57.750 [INFO][4391] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="141a15ca7c8b140a04c8445a9754377106045e6e3eb6dfc4f87de652148e2112" HandleID="k8s-pod-network.141a15ca7c8b140a04c8445a9754377106045e6e3eb6dfc4f87de652148e2112" Workload="localhost-k8s-goldmane--54d579b49d--dndfz-eth0" Sep 9 00:31:57.881647 containerd[1582]: 2025-09-09 00:31:57.750 [INFO][4391] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="141a15ca7c8b140a04c8445a9754377106045e6e3eb6dfc4f87de652148e2112" HandleID="k8s-pod-network.141a15ca7c8b140a04c8445a9754377106045e6e3eb6dfc4f87de652148e2112" Workload="localhost-k8s-goldmane--54d579b49d--dndfz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fae0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-dndfz", "timestamp":"2025-09-09 00:31:57.750503945 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:31:57.881647 containerd[1582]: 2025-09-09 00:31:57.750 [INFO][4391] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:31:57.881647 containerd[1582]: 2025-09-09 00:31:57.751 [INFO][4391] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:31:57.881647 containerd[1582]: 2025-09-09 00:31:57.751 [INFO][4391] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:31:57.881647 containerd[1582]: 2025-09-09 00:31:57.764 [INFO][4391] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.141a15ca7c8b140a04c8445a9754377106045e6e3eb6dfc4f87de652148e2112" host="localhost" Sep 9 00:31:57.881647 containerd[1582]: 2025-09-09 00:31:57.786 [INFO][4391] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:31:57.881647 containerd[1582]: 2025-09-09 00:31:57.801 [INFO][4391] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:31:57.881647 containerd[1582]: 2025-09-09 00:31:57.812 [INFO][4391] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:31:57.881647 containerd[1582]: 2025-09-09 00:31:57.821 [INFO][4391] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:31:57.881647 containerd[1582]: 2025-09-09 00:31:57.822 [INFO][4391] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.141a15ca7c8b140a04c8445a9754377106045e6e3eb6dfc4f87de652148e2112" host="localhost" Sep 9 00:31:57.881647 containerd[1582]: 2025-09-09 00:31:57.823 [INFO][4391] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.141a15ca7c8b140a04c8445a9754377106045e6e3eb6dfc4f87de652148e2112 Sep 9 00:31:57.881647 containerd[1582]: 2025-09-09 00:31:57.827 [INFO][4391] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.141a15ca7c8b140a04c8445a9754377106045e6e3eb6dfc4f87de652148e2112" host="localhost" Sep 9 00:31:57.881647 containerd[1582]: 2025-09-09 00:31:57.836 [INFO][4391] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.141a15ca7c8b140a04c8445a9754377106045e6e3eb6dfc4f87de652148e2112" host="localhost" Sep 9 00:31:57.881647 containerd[1582]: 2025-09-09 00:31:57.837 [INFO][4391] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.141a15ca7c8b140a04c8445a9754377106045e6e3eb6dfc4f87de652148e2112" host="localhost" Sep 9 00:31:57.881647 containerd[1582]: 2025-09-09 00:31:57.837 [INFO][4391] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:31:57.881647 containerd[1582]: 2025-09-09 00:31:57.837 [INFO][4391] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="141a15ca7c8b140a04c8445a9754377106045e6e3eb6dfc4f87de652148e2112" HandleID="k8s-pod-network.141a15ca7c8b140a04c8445a9754377106045e6e3eb6dfc4f87de652148e2112" Workload="localhost-k8s-goldmane--54d579b49d--dndfz-eth0" Sep 9 00:31:57.883158 containerd[1582]: 2025-09-09 00:31:57.840 [INFO][4249] cni-plugin/k8s.go 418: Populated endpoint ContainerID="141a15ca7c8b140a04c8445a9754377106045e6e3eb6dfc4f87de652148e2112" Namespace="calico-system" Pod="goldmane-54d579b49d-dndfz" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--dndfz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--dndfz-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"5a95b016-00c9-4c01-bbca-50da884e7b1c", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 31, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-dndfz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5522355929e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:31:57.883158 containerd[1582]: 2025-09-09 00:31:57.840 [INFO][4249] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="141a15ca7c8b140a04c8445a9754377106045e6e3eb6dfc4f87de652148e2112" Namespace="calico-system" Pod="goldmane-54d579b49d-dndfz" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--dndfz-eth0" Sep 9 00:31:57.883158 containerd[1582]: 2025-09-09 00:31:57.840 [INFO][4249] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5522355929e ContainerID="141a15ca7c8b140a04c8445a9754377106045e6e3eb6dfc4f87de652148e2112" Namespace="calico-system" Pod="goldmane-54d579b49d-dndfz" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--dndfz-eth0" Sep 9 00:31:57.883158 containerd[1582]: 2025-09-09 00:31:57.853 [INFO][4249] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="141a15ca7c8b140a04c8445a9754377106045e6e3eb6dfc4f87de652148e2112" Namespace="calico-system" Pod="goldmane-54d579b49d-dndfz" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--dndfz-eth0" Sep 9 00:31:57.883158 containerd[1582]: 2025-09-09 00:31:57.855 [INFO][4249] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="141a15ca7c8b140a04c8445a9754377106045e6e3eb6dfc4f87de652148e2112" Namespace="calico-system" Pod="goldmane-54d579b49d-dndfz" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--dndfz-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--dndfz-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"5a95b016-00c9-4c01-bbca-50da884e7b1c", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 31, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"141a15ca7c8b140a04c8445a9754377106045e6e3eb6dfc4f87de652148e2112", Pod:"goldmane-54d579b49d-dndfz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5522355929e", MAC:"e6:59:6a:b5:ed:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:31:57.883158 containerd[1582]: 2025-09-09 00:31:57.872 [INFO][4249] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="141a15ca7c8b140a04c8445a9754377106045e6e3eb6dfc4f87de652148e2112" Namespace="calico-system" Pod="goldmane-54d579b49d-dndfz" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--dndfz-eth0" Sep 9 00:31:57.904522 containerd[1582]: time="2025-09-09T00:31:57.904142552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ccf697dfd-99755,Uid:0ff81a40-8102-48fe-98ba-e9315dead66d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a73d2efeec50af47df8e00aeca28a735ac762770381dba160f333204fdd4a601\"" Sep 9 00:31:57.915757 containerd[1582]: time="2025-09-09T00:31:57.915696329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5985c77b98-2wnmf,Uid:2ea61d08-0343-4cf1-a232-a76b31169db1,Namespace:calico-system,Attempt:0,} returns sandbox id \"eb4a5a814a59cc4852c2d53d517b1e6c93dab6c2e2f5dc73997ff0a0995bdafc\"" Sep 9 00:31:57.927370 containerd[1582]: time="2025-09-09T00:31:57.926657215Z" level=info msg="connecting to shim 141a15ca7c8b140a04c8445a9754377106045e6e3eb6dfc4f87de652148e2112" address="unix:///run/containerd/s/57fd14d5eb01095219b3cf5dbed79e3cd8b6e7972978cb2f58292ce3c9402284" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:31:57.932544 systemd-networkd[1506]: cali363c897f106: Link UP Sep 9 00:31:57.933255 systemd-networkd[1506]: cali363c897f106: Gained carrier Sep 9 00:31:57.951828 containerd[1582]: 2025-09-09 00:31:57.636 [INFO][4254] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:31:57.951828 containerd[1582]: 2025-09-09 00:31:57.660 [INFO][4254] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--lmmvz-eth0 coredns-674b8bbfcf- kube-system 10bfdd1d-9f60-4b7d-87e4-fd70f1d75d76 841 0 2025-09-09 00:31:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-lmmvz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali363c897f106 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="50f671a27ff0ab1ca318041f5039accfe109779e1d4865ba7cf00a529e032e2c" Namespace="kube-system" Pod="coredns-674b8bbfcf-lmmvz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lmmvz-" Sep 9 00:31:57.951828 containerd[1582]: 2025-09-09 00:31:57.660 [INFO][4254] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="50f671a27ff0ab1ca318041f5039accfe109779e1d4865ba7cf00a529e032e2c" Namespace="kube-system" Pod="coredns-674b8bbfcf-lmmvz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lmmvz-eth0" Sep 9 00:31:57.951828 containerd[1582]: 2025-09-09 00:31:57.759 [INFO][4337] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="50f671a27ff0ab1ca318041f5039accfe109779e1d4865ba7cf00a529e032e2c" HandleID="k8s-pod-network.50f671a27ff0ab1ca318041f5039accfe109779e1d4865ba7cf00a529e032e2c" Workload="localhost-k8s-coredns--674b8bbfcf--lmmvz-eth0" Sep 9 00:31:57.951828 containerd[1582]: 2025-09-09 00:31:57.759 [INFO][4337] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="50f671a27ff0ab1ca318041f5039accfe109779e1d4865ba7cf00a529e032e2c" HandleID="k8s-pod-network.50f671a27ff0ab1ca318041f5039accfe109779e1d4865ba7cf00a529e032e2c" Workload="localhost-k8s-coredns--674b8bbfcf--lmmvz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f5e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-lmmvz", "timestamp":"2025-09-09 00:31:57.759146956 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:31:57.951828 containerd[1582]: 2025-09-09 00:31:57.759 [INFO][4337] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:31:57.951828 containerd[1582]: 2025-09-09 00:31:57.837 [INFO][4337] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:31:57.951828 containerd[1582]: 2025-09-09 00:31:57.837 [INFO][4337] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:31:57.951828 containerd[1582]: 2025-09-09 00:31:57.869 [INFO][4337] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.50f671a27ff0ab1ca318041f5039accfe109779e1d4865ba7cf00a529e032e2c" host="localhost" Sep 9 00:31:57.951828 containerd[1582]: 2025-09-09 00:31:57.888 [INFO][4337] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:31:57.951828 containerd[1582]: 2025-09-09 00:31:57.903 [INFO][4337] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:31:57.951828 containerd[1582]: 2025-09-09 00:31:57.907 [INFO][4337] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:31:57.951828 containerd[1582]: 2025-09-09 00:31:57.911 [INFO][4337] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:31:57.951828 containerd[1582]: 2025-09-09 00:31:57.911 [INFO][4337] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.50f671a27ff0ab1ca318041f5039accfe109779e1d4865ba7cf00a529e032e2c" host="localhost" Sep 9 00:31:57.951828 containerd[1582]: 2025-09-09 00:31:57.913 [INFO][4337] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.50f671a27ff0ab1ca318041f5039accfe109779e1d4865ba7cf00a529e032e2c Sep 9 00:31:57.951828 containerd[1582]: 2025-09-09 00:31:57.917 [INFO][4337] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.50f671a27ff0ab1ca318041f5039accfe109779e1d4865ba7cf00a529e032e2c" host="localhost" Sep 9 00:31:57.951828 containerd[1582]: 2025-09-09 00:31:57.925 [INFO][4337] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.50f671a27ff0ab1ca318041f5039accfe109779e1d4865ba7cf00a529e032e2c" host="localhost" Sep 9 00:31:57.951828 containerd[1582]: 2025-09-09 00:31:57.925 [INFO][4337] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.50f671a27ff0ab1ca318041f5039accfe109779e1d4865ba7cf00a529e032e2c" host="localhost" Sep 9 00:31:57.951828 containerd[1582]: 2025-09-09 00:31:57.925 [INFO][4337] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:31:57.951828 containerd[1582]: 2025-09-09 00:31:57.925 [INFO][4337] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="50f671a27ff0ab1ca318041f5039accfe109779e1d4865ba7cf00a529e032e2c" HandleID="k8s-pod-network.50f671a27ff0ab1ca318041f5039accfe109779e1d4865ba7cf00a529e032e2c" Workload="localhost-k8s-coredns--674b8bbfcf--lmmvz-eth0" Sep 9 00:31:57.952905 containerd[1582]: 2025-09-09 00:31:57.930 [INFO][4254] cni-plugin/k8s.go 418: Populated endpoint ContainerID="50f671a27ff0ab1ca318041f5039accfe109779e1d4865ba7cf00a529e032e2c" Namespace="kube-system" Pod="coredns-674b8bbfcf-lmmvz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lmmvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--lmmvz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"10bfdd1d-9f60-4b7d-87e4-fd70f1d75d76", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 31, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-lmmvz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali363c897f106", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:31:57.952905 containerd[1582]: 2025-09-09 00:31:57.930 [INFO][4254] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="50f671a27ff0ab1ca318041f5039accfe109779e1d4865ba7cf00a529e032e2c" Namespace="kube-system" Pod="coredns-674b8bbfcf-lmmvz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lmmvz-eth0" Sep 9 00:31:57.952905 containerd[1582]: 2025-09-09 00:31:57.930 [INFO][4254] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali363c897f106 ContainerID="50f671a27ff0ab1ca318041f5039accfe109779e1d4865ba7cf00a529e032e2c" Namespace="kube-system" Pod="coredns-674b8bbfcf-lmmvz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lmmvz-eth0" Sep 9 00:31:57.952905 containerd[1582]: 2025-09-09 00:31:57.933 [INFO][4254] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="50f671a27ff0ab1ca318041f5039accfe109779e1d4865ba7cf00a529e032e2c" Namespace="kube-system" Pod="coredns-674b8bbfcf-lmmvz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lmmvz-eth0" Sep 9 00:31:57.952905 
containerd[1582]: 2025-09-09 00:31:57.933 [INFO][4254] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="50f671a27ff0ab1ca318041f5039accfe109779e1d4865ba7cf00a529e032e2c" Namespace="kube-system" Pod="coredns-674b8bbfcf-lmmvz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lmmvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--lmmvz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"10bfdd1d-9f60-4b7d-87e4-fd70f1d75d76", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 31, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"50f671a27ff0ab1ca318041f5039accfe109779e1d4865ba7cf00a529e032e2c", Pod:"coredns-674b8bbfcf-lmmvz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali363c897f106", MAC:"7e:5d:32:49:a8:37", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:31:57.952905 containerd[1582]: 2025-09-09 00:31:57.942 [INFO][4254] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="50f671a27ff0ab1ca318041f5039accfe109779e1d4865ba7cf00a529e032e2c" Namespace="kube-system" Pod="coredns-674b8bbfcf-lmmvz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lmmvz-eth0" Sep 9 00:31:57.957842 systemd[1]: Started cri-containerd-141a15ca7c8b140a04c8445a9754377106045e6e3eb6dfc4f87de652148e2112.scope - libcontainer container 141a15ca7c8b140a04c8445a9754377106045e6e3eb6dfc4f87de652148e2112. 
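The WorkloadEndpoint dump above prints port numbers in hex (Port:0x35, Port:0x23c1). Decoding them confirms the standard CoreDNS ports:

```go
package main

import "fmt"

func main() {
	// Hex port values copied from the endpoint dump above.
	ports := []struct {
		name string
		val  uint16
	}{{"dns", 0x35}, {"dns-tcp", 0x35}, {"metrics", 0x23c1}}
	for _, p := range ports {
		fmt.Printf("%-8s 0x%04x -> %d\n", p.name, p.val, p.val) // 53, 53, 9153
	}
}
```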
Sep 9 00:31:57.961989 kubelet[2739]: E0909 00:31:57.961332 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:57.979236 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:31:57.979798 kubelet[2739]: I0909 00:31:57.979673 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rx7rx" podStartSLOduration=43.979654755 podStartE2EDuration="43.979654755s" podCreationTimestamp="2025-09-09 00:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:31:57.979059471 +0000 UTC m=+47.901247655" watchObservedRunningTime="2025-09-09 00:31:57.979654755 +0000 UTC m=+47.901842939" Sep 9 00:31:57.990047 kubelet[2739]: I0909 00:31:57.989879 2739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:31:57.991218 containerd[1582]: time="2025-09-09T00:31:57.991174515Z" level=info msg="connecting to shim 50f671a27ff0ab1ca318041f5039accfe109779e1d4865ba7cf00a529e032e2c" address="unix:///run/containerd/s/3e4dd44a2201094672850bfde178e754c3500b1123bfdfc0257681cc5052e85f" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:31:57.991395 kubelet[2739]: E0909 00:31:57.991260 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:58.037241 containerd[1582]: time="2025-09-09T00:31:58.037199137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-dndfz,Uid:5a95b016-00c9-4c01-bbca-50da884e7b1c,Namespace:calico-system,Attempt:0,} returns sandbox id \"141a15ca7c8b140a04c8445a9754377106045e6e3eb6dfc4f87de652148e2112\"" Sep 9 00:31:58.044596 systemd[1]: Started cri-containerd-50f671a27ff0ab1ca318041f5039accfe109779e1d4865ba7cf00a529e032e2c.scope - libcontainer container 50f671a27ff0ab1ca318041f5039accfe109779e1d4865ba7cf00a529e032e2c. 
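The pod_startup_latency_tracker line above for coredns-674b8bbfcf-rx7rx reports podStartSLOduration=43.979654755s; that figure is reproducible as watchObservedRunningTime minus podCreationTimestamp, both copied verbatim from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the tracker entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-09-09 00:31:14 +0000 UTC")
	observed, _ := time.Parse(layout, "2025-09-09 00:31:57.979654755 +0000 UTC")
	fmt.Println(observed.Sub(created)) // 43.979654755s, matching the log
}
```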
Sep 9 00:31:58.059444 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:31:58.272080 systemd-networkd[1506]: cali0335eec66f6: Gained IPv6LL Sep 9 00:31:58.462600 systemd-networkd[1506]: cali325e665f19e: Gained IPv6LL Sep 9 00:31:58.463033 systemd-networkd[1506]: cali6c2f18e8926: Gained IPv6LL Sep 9 00:31:58.559438 containerd[1582]: time="2025-09-09T00:31:58.559255625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lmmvz,Uid:10bfdd1d-9f60-4b7d-87e4-fd70f1d75d76,Namespace:kube-system,Attempt:0,} returns sandbox id \"50f671a27ff0ab1ca318041f5039accfe109779e1d4865ba7cf00a529e032e2c\"" Sep 9 00:31:58.561673 kubelet[2739]: E0909 00:31:58.561614 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:58.590619 systemd-networkd[1506]: cali52f83816fb3: Gained IPv6LL Sep 9 00:31:58.694535 containerd[1582]: time="2025-09-09T00:31:58.694477836Z" level=info msg="CreateContainer within sandbox \"50f671a27ff0ab1ca318041f5039accfe109779e1d4865ba7cf00a529e032e2c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:31:58.757470 systemd[1]: Started sshd@7-10.0.0.142:22-10.0.0.1:33234.service - OpenSSH per-connection server daemon (10.0.0.1:33234). Sep 9 00:31:58.904264 sshd[4604]: Accepted publickey for core from 10.0.0.1 port 33234 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:31:58.906532 sshd-session[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:31:58.915482 systemd-logind[1554]: New session 8 of user core. Sep 9 00:31:58.923638 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 00:31:58.979818 kubelet[2739]: E0909 00:31:58.979777 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:58.980550 kubelet[2739]: E0909 00:31:58.980417 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:31:59.166529 systemd-networkd[1506]: cali363c897f106: Gained IPv6LL Sep 9 00:31:59.294580 systemd-networkd[1506]: cali5522355929e: Gained IPv6LL Sep 9 00:31:59.375298 containerd[1582]: time="2025-09-09T00:31:59.375241295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ccf697dfd-qb7sn,Uid:28ab3406-05ce-48cf-8ad2-a98587542055,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:31:59.554414 sshd[4609]: Connection closed by 10.0.0.1 port 33234 Sep 9 00:31:59.556413 sshd-session[4604]: pam_unix(sshd:session): session closed for user core Sep 9 00:31:59.561657 systemd[1]: sshd@7-10.0.0.142:22-10.0.0.1:33234.service: Deactivated successfully. Sep 9 00:31:59.564830 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 00:31:59.566130 systemd-logind[1554]: Session 8 logged out. Waiting for processes to exit. Sep 9 00:31:59.568096 systemd-logind[1554]: Removed session 8. 
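The recurring kubelet warning above ("Nameserver limits exceeded") stems from resolv.conf carrying more nameservers than the classic three-entry limit; kubelet keeps the first three (1.1.1.1 1.0.0.1 8.8.8.8) and omits the rest. A sketch of that truncation — the fourth entry is hypothetical, since the omitted servers never appear in the log:

```go
package main

import "fmt"

// Classic resolv.conf limit; kubelet warns when a pod's resolv.conf would
// exceed it and applies only the first entries.
const maxNameservers = 3

func applied(ns []string) []string {
	if len(ns) > maxNameservers {
		return ns[:maxNameservers]
	}
	return ns
}

func main() {
	// The fourth entry is an invented placeholder for whatever was dropped.
	ns := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.168.1.1"}
	fmt.Println(applied(ns)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}
```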
Sep 9 00:31:59.581299 containerd[1582]: time="2025-09-09T00:31:59.581238063Z" level=info msg="Container 305b4e56f19c0fa404a4d9ea738cb1badff9f4cfc1eaccfe4f3da259a66a8bb7: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:31:59.784108 containerd[1582]: time="2025-09-09T00:31:59.784030077Z" level=info msg="CreateContainer within sandbox \"50f671a27ff0ab1ca318041f5039accfe109779e1d4865ba7cf00a529e032e2c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"305b4e56f19c0fa404a4d9ea738cb1badff9f4cfc1eaccfe4f3da259a66a8bb7\"" Sep 9 00:31:59.785750 containerd[1582]: time="2025-09-09T00:31:59.785470275Z" level=info msg="StartContainer for \"305b4e56f19c0fa404a4d9ea738cb1badff9f4cfc1eaccfe4f3da259a66a8bb7\"" Sep 9 00:31:59.786868 containerd[1582]: time="2025-09-09T00:31:59.786813446Z" level=info msg="connecting to shim 305b4e56f19c0fa404a4d9ea738cb1badff9f4cfc1eaccfe4f3da259a66a8bb7" address="unix:///run/containerd/s/3e4dd44a2201094672850bfde178e754c3500b1123bfdfc0257681cc5052e85f" protocol=ttrpc version=3 Sep 9 00:31:59.830545 systemd[1]: Started cri-containerd-305b4e56f19c0fa404a4d9ea738cb1badff9f4cfc1eaccfe4f3da259a66a8bb7.scope - libcontainer container 305b4e56f19c0fa404a4d9ea738cb1badff9f4cfc1eaccfe4f3da259a66a8bb7. Sep 9 00:32:00.090878 containerd[1582]: time="2025-09-09T00:32:00.090739219Z" level=info msg="StartContainer for \"305b4e56f19c0fa404a4d9ea738cb1badff9f4cfc1eaccfe4f3da259a66a8bb7\" returns successfully" Sep 9 00:32:00.101229 systemd-networkd[1506]: calia728288a227: Link UP Sep 9 00:32:00.101584 systemd-networkd[1506]: calia728288a227: Gained carrier Sep 9 00:32:00.113376 systemd-networkd[1506]: vxlan.calico: Link UP Sep 9 00:32:00.113380 systemd-networkd[1506]: vxlan.calico: Gained carrier Sep 9 00:32:00.298460 containerd[1582]: 2025-09-09 00:31:59.806 [INFO][4646] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6ccf697dfd--qb7sn-eth0 calico-apiserver-6ccf697dfd- calico-apiserver 28ab3406-05ce-48cf-8ad2-a98587542055 850 0 2025-09-09 00:31:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6ccf697dfd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6ccf697dfd-qb7sn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia728288a227 [] [] }} ContainerID="d4ea03315622582b23d09eedc2e6300bbd88a24fc9d8090d62f19110d0c3404d" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf697dfd-qb7sn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ccf697dfd--qb7sn-" Sep 9 00:32:00.298460 containerd[1582]: 2025-09-09 00:31:59.807 [INFO][4646] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d4ea03315622582b23d09eedc2e6300bbd88a24fc9d8090d62f19110d0c3404d" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf697dfd-qb7sn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ccf697dfd--qb7sn-eth0" Sep 9 00:32:00.298460 containerd[1582]: 2025-09-09 00:31:59.844 [INFO][4670] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4ea03315622582b23d09eedc2e6300bbd88a24fc9d8090d62f19110d0c3404d" HandleID="k8s-pod-network.d4ea03315622582b23d09eedc2e6300bbd88a24fc9d8090d62f19110d0c3404d" Workload="localhost-k8s-calico--apiserver--6ccf697dfd--qb7sn-eth0" Sep 9 00:32:00.298460 containerd[1582]: 2025-09-09 00:31:59.845 
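The "connecting to shim" lines above are ttrpc clients dialing a unix socket under /run/containerd/s/. A sketch of the transport step only, with the socket path copied from the log (the real client then speaks ttrpc over the connection):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path copied from the journal; it exists only on that node.
	const sock = "/run/containerd/s/3e4dd44a2201094672850bfde178e754c3500b1123bfdfc0257681cc5052e85f"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("dial failed (expected anywhere but that node):", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to shim socket:", sock)
}
```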
[INFO][4670] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d4ea03315622582b23d09eedc2e6300bbd88a24fc9d8090d62f19110d0c3404d" HandleID="k8s-pod-network.d4ea03315622582b23d09eedc2e6300bbd88a24fc9d8090d62f19110d0c3404d" Workload="localhost-k8s-calico--apiserver--6ccf697dfd--qb7sn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000510a30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6ccf697dfd-qb7sn", "timestamp":"2025-09-09 00:31:59.844713662 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:32:00.298460 containerd[1582]: 2025-09-09 00:31:59.845 [INFO][4670] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:32:00.298460 containerd[1582]: 2025-09-09 00:31:59.845 [INFO][4670] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:32:00.298460 containerd[1582]: 2025-09-09 00:31:59.845 [INFO][4670] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:32:00.298460 containerd[1582]: 2025-09-09 00:31:59.853 [INFO][4670] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d4ea03315622582b23d09eedc2e6300bbd88a24fc9d8090d62f19110d0c3404d" host="localhost" Sep 9 00:32:00.298460 containerd[1582]: 2025-09-09 00:31:59.859 [INFO][4670] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:32:00.298460 containerd[1582]: 2025-09-09 00:31:59.864 [INFO][4670] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:32:00.298460 containerd[1582]: 2025-09-09 00:31:59.866 [INFO][4670] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:32:00.298460 containerd[1582]: 2025-09-09 00:31:59.868 [INFO][4670] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:32:00.298460 containerd[1582]: 2025-09-09 00:31:59.868 [INFO][4670] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d4ea03315622582b23d09eedc2e6300bbd88a24fc9d8090d62f19110d0c3404d" host="localhost" Sep 9 00:32:00.298460 containerd[1582]: 2025-09-09 00:31:59.870 [INFO][4670] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d4ea03315622582b23d09eedc2e6300bbd88a24fc9d8090d62f19110d0c3404d Sep 9 00:32:00.298460 containerd[1582]: 2025-09-09 00:32:00.045 [INFO][4670] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d4ea03315622582b23d09eedc2e6300bbd88a24fc9d8090d62f19110d0c3404d" host="localhost" Sep 9 00:32:00.298460 containerd[1582]: 2025-09-09 00:32:00.089 [INFO][4670] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.d4ea03315622582b23d09eedc2e6300bbd88a24fc9d8090d62f19110d0c3404d" host="localhost" Sep 9 00:32:00.298460 containerd[1582]: 2025-09-09 00:32:00.090 [INFO][4670] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.d4ea03315622582b23d09eedc2e6300bbd88a24fc9d8090d62f19110d0c3404d" host="localhost" Sep 9 00:32:00.298460 containerd[1582]: 2025-09-09 00:32:00.090 [INFO][4670] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:32:00.298460 containerd[1582]: 2025-09-09 00:32:00.090 [INFO][4670] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="d4ea03315622582b23d09eedc2e6300bbd88a24fc9d8090d62f19110d0c3404d" HandleID="k8s-pod-network.d4ea03315622582b23d09eedc2e6300bbd88a24fc9d8090d62f19110d0c3404d" Workload="localhost-k8s-calico--apiserver--6ccf697dfd--qb7sn-eth0" Sep 9 00:32:00.299086 containerd[1582]: 2025-09-09 00:32:00.097 [INFO][4646] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d4ea03315622582b23d09eedc2e6300bbd88a24fc9d8090d62f19110d0c3404d" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf697dfd-qb7sn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ccf697dfd--qb7sn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6ccf697dfd--qb7sn-eth0", GenerateName:"calico-apiserver-6ccf697dfd-", Namespace:"calico-apiserver", SelfLink:"", UID:"28ab3406-05ce-48cf-8ad2-a98587542055", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 31, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ccf697dfd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6ccf697dfd-qb7sn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia728288a227", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:32:00.299086 containerd[1582]: 2025-09-09 00:32:00.097 [INFO][4646] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="d4ea03315622582b23d09eedc2e6300bbd88a24fc9d8090d62f19110d0c3404d" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf697dfd-qb7sn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ccf697dfd--qb7sn-eth0" Sep 9 00:32:00.299086 containerd[1582]: 2025-09-09 00:32:00.097 [INFO][4646] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia728288a227 ContainerID="d4ea03315622582b23d09eedc2e6300bbd88a24fc9d8090d62f19110d0c3404d" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf697dfd-qb7sn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ccf697dfd--qb7sn-eth0" Sep 9 00:32:00.299086 containerd[1582]: 2025-09-09 00:32:00.102 [INFO][4646] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d4ea03315622582b23d09eedc2e6300bbd88a24fc9d8090d62f19110d0c3404d" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf697dfd-qb7sn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ccf697dfd--qb7sn-eth0" Sep 9 00:32:00.299086 containerd[1582]: 2025-09-09 00:32:00.121 [INFO][4646] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
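Both IPAM rounds above serialize on the host-wide lock and then log the claimed address. Pulling those addresses back out of journal text can be done with an ad-hoc pattern matched to this exact log format:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Sample lines condensed from the IPAM entries above.
	lines := []string{
		`ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26`,
		`ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26`,
	}
	re := regexp.MustCompile(`Successfully claimed IPs: \[([^\]]+)\]`)
	for _, l := range lines {
		if m := re.FindStringSubmatch(l); m != nil {
			fmt.Println("claimed:", m[1])
		}
	}
}
```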
ContainerID="d4ea03315622582b23d09eedc2e6300bbd88a24fc9d8090d62f19110d0c3404d" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf697dfd-qb7sn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ccf697dfd--qb7sn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6ccf697dfd--qb7sn-eth0", GenerateName:"calico-apiserver-6ccf697dfd-", Namespace:"calico-apiserver", SelfLink:"", UID:"28ab3406-05ce-48cf-8ad2-a98587542055", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 31, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ccf697dfd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4ea03315622582b23d09eedc2e6300bbd88a24fc9d8090d62f19110d0c3404d", Pod:"calico-apiserver-6ccf697dfd-qb7sn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia728288a227", MAC:"f2:b0:ec:d6:54:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:32:00.299086 containerd[1582]: 2025-09-09 00:32:00.294 [INFO][4646] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d4ea03315622582b23d09eedc2e6300bbd88a24fc9d8090d62f19110d0c3404d" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf697dfd-qb7sn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ccf697dfd--qb7sn-eth0" Sep 9 00:32:00.427087 containerd[1582]: time="2025-09-09T00:32:00.427042643Z" level=info msg="connecting to shim d4ea03315622582b23d09eedc2e6300bbd88a24fc9d8090d62f19110d0c3404d" address="unix:///run/containerd/s/491155245b24fc2c44de77c37b9794d246d8c89b61f8a6633dc600ae96762bd1" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:32:00.452519 systemd[1]: Started cri-containerd-d4ea03315622582b23d09eedc2e6300bbd88a24fc9d8090d62f19110d0c3404d.scope - libcontainer container d4ea03315622582b23d09eedc2e6300bbd88a24fc9d8090d62f19110d0c3404d. 
Sep 9 00:32:00.469461 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:32:00.600248 containerd[1582]: time="2025-09-09T00:32:00.600205832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ccf697dfd-qb7sn,Uid:28ab3406-05ce-48cf-8ad2-a98587542055,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d4ea03315622582b23d09eedc2e6300bbd88a24fc9d8090d62f19110d0c3404d\"" Sep 9 00:32:00.953857 containerd[1582]: time="2025-09-09T00:32:00.953786532Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:00.954544 containerd[1582]: time="2025-09-09T00:32:00.954502708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 9 00:32:00.955870 containerd[1582]: time="2025-09-09T00:32:00.955833271Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:00.958594 containerd[1582]: time="2025-09-09T00:32:00.958556900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:00.959197 containerd[1582]: time="2025-09-09T00:32:00.959164395Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 4.268287505s" Sep 9 00:32:00.959232 containerd[1582]: time="2025-09-09T00:32:00.959200895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 9 00:32:00.960428 containerd[1582]: time="2025-09-09T00:32:00.960402980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 9 00:32:00.964809 containerd[1582]: time="2025-09-09T00:32:00.964778653Z" level=info msg="CreateContainer within sandbox \"6314c9ff0cb363c835c97fbc44f8a00cfbd87485006ba1c528222996fbcc4e49\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 00:32:00.973550 containerd[1582]: time="2025-09-09T00:32:00.973488220Z" level=info msg="Container 93e0f1cb77e2e0699aa16d5b9f7e9c32a548100c954025684c13d26b00ab78b9: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:32:00.981914 containerd[1582]: time="2025-09-09T00:32:00.981859361Z" level=info msg="CreateContainer within sandbox \"6314c9ff0cb363c835c97fbc44f8a00cfbd87485006ba1c528222996fbcc4e49\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"93e0f1cb77e2e0699aa16d5b9f7e9c32a548100c954025684c13d26b00ab78b9\"" Sep 9 00:32:00.982474 containerd[1582]: time="2025-09-09T00:32:00.982441657Z" level=info msg="StartContainer for \"93e0f1cb77e2e0699aa16d5b9f7e9c32a548100c954025684c13d26b00ab78b9\"" Sep 9 00:32:00.983492 containerd[1582]: time="2025-09-09T00:32:00.983463213Z" level=info msg="connecting to shim 93e0f1cb77e2e0699aa16d5b9f7e9c32a548100c954025684c13d26b00ab78b9" address="unix:///run/containerd/s/4f6801caf1d95d8cfa06c26348c9306c0b137c57a3dfd9acdaa9fe44d0d702c5" protocol=ttrpc 
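The whisker pull above reports size "6153986" in 4.268287505s (the log's bytes read=4661291 is the smaller compressed transfer, so the two counts differ). Rough throughput from the reported size, purely as arithmetic:

```go
package main

import "fmt"

func main() {
	// Figures copied from the "Pulled image" entry above.
	const size = 6153986.0    // reported image size in bytes
	const secs = 4.268287505  // reported pull duration
	fmt.Printf("%.2f MiB/s\n", size/secs/(1<<20)) // ≈1.4 MiB/s
}
```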
version=3 Sep 9 00:32:01.004543 systemd[1]: Started cri-containerd-93e0f1cb77e2e0699aa16d5b9f7e9c32a548100c954025684c13d26b00ab78b9.scope - libcontainer container 93e0f1cb77e2e0699aa16d5b9f7e9c32a548100c954025684c13d26b00ab78b9. Sep 9 00:32:01.110532 containerd[1582]: time="2025-09-09T00:32:01.110479508Z" level=info msg="StartContainer for \"93e0f1cb77e2e0699aa16d5b9f7e9c32a548100c954025684c13d26b00ab78b9\" returns successfully" Sep 9 00:32:01.115551 kubelet[2739]: E0909 00:32:01.115283 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:01.127865 kubelet[2739]: I0909 00:32:01.127787 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-lmmvz" podStartSLOduration=47.127764142 podStartE2EDuration="47.127764142s" podCreationTimestamp="2025-09-09 00:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:32:01.127472338 +0000 UTC m=+51.049660522" watchObservedRunningTime="2025-09-09 00:32:01.127764142 +0000 UTC m=+51.049952326" Sep 9 00:32:01.342563 systemd-networkd[1506]: calia728288a227: Gained IPv6LL Sep 9 00:32:01.726556 systemd-networkd[1506]: vxlan.calico: Gained IPv6LL Sep 9 00:32:02.117506 kubelet[2739]: E0909 00:32:02.117371 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:02.565484 containerd[1582]: time="2025-09-09T00:32:02.565426593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:02.566311 containerd[1582]: time="2025-09-09T00:32:02.566278407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 9 00:32:02.567624 containerd[1582]: time="2025-09-09T00:32:02.567585892Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:02.569570 containerd[1582]: time="2025-09-09T00:32:02.569533884Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:02.570023 containerd[1582]: time="2025-09-09T00:32:02.569988592Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.609554652s" Sep 9 00:32:02.570053 containerd[1582]: time="2025-09-09T00:32:02.570022267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 9 00:32:02.571214 containerd[1582]: time="2025-09-09T00:32:02.571174482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 00:32:02.576439 containerd[1582]: time="2025-09-09T00:32:02.576404812Z" level=info msg="CreateContainer within sandbox 
\"189a5841a1be5786297720d03c04912561e544a75899c3389f02ac322545bc84\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 9 00:32:02.595064 containerd[1582]: time="2025-09-09T00:32:02.595010210Z" level=info msg="Container 3a1caaa56281229852b8d751d6ac573c6f06032b909ad3bb452befdaaa6373a7: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:32:02.607820 containerd[1582]: time="2025-09-09T00:32:02.607446684Z" level=info msg="CreateContainer within sandbox \"189a5841a1be5786297720d03c04912561e544a75899c3389f02ac322545bc84\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3a1caaa56281229852b8d751d6ac573c6f06032b909ad3bb452befdaaa6373a7\"" Sep 9 00:32:02.608689 containerd[1582]: time="2025-09-09T00:32:02.608638575Z" level=info msg="StartContainer for \"3a1caaa56281229852b8d751d6ac573c6f06032b909ad3bb452befdaaa6373a7\"" Sep 9 00:32:02.610930 containerd[1582]: time="2025-09-09T00:32:02.610906435Z" level=info msg="connecting to shim 3a1caaa56281229852b8d751d6ac573c6f06032b909ad3bb452befdaaa6373a7" address="unix:///run/containerd/s/aaa21026df4e31a026a19a09755431c8904eebbcce9a7cfea449ab16d8216116" protocol=ttrpc version=3 Sep 9 00:32:02.644502 systemd[1]: Started cri-containerd-3a1caaa56281229852b8d751d6ac573c6f06032b909ad3bb452befdaaa6373a7.scope - libcontainer container 3a1caaa56281229852b8d751d6ac573c6f06032b909ad3bb452befdaaa6373a7. Sep 9 00:32:02.726244 containerd[1582]: time="2025-09-09T00:32:02.726207764Z" level=info msg="StartContainer for \"3a1caaa56281229852b8d751d6ac573c6f06032b909ad3bb452befdaaa6373a7\" returns successfully" Sep 9 00:32:03.121789 kubelet[2739]: E0909 00:32:03.121750 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:03.666136 kubelet[2739]: I0909 00:32:03.666079 2739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:32:03.833792 containerd[1582]: time="2025-09-09T00:32:03.833741743Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f07414bf926a638b6e16ee10b93ab7fa162b9ff93933ce4294e4af4f3de0117f\" id:\"ff96197351ab5741f88816df4016caaf6978432e69cf59faf0dc513eade2d6d8\" pid:4928 exited_at:{seconds:1757377923 nanos:833386167}" Sep 9 00:32:03.922477 containerd[1582]: time="2025-09-09T00:32:03.922325506Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f07414bf926a638b6e16ee10b93ab7fa162b9ff93933ce4294e4af4f3de0117f\" id:\"494ff3d07301561681a8ec607ec715bb8dcc7b1bb6c9df88fe482a3103ac0cd9\" pid:4952 exited_at:{seconds:1757377923 nanos:922020006}" Sep 9 00:32:04.571036 systemd[1]: Started sshd@8-10.0.0.142:22-10.0.0.1:41716.service - OpenSSH per-connection server daemon (10.0.0.1:41716). Sep 9 00:32:04.691031 sshd[4966]: Accepted publickey for core from 10.0.0.1 port 41716 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:32:04.692556 sshd-session[4966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:32:04.696995 systemd-logind[1554]: New session 9 of user core. Sep 9 00:32:04.704507 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 00:32:04.920873 sshd[4969]: Connection closed by 10.0.0.1 port 41716 Sep 9 00:32:04.921283 sshd-session[4966]: pam_unix(sshd:session): session closed for user core Sep 9 00:32:04.926289 systemd[1]: sshd@8-10.0.0.142:22-10.0.0.1:41716.service: Deactivated successfully. 
Sep 9 00:32:04.928542 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 00:32:04.929353 systemd-logind[1554]: Session 9 logged out. Waiting for processes to exit. Sep 9 00:32:04.931161 systemd-logind[1554]: Removed session 9. Sep 9 00:32:07.405873 containerd[1582]: time="2025-09-09T00:32:07.405814418Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:07.406586 containerd[1582]: time="2025-09-09T00:32:07.406562358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 9 00:32:07.407691 containerd[1582]: time="2025-09-09T00:32:07.407659188Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:07.409790 containerd[1582]: time="2025-09-09T00:32:07.409749119Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:07.410434 containerd[1582]: time="2025-09-09T00:32:07.410400573Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 4.839192154s" Sep 9 00:32:07.410494 containerd[1582]: time="2025-09-09T00:32:07.410438335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 00:32:07.411685 containerd[1582]: time="2025-09-09T00:32:07.411426186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 9 00:32:07.416999 containerd[1582]: time="2025-09-09T00:32:07.416947508Z" level=info msg="CreateContainer within sandbox \"a73d2efeec50af47df8e00aeca28a735ac762770381dba160f333204fdd4a601\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 00:32:07.431526 containerd[1582]: time="2025-09-09T00:32:07.431491590Z" level=info msg="Container 628b18719488f1f671b25cbe0388c1a0b4f1c79b24f0d7e176a2a71768e1ef18: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:32:07.440399 containerd[1582]: time="2025-09-09T00:32:07.440365743Z" level=info msg="CreateContainer within sandbox \"a73d2efeec50af47df8e00aeca28a735ac762770381dba160f333204fdd4a601\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"628b18719488f1f671b25cbe0388c1a0b4f1c79b24f0d7e176a2a71768e1ef18\"" Sep 9 00:32:07.440887 containerd[1582]: time="2025-09-09T00:32:07.440854363Z" level=info msg="StartContainer for \"628b18719488f1f671b25cbe0388c1a0b4f1c79b24f0d7e176a2a71768e1ef18\"" Sep 9 00:32:07.441997 containerd[1582]: time="2025-09-09T00:32:07.441973837Z" level=info msg="connecting to shim 628b18719488f1f671b25cbe0388c1a0b4f1c79b24f0d7e176a2a71768e1ef18" address="unix:///run/containerd/s/e0b1496a0b64f516c44480817e8933dbcf1fe2e803f59aea6b6cf166e05debd1" protocol=ttrpc version=3 Sep 9 00:32:07.462494 systemd[1]: Started cri-containerd-628b18719488f1f671b25cbe0388c1a0b4f1c79b24f0d7e176a2a71768e1ef18.scope - libcontainer container 
628b18719488f1f671b25cbe0388c1a0b4f1c79b24f0d7e176a2a71768e1ef18. Sep 9 00:32:07.510201 containerd[1582]: time="2025-09-09T00:32:07.510162846Z" level=info msg="StartContainer for \"628b18719488f1f671b25cbe0388c1a0b4f1c79b24f0d7e176a2a71768e1ef18\" returns successfully" Sep 9 00:32:08.179126 kubelet[2739]: I0909 00:32:08.179029 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6ccf697dfd-99755" podStartSLOduration=27.674605073 podStartE2EDuration="37.179004005s" podCreationTimestamp="2025-09-09 00:31:31 +0000 UTC" firstStartedPulling="2025-09-09 00:31:57.906859851 +0000 UTC m=+47.829048035" lastFinishedPulling="2025-09-09 00:32:07.411258773 +0000 UTC m=+57.333446967" observedRunningTime="2025-09-09 00:32:08.145558181 +0000 UTC m=+58.067746365" watchObservedRunningTime="2025-09-09 00:32:08.179004005 +0000 UTC m=+58.101192189" Sep 9 00:32:09.938899 systemd[1]: Started sshd@9-10.0.0.142:22-10.0.0.1:37644.service - OpenSSH per-connection server daemon (10.0.0.1:37644). Sep 9 00:32:10.003992 sshd[5037]: Accepted publickey for core from 10.0.0.1 port 37644 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:32:10.005962 sshd-session[5037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:32:10.010288 systemd-logind[1554]: New session 10 of user core. Sep 9 00:32:10.026465 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 00:32:10.249072 sshd[5039]: Connection closed by 10.0.0.1 port 37644 Sep 9 00:32:10.251014 sshd-session[5037]: pam_unix(sshd:session): session closed for user core Sep 9 00:32:10.255724 systemd[1]: sshd@9-10.0.0.142:22-10.0.0.1:37644.service: Deactivated successfully. Sep 9 00:32:10.261917 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 00:32:10.265850 systemd-logind[1554]: Session 10 logged out. Waiting for processes to exit. Sep 9 00:32:10.267819 systemd-logind[1554]: Removed session 10. 
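For calico-apiserver-6ccf697dfd-99755 above, podStartE2EDuration (37.179004005s) exceeds podStartSLOduration (27.674605073s) by almost exactly the image-pull window, consistent with the SLO figure excluding pull time. Recomputed from the pull timestamps in the tracker line:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// firstStartedPulling / lastFinishedPulling copied from the log above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	first, _ := time.Parse(layout, "2025-09-09 00:31:57.906859851 +0000 UTC")
	last, _ := time.Parse(layout, "2025-09-09 00:32:07.411258773 +0000 UTC")
	fmt.Println("pull window:", last.Sub(first))            // ≈9.504398922s
	fmt.Println("E2E - SLO  :", 37.179004005-27.674605073) // ≈9.504398932
}
```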
Sep 9 00:32:11.928973 containerd[1582]: time="2025-09-09T00:32:11.928901740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:11.929982 containerd[1582]: time="2025-09-09T00:32:11.929956844Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 9 00:32:11.931493 containerd[1582]: time="2025-09-09T00:32:11.931463406Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:11.934493 containerd[1582]: time="2025-09-09T00:32:11.934421772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:11.935097 containerd[1582]: time="2025-09-09T00:32:11.935040299Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.52358072s" Sep 9 00:32:11.935097 containerd[1582]: time="2025-09-09T00:32:11.935089273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 9 00:32:11.936402 containerd[1582]: time="2025-09-09T00:32:11.936308654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 9 00:32:11.956199 containerd[1582]: time="2025-09-09T00:32:11.956158005Z" level=info msg="CreateContainer within sandbox \"eb4a5a814a59cc4852c2d53d517b1e6c93dab6c2e2f5dc73997ff0a0995bdafc\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 9 00:32:11.966289 containerd[1582]: time="2025-09-09T00:32:11.966258036Z" level=info msg="Container c988e133f556e99b296423617b8bda7edf16d77768b2777c303203cc4598970d: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:32:11.976946 containerd[1582]: time="2025-09-09T00:32:11.976895499Z" level=info msg="CreateContainer within sandbox \"eb4a5a814a59cc4852c2d53d517b1e6c93dab6c2e2f5dc73997ff0a0995bdafc\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c988e133f556e99b296423617b8bda7edf16d77768b2777c303203cc4598970d\"" Sep 9 00:32:11.978356 containerd[1582]: time="2025-09-09T00:32:11.977757614Z" level=info msg="StartContainer for \"c988e133f556e99b296423617b8bda7edf16d77768b2777c303203cc4598970d\"" Sep 9 00:32:11.978816 containerd[1582]: time="2025-09-09T00:32:11.978783302Z" level=info msg="connecting to shim c988e133f556e99b296423617b8bda7edf16d77768b2777c303203cc4598970d" address="unix:///run/containerd/s/6002e98e9af04e450a330fa9fe26fbd293c9536a26f99a171c1404a25a99bb6b" protocol=ttrpc version=3 Sep 9 00:32:12.017742 systemd[1]: Started cri-containerd-c988e133f556e99b296423617b8bda7edf16d77768b2777c303203cc4598970d.scope - libcontainer container c988e133f556e99b296423617b8bda7edf16d77768b2777c303203cc4598970d. 
Sep 9 00:32:12.320843 containerd[1582]: time="2025-09-09T00:32:12.320678718Z" level=info msg="StartContainer for \"c988e133f556e99b296423617b8bda7edf16d77768b2777c303203cc4598970d\" returns successfully" Sep 9 00:32:13.373831 containerd[1582]: time="2025-09-09T00:32:13.373745267Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c988e133f556e99b296423617b8bda7edf16d77768b2777c303203cc4598970d\" id:\"3cdce64234b85ac003882fd0af6f5b93dcd01bd0359a2ad64628c1ed9031f9d6\" pid:5130 exited_at:{seconds:1757377933 nanos:373009087}" Sep 9 00:32:13.591173 kubelet[2739]: I0909 00:32:13.591096 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5985c77b98-2wnmf" podStartSLOduration=25.572660729 podStartE2EDuration="39.591079687s" podCreationTimestamp="2025-09-09 00:31:34 +0000 UTC" firstStartedPulling="2025-09-09 00:31:57.9175951 +0000 UTC m=+47.839783284" lastFinishedPulling="2025-09-09 00:32:11.936014058 +0000 UTC m=+61.858202242" observedRunningTime="2025-09-09 00:32:13.590034023 +0000 UTC m=+63.512222197" watchObservedRunningTime="2025-09-09 00:32:13.591079687 +0000 UTC m=+63.513267871" Sep 9 00:32:15.262813 systemd[1]: Started sshd@10-10.0.0.142:22-10.0.0.1:37646.service - OpenSSH per-connection server daemon (10.0.0.1:37646). Sep 9 00:32:15.345949 sshd[5143]: Accepted publickey for core from 10.0.0.1 port 37646 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:32:15.350459 sshd-session[5143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:32:15.364418 systemd-logind[1554]: New session 11 of user core. Sep 9 00:32:15.367664 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 00:32:15.548440 sshd[5145]: Connection closed by 10.0.0.1 port 37646 Sep 9 00:32:15.549487 sshd-session[5143]: pam_unix(sshd:session): session closed for user core Sep 9 00:32:15.566502 systemd[1]: Started sshd@11-10.0.0.142:22-10.0.0.1:37654.service - OpenSSH per-connection server daemon (10.0.0.1:37654). Sep 9 00:32:15.938433 systemd[1]: sshd@10-10.0.0.142:22-10.0.0.1:37646.service: Deactivated successfully. Sep 9 00:32:15.946132 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 00:32:15.947105 systemd-logind[1554]: Session 11 logged out. Waiting for processes to exit. Sep 9 00:32:15.948534 systemd-logind[1554]: Removed session 11. Sep 9 00:32:15.984401 sshd[5156]: Accepted publickey for core from 10.0.0.1 port 37654 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:32:15.986084 sshd-session[5156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:32:15.991460 systemd-logind[1554]: New session 12 of user core. Sep 9 00:32:16.001474 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 00:32:16.246916 sshd[5161]: Connection closed by 10.0.0.1 port 37654 Sep 9 00:32:16.247197 sshd-session[5156]: pam_unix(sshd:session): session closed for user core Sep 9 00:32:16.256396 systemd[1]: sshd@11-10.0.0.142:22-10.0.0.1:37654.service: Deactivated successfully. Sep 9 00:32:16.258541 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 00:32:16.259437 systemd-logind[1554]: Session 12 logged out. Waiting for processes to exit. Sep 9 00:32:16.263266 systemd[1]: Started sshd@12-10.0.0.142:22-10.0.0.1:37656.service - OpenSSH per-connection server daemon (10.0.0.1:37656). Sep 9 00:32:16.264166 systemd-logind[1554]: Removed session 12. 
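The TaskExit events in this section record exited_at as raw epoch seconds; converting them shows they line up with the journal's own timestamps:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at values copied from the TaskExit entries above.
	for _, ts := range []int64{1757377923, 1757377933} {
		fmt.Println(ts, "->", time.Unix(ts, 0).UTC()) // 2025-09-09 00:32:03 / 00:32:13 UTC
	}
}
```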
Sep 9 00:32:16.317565 sshd[5173]: Accepted publickey for core from 10.0.0.1 port 37656 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:32:16.319212 sshd-session[5173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:32:16.326721 systemd-logind[1554]: New session 13 of user core. Sep 9 00:32:16.338493 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 00:32:16.514240 sshd[5175]: Connection closed by 10.0.0.1 port 37656 Sep 9 00:32:16.514662 sshd-session[5173]: pam_unix(sshd:session): session closed for user core Sep 9 00:32:16.520722 systemd[1]: sshd@12-10.0.0.142:22-10.0.0.1:37656.service: Deactivated successfully. Sep 9 00:32:16.525480 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 00:32:16.527966 systemd-logind[1554]: Session 13 logged out. Waiting for processes to exit. Sep 9 00:32:16.529958 systemd-logind[1554]: Removed session 13. Sep 9 00:32:16.787166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2682515513.mount: Deactivated successfully. Sep 9 00:32:17.281682 containerd[1582]: time="2025-09-09T00:32:17.281589227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:17.282453 containerd[1582]: time="2025-09-09T00:32:17.282356444Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 9 00:32:17.283878 containerd[1582]: time="2025-09-09T00:32:17.283828780Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:17.288230 containerd[1582]: time="2025-09-09T00:32:17.288170354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:17.288793 containerd[1582]: time="2025-09-09T00:32:17.288747729Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 5.352361307s" Sep 9 00:32:17.288793 containerd[1582]: time="2025-09-09T00:32:17.288778387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 9 00:32:17.290300 containerd[1582]: time="2025-09-09T00:32:17.289727443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 00:32:17.294676 containerd[1582]: time="2025-09-09T00:32:17.294639878Z" level=info msg="CreateContainer within sandbox \"141a15ca7c8b140a04c8445a9754377106045e6e3eb6dfc4f87de652148e2112\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 9 00:32:17.305137 containerd[1582]: time="2025-09-09T00:32:17.305097110Z" level=info msg="Container 0c2c64a42786d1f76a01ff274c6a1f684058013a41e1670a25f07bb011ca50fa: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:32:17.318782 containerd[1582]: time="2025-09-09T00:32:17.318739564Z" level=info msg="CreateContainer within sandbox \"141a15ca7c8b140a04c8445a9754377106045e6e3eb6dfc4f87de652148e2112\" for 
&ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"0c2c64a42786d1f76a01ff274c6a1f684058013a41e1670a25f07bb011ca50fa\"" Sep 9 00:32:17.319323 containerd[1582]: time="2025-09-09T00:32:17.319286790Z" level=info msg="StartContainer for \"0c2c64a42786d1f76a01ff274c6a1f684058013a41e1670a25f07bb011ca50fa\"" Sep 9 00:32:17.320931 containerd[1582]: time="2025-09-09T00:32:17.320566949Z" level=info msg="connecting to shim 0c2c64a42786d1f76a01ff274c6a1f684058013a41e1670a25f07bb011ca50fa" address="unix:///run/containerd/s/57fd14d5eb01095219b3cf5dbed79e3cd8b6e7972978cb2f58292ce3c9402284" protocol=ttrpc version=3 Sep 9 00:32:17.351718 systemd[1]: Started cri-containerd-0c2c64a42786d1f76a01ff274c6a1f684058013a41e1670a25f07bb011ca50fa.scope - libcontainer container 0c2c64a42786d1f76a01ff274c6a1f684058013a41e1670a25f07bb011ca50fa. Sep 9 00:32:17.412456 containerd[1582]: time="2025-09-09T00:32:17.412280810Z" level=info msg="StartContainer for \"0c2c64a42786d1f76a01ff274c6a1f684058013a41e1670a25f07bb011ca50fa\" returns successfully" Sep 9 00:32:18.302409 containerd[1582]: time="2025-09-09T00:32:18.302184504Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:18.330037 containerd[1582]: time="2025-09-09T00:32:18.329945096Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 9 00:32:18.331685 containerd[1582]: time="2025-09-09T00:32:18.331639875Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 1.041885051s" Sep 9 00:32:18.331685 containerd[1582]: time="2025-09-09T00:32:18.331673861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 00:32:18.332720 containerd[1582]: time="2025-09-09T00:32:18.332667330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 9 00:32:18.529186 containerd[1582]: time="2025-09-09T00:32:18.528912504Z" level=info msg="CreateContainer within sandbox \"d4ea03315622582b23d09eedc2e6300bbd88a24fc9d8090d62f19110d0c3404d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 00:32:18.640264 kubelet[2739]: I0909 00:32:18.639883 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-dndfz" podStartSLOduration=26.391275952 podStartE2EDuration="45.639852234s" podCreationTimestamp="2025-09-09 00:31:33 +0000 UTC" firstStartedPulling="2025-09-09 00:31:58.041007967 +0000 UTC m=+47.963196151" lastFinishedPulling="2025-09-09 00:32:17.289584218 +0000 UTC m=+67.211772433" observedRunningTime="2025-09-09 00:32:18.639414748 +0000 UTC m=+68.561602962" watchObservedRunningTime="2025-09-09 00:32:18.639852234 +0000 UTC m=+68.562040418" Sep 9 00:32:18.669280 kernel: hrtimer: interrupt took 2137146 ns Sep 9 00:32:18.681082 containerd[1582]: time="2025-09-09T00:32:18.679022929Z" level=info msg="Container 311042f869abedb015f564030f3477f44b0c1858137c3ff5aada7a27e08717ae: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:32:18.711778 containerd[1582]: time="2025-09-09T00:32:18.711637120Z" 
level=info msg="CreateContainer within sandbox \"d4ea03315622582b23d09eedc2e6300bbd88a24fc9d8090d62f19110d0c3404d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"311042f869abedb015f564030f3477f44b0c1858137c3ff5aada7a27e08717ae\"" Sep 9 00:32:18.712995 containerd[1582]: time="2025-09-09T00:32:18.712957414Z" level=info msg="StartContainer for \"311042f869abedb015f564030f3477f44b0c1858137c3ff5aada7a27e08717ae\"" Sep 9 00:32:18.716678 containerd[1582]: time="2025-09-09T00:32:18.716557576Z" level=info msg="connecting to shim 311042f869abedb015f564030f3477f44b0c1858137c3ff5aada7a27e08717ae" address="unix:///run/containerd/s/491155245b24fc2c44de77c37b9794d246d8c89b61f8a6633dc600ae96762bd1" protocol=ttrpc version=3 Sep 9 00:32:18.774409 containerd[1582]: time="2025-09-09T00:32:18.774314604Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c2c64a42786d1f76a01ff274c6a1f684058013a41e1670a25f07bb011ca50fa\" id:\"1c93deadf09db46a145048048bee6c9f9c592de206aec01c2fc8873096af4fb9\" pid:5246 exit_status:1 exited_at:{seconds:1757377938 nanos:773679921}" Sep 9 00:32:18.783680 systemd[1]: Started cri-containerd-311042f869abedb015f564030f3477f44b0c1858137c3ff5aada7a27e08717ae.scope - libcontainer container 311042f869abedb015f564030f3477f44b0c1858137c3ff5aada7a27e08717ae. Sep 9 00:32:18.977328 containerd[1582]: time="2025-09-09T00:32:18.976843029Z" level=info msg="StartContainer for \"311042f869abedb015f564030f3477f44b0c1858137c3ff5aada7a27e08717ae\" returns successfully" Sep 9 00:32:19.546439 containerd[1582]: time="2025-09-09T00:32:19.544726883Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c2c64a42786d1f76a01ff274c6a1f684058013a41e1670a25f07bb011ca50fa\" id:\"43ef6bd8c711455f23da938b33dfaa87f4684901515724526f82c9ea93e026c1\" pid:5308 exit_status:1 exited_at:{seconds:1757377939 nanos:542317940}" Sep 9 00:32:20.466828 containerd[1582]: time="2025-09-09T00:32:20.466612599Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c2c64a42786d1f76a01ff274c6a1f684058013a41e1670a25f07bb011ca50fa\" id:\"8c5aebe7e6e943e2eeda01386edefee86f38ca5628d1d8770f321c5f05259634\" pid:5333 exit_status:1 exited_at:{seconds:1757377940 nanos:466275095}" Sep 9 00:32:21.224569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount571142021.mount: Deactivated successfully. Sep 9 00:32:21.358312 kubelet[2739]: I0909 00:32:21.358258 2739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:32:21.527752 systemd[1]: Started sshd@13-10.0.0.142:22-10.0.0.1:44498.service - OpenSSH per-connection server daemon (10.0.0.1:44498). Sep 9 00:32:21.705160 sshd[5360]: Accepted publickey for core from 10.0.0.1 port 44498 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:32:21.707135 sshd-session[5360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:32:21.711999 systemd-logind[1554]: New session 14 of user core. Sep 9 00:32:21.721492 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 00:32:21.977513 sshd[5362]: Connection closed by 10.0.0.1 port 44498 Sep 9 00:32:21.977884 sshd-session[5360]: pam_unix(sshd:session): session closed for user core Sep 9 00:32:21.982687 systemd[1]: sshd@13-10.0.0.142:22-10.0.0.1:44498.service: Deactivated successfully. Sep 9 00:32:21.984960 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 00:32:21.985784 systemd-logind[1554]: Session 14 logged out. Waiting for processes to exit. 
Sep 9 00:32:21.987138 systemd-logind[1554]: Removed session 14. Sep 9 00:32:23.543805 containerd[1582]: time="2025-09-09T00:32:23.543714755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:23.610578 containerd[1582]: time="2025-09-09T00:32:23.610503774Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 9 00:32:23.669143 containerd[1582]: time="2025-09-09T00:32:23.669076219Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:23.707076 containerd[1582]: time="2025-09-09T00:32:23.707005780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:23.707660 containerd[1582]: time="2025-09-09T00:32:23.707627445Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 5.374912445s" Sep 9 00:32:23.707660 containerd[1582]: time="2025-09-09T00:32:23.707655980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 9 00:32:23.708578 containerd[1582]: time="2025-09-09T00:32:23.708554604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 9 00:32:23.830100 containerd[1582]: time="2025-09-09T00:32:23.829909407Z" level=info msg="CreateContainer within sandbox \"6314c9ff0cb363c835c97fbc44f8a00cfbd87485006ba1c528222996fbcc4e49\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 9 00:32:24.062952 containerd[1582]: time="2025-09-09T00:32:24.062888084Z" level=info msg="Container aa1e695e9ddd34e83140a83509f6b713c62c058866b12dee4c819efbe8a84a05: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:32:24.127416 containerd[1582]: time="2025-09-09T00:32:24.127217401Z" level=info msg="CreateContainer within sandbox \"6314c9ff0cb363c835c97fbc44f8a00cfbd87485006ba1c528222996fbcc4e49\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"aa1e695e9ddd34e83140a83509f6b713c62c058866b12dee4c819efbe8a84a05\"" Sep 9 00:32:24.128620 containerd[1582]: time="2025-09-09T00:32:24.128584708Z" level=info msg="StartContainer for \"aa1e695e9ddd34e83140a83509f6b713c62c058866b12dee4c819efbe8a84a05\"" Sep 9 00:32:24.131312 containerd[1582]: time="2025-09-09T00:32:24.131273996Z" level=info msg="connecting to shim aa1e695e9ddd34e83140a83509f6b713c62c058866b12dee4c819efbe8a84a05" address="unix:///run/containerd/s/4f6801caf1d95d8cfa06c26348c9306c0b137c57a3dfd9acdaa9fe44d0d702c5" protocol=ttrpc version=3 Sep 9 00:32:24.163538 systemd[1]: Started cri-containerd-aa1e695e9ddd34e83140a83509f6b713c62c058866b12dee4c819efbe8a84a05.scope - libcontainer container aa1e695e9ddd34e83140a83509f6b713c62c058866b12dee4c819efbe8a84a05. 
Sep 9 00:32:24.237795 containerd[1582]: time="2025-09-09T00:32:24.237745056Z" level=info msg="StartContainer for \"aa1e695e9ddd34e83140a83509f6b713c62c058866b12dee4c819efbe8a84a05\" returns successfully" Sep 9 00:32:24.383645 kubelet[2739]: I0909 00:32:24.383415 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6557f4c94b-2gvbb" podStartSLOduration=2.365457347 podStartE2EDuration="29.383392324s" podCreationTimestamp="2025-09-09 00:31:55 +0000 UTC" firstStartedPulling="2025-09-09 00:31:56.690525067 +0000 UTC m=+46.612713251" lastFinishedPulling="2025-09-09 00:32:23.708460044 +0000 UTC m=+73.630648228" observedRunningTime="2025-09-09 00:32:24.382799162 +0000 UTC m=+74.304987346" watchObservedRunningTime="2025-09-09 00:32:24.383392324 +0000 UTC m=+74.305580508" Sep 9 00:32:24.384254 kubelet[2739]: I0909 00:32:24.383671 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6ccf697dfd-qb7sn" podStartSLOduration=35.652516971 podStartE2EDuration="53.383663611s" podCreationTimestamp="2025-09-09 00:31:31 +0000 UTC" firstStartedPulling="2025-09-09 00:32:00.601297453 +0000 UTC m=+50.523485637" lastFinishedPulling="2025-09-09 00:32:18.332444093 +0000 UTC m=+68.254632277" observedRunningTime="2025-09-09 00:32:19.390152364 +0000 UTC m=+69.312340578" watchObservedRunningTime="2025-09-09 00:32:24.383663611 +0000 UTC m=+74.305851795" Sep 9 00:32:25.503546 containerd[1582]: time="2025-09-09T00:32:25.503457069Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:25.510677 containerd[1582]: time="2025-09-09T00:32:25.510634770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 9 00:32:25.730904 containerd[1582]: time="2025-09-09T00:32:25.730840805Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:25.902907 containerd[1582]: time="2025-09-09T00:32:25.902739798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:25.903518 containerd[1582]: time="2025-09-09T00:32:25.903487283Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.194851694s" Sep 9 00:32:25.903705 containerd[1582]: time="2025-09-09T00:32:25.903526137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 9 00:32:25.929365 containerd[1582]: time="2025-09-09T00:32:25.928912092Z" level=info msg="CreateContainer within sandbox \"189a5841a1be5786297720d03c04912561e544a75899c3389f02ac322545bc84\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 9 00:32:26.063497 containerd[1582]: time="2025-09-09T00:32:26.063439410Z" level=info msg="Container cce2b2d06eedb27f3f5d1b9da55deb868b24d68efa31e629c96db1fed0c9446e: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:32:26.144802 containerd[1582]: time="2025-09-09T00:32:26.144323844Z" level=info msg="CreateContainer within sandbox \"189a5841a1be5786297720d03c04912561e544a75899c3389f02ac322545bc84\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"cce2b2d06eedb27f3f5d1b9da55deb868b24d68efa31e629c96db1fed0c9446e\"" Sep 9 00:32:26.145431 containerd[1582]: time="2025-09-09T00:32:26.145392410Z" level=info msg="StartContainer for \"cce2b2d06eedb27f3f5d1b9da55deb868b24d68efa31e629c96db1fed0c9446e\"" Sep 9 00:32:26.146971 containerd[1582]: time="2025-09-09T00:32:26.146946941Z" level=info msg="connecting to shim cce2b2d06eedb27f3f5d1b9da55deb868b24d68efa31e629c96db1fed0c9446e" address="unix:///run/containerd/s/aaa21026df4e31a026a19a09755431c8904eebbcce9a7cfea449ab16d8216116" protocol=ttrpc version=3 Sep 9 00:32:26.173598 systemd[1]: Started cri-containerd-cce2b2d06eedb27f3f5d1b9da55deb868b24d68efa31e629c96db1fed0c9446e.scope - libcontainer container cce2b2d06eedb27f3f5d1b9da55deb868b24d68efa31e629c96db1fed0c9446e. Sep 9 00:32:26.318064 containerd[1582]: time="2025-09-09T00:32:26.318013343Z" level=info msg="StartContainer for \"cce2b2d06eedb27f3f5d1b9da55deb868b24d68efa31e629c96db1fed0c9446e\" returns successfully" Sep 9 00:32:26.475935 kubelet[2739]: I0909 00:32:26.475544 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-5ll8l" podStartSLOduration=24.392291759 podStartE2EDuration="52.475523909s" podCreationTimestamp="2025-09-09 00:31:34 +0000 UTC" firstStartedPulling="2025-09-09 00:31:57.821347224 +0000 UTC m=+47.743535408" lastFinishedPulling="2025-09-09 00:32:25.904579374 +0000 UTC m=+75.826767558" observedRunningTime="2025-09-09 00:32:26.475052821 +0000 UTC m=+76.397240995" watchObservedRunningTime="2025-09-09 00:32:26.475523909 +0000 UTC m=+76.397712093" Sep 9 00:32:27.000516 systemd[1]: Started sshd@14-10.0.0.142:22-10.0.0.1:44512.service - OpenSSH per-connection server daemon (10.0.0.1:44512). Sep 9 00:32:27.078460 sshd[5460]: Accepted publickey for core from 10.0.0.1 port 44512 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:32:27.080668 sshd-session[5460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:32:27.087993 systemd-logind[1554]: New session 15 of user core. Sep 9 00:32:27.092598 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 00:32:27.117577 kubelet[2739]: I0909 00:32:27.117536 2739 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 9 00:32:27.119033 kubelet[2739]: I0909 00:32:27.119011 2739 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 9 00:32:27.302688 sshd[5462]: Connection closed by 10.0.0.1 port 44512 Sep 9 00:32:27.303362 sshd-session[5460]: pam_unix(sshd:session): session closed for user core Sep 9 00:32:27.310812 systemd[1]: sshd@14-10.0.0.142:22-10.0.0.1:44512.service: Deactivated successfully. Sep 9 00:32:27.313151 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 00:32:27.314249 systemd-logind[1554]: Session 15 logged out. Waiting for processes to exit. Sep 9 00:32:27.316246 systemd-logind[1554]: Removed session 15.
Sep 9 00:32:27.374615 kubelet[2739]: E0909 00:32:27.374564 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:32.319318 systemd[1]: Started sshd@15-10.0.0.142:22-10.0.0.1:54354.service - OpenSSH per-connection server daemon (10.0.0.1:54354). Sep 9 00:32:32.362502 sshd[5478]: Accepted publickey for core from 10.0.0.1 port 54354 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:32:32.363901 sshd-session[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:32:32.368387 systemd-logind[1554]: New session 16 of user core. Sep 9 00:32:32.377478 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 00:32:32.495737 sshd[5480]: Connection closed by 10.0.0.1 port 54354 Sep 9 00:32:32.496147 sshd-session[5478]: pam_unix(sshd:session): session closed for user core Sep 9 00:32:32.499864 systemd[1]: sshd@15-10.0.0.142:22-10.0.0.1:54354.service: Deactivated successfully. Sep 9 00:32:32.503049 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 00:32:32.504684 systemd-logind[1554]: Session 16 logged out. Waiting for processes to exit. Sep 9 00:32:32.506417 systemd-logind[1554]: Removed session 16. Sep 9 00:32:33.374668 kubelet[2739]: E0909 00:32:33.374606 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:33.926921 containerd[1582]: time="2025-09-09T00:32:33.926864292Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f07414bf926a638b6e16ee10b93ab7fa162b9ff93933ce4294e4af4f3de0117f\" id:\"d9db20748b872a40a263bc74dffe8b04ee2cbe1a8e53a4d470375f9d871e123e\" pid:5505 exited_at:{seconds:1757377953 nanos:926499028}" Sep 9 00:32:37.514607 systemd[1]: Started sshd@16-10.0.0.142:22-10.0.0.1:54360.service - OpenSSH per-connection server daemon (10.0.0.1:54360). Sep 9 00:32:37.605719 sshd[5518]: Accepted publickey for core from 10.0.0.1 port 54360 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:32:37.607876 sshd-session[5518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:32:37.613389 systemd-logind[1554]: New session 17 of user core. Sep 9 00:32:37.624665 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 00:32:37.802933 sshd[5522]: Connection closed by 10.0.0.1 port 54360 Sep 9 00:32:37.803223 sshd-session[5518]: pam_unix(sshd:session): session closed for user core Sep 9 00:32:37.809750 systemd[1]: sshd@16-10.0.0.142:22-10.0.0.1:54360.service: Deactivated successfully. Sep 9 00:32:37.812489 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 00:32:37.813565 systemd-logind[1554]: Session 17 logged out. Waiting for processes to exit. Sep 9 00:32:37.815028 systemd-logind[1554]: Removed session 17. Sep 9 00:32:40.375220 kubelet[2739]: E0909 00:32:40.375172 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:42.822397 systemd[1]: Started sshd@17-10.0.0.142:22-10.0.0.1:45956.service - OpenSSH per-connection server daemon (10.0.0.1:45956). 
Sep 9 00:32:42.882790 sshd[5543]: Accepted publickey for core from 10.0.0.1 port 45956 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:32:42.884874 sshd-session[5543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:32:42.890036 systemd-logind[1554]: New session 18 of user core. Sep 9 00:32:42.896481 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 9 00:32:43.006957 sshd[5545]: Connection closed by 10.0.0.1 port 45956 Sep 9 00:32:43.007463 sshd-session[5543]: pam_unix(sshd:session): session closed for user core Sep 9 00:32:43.020469 systemd[1]: sshd@17-10.0.0.142:22-10.0.0.1:45956.service: Deactivated successfully. Sep 9 00:32:43.022523 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 00:32:43.024328 systemd-logind[1554]: Session 18 logged out. Waiting for processes to exit. Sep 9 00:32:43.026599 systemd[1]: Started sshd@18-10.0.0.142:22-10.0.0.1:45964.service - OpenSSH per-connection server daemon (10.0.0.1:45964). Sep 9 00:32:43.027868 systemd-logind[1554]: Removed session 18. Sep 9 00:32:43.086392 sshd[5558]: Accepted publickey for core from 10.0.0.1 port 45964 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:32:43.088108 sshd-session[5558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:32:43.093300 systemd-logind[1554]: New session 19 of user core. Sep 9 00:32:43.103532 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 9 00:32:43.371866 containerd[1582]: time="2025-09-09T00:32:43.371733416Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c988e133f556e99b296423617b8bda7edf16d77768b2777c303203cc4598970d\" id:\"8636a84431865de3187d64e83a457ee56cf3e985381c8ed22039f5cf16b7068c\" pid:5578 exited_at:{seconds:1757377963 nanos:371409612}" Sep 9 00:32:43.699223 sshd[5560]: Connection closed by 10.0.0.1 port 45964 Sep 9 00:32:43.699709 sshd-session[5558]: pam_unix(sshd:session): session closed for user core Sep 9 00:32:43.712697 systemd[1]: sshd@18-10.0.0.142:22-10.0.0.1:45964.service: Deactivated successfully. Sep 9 00:32:43.714739 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 00:32:43.715703 systemd-logind[1554]: Session 19 logged out. Waiting for processes to exit. Sep 9 00:32:43.719130 systemd[1]: Started sshd@19-10.0.0.142:22-10.0.0.1:45974.service - OpenSSH per-connection server daemon (10.0.0.1:45974). Sep 9 00:32:43.720054 systemd-logind[1554]: Removed session 19. Sep 9 00:32:43.784747 sshd[5594]: Accepted publickey for core from 10.0.0.1 port 45974 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:32:43.786580 sshd-session[5594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:32:43.791075 systemd-logind[1554]: New session 20 of user core. Sep 9 00:32:43.804479 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 9 00:32:44.378242 kubelet[2739]: E0909 00:32:44.378193 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:44.409592 sshd[5596]: Connection closed by 10.0.0.1 port 45974 Sep 9 00:32:44.410859 sshd-session[5594]: pam_unix(sshd:session): session closed for user core Sep 9 00:32:44.420788 systemd[1]: sshd@19-10.0.0.142:22-10.0.0.1:45974.service: Deactivated successfully. Sep 9 00:32:44.423696 systemd[1]: session-20.scope: Deactivated successfully. 
Sep 9 00:32:44.424726 systemd-logind[1554]: Session 20 logged out. Waiting for processes to exit. Sep 9 00:32:44.428479 systemd[1]: Started sshd@20-10.0.0.142:22-10.0.0.1:45980.service - OpenSSH per-connection server daemon (10.0.0.1:45980). Sep 9 00:32:44.429646 systemd-logind[1554]: Removed session 20. Sep 9 00:32:44.476102 sshd[5615]: Accepted publickey for core from 10.0.0.1 port 45980 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:32:44.478060 sshd-session[5615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:32:44.485581 systemd-logind[1554]: New session 21 of user core. Sep 9 00:32:44.493588 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 9 00:32:44.842683 sshd[5617]: Connection closed by 10.0.0.1 port 45980 Sep 9 00:32:44.843026 sshd-session[5615]: pam_unix(sshd:session): session closed for user core Sep 9 00:32:44.855431 systemd[1]: sshd@20-10.0.0.142:22-10.0.0.1:45980.service: Deactivated successfully. Sep 9 00:32:44.858233 systemd[1]: session-21.scope: Deactivated successfully. Sep 9 00:32:44.859246 systemd-logind[1554]: Session 21 logged out. Waiting for processes to exit. Sep 9 00:32:44.865034 systemd[1]: Started sshd@21-10.0.0.142:22-10.0.0.1:45992.service - OpenSSH per-connection server daemon (10.0.0.1:45992). Sep 9 00:32:44.866029 systemd-logind[1554]: Removed session 21. Sep 9 00:32:44.916410 sshd[5630]: Accepted publickey for core from 10.0.0.1 port 45992 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:32:44.918072 sshd-session[5630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:32:44.923099 systemd-logind[1554]: New session 22 of user core. Sep 9 00:32:44.930565 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 9 00:32:45.131775 sshd[5632]: Connection closed by 10.0.0.1 port 45992 Sep 9 00:32:45.132583 sshd-session[5630]: pam_unix(sshd:session): session closed for user core Sep 9 00:32:45.137301 systemd[1]: sshd@21-10.0.0.142:22-10.0.0.1:45992.service: Deactivated successfully. Sep 9 00:32:45.139916 systemd[1]: session-22.scope: Deactivated successfully. Sep 9 00:32:45.140936 systemd-logind[1554]: Session 22 logged out. Waiting for processes to exit. Sep 9 00:32:45.142796 systemd-logind[1554]: Removed session 22. Sep 9 00:32:50.151999 systemd[1]: Started sshd@22-10.0.0.142:22-10.0.0.1:42320.service - OpenSSH per-connection server daemon (10.0.0.1:42320). Sep 9 00:32:50.207044 sshd[5645]: Accepted publickey for core from 10.0.0.1 port 42320 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:32:50.208760 sshd-session[5645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:32:50.213057 systemd-logind[1554]: New session 23 of user core. Sep 9 00:32:50.222487 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 9 00:32:50.352629 sshd[5647]: Connection closed by 10.0.0.1 port 42320 Sep 9 00:32:50.353006 sshd-session[5645]: pam_unix(sshd:session): session closed for user core Sep 9 00:32:50.357545 systemd[1]: sshd@22-10.0.0.142:22-10.0.0.1:42320.service: Deactivated successfully. Sep 9 00:32:50.360060 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 00:32:50.361026 systemd-logind[1554]: Session 23 logged out. Waiting for processes to exit. Sep 9 00:32:50.363216 systemd-logind[1554]: Removed session 23. 
Sep 9 00:32:50.444813 containerd[1582]: time="2025-09-09T00:32:50.444765556Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c2c64a42786d1f76a01ff274c6a1f684058013a41e1670a25f07bb011ca50fa\" id:\"957c718dfacc7738aece00ac4b9dd133d96cc5b3ca9d360d98afd4b1cada7980\" pid:5671 exited_at:{seconds:1757377970 nanos:444383553}" Sep 9 00:32:52.202958 containerd[1582]: time="2025-09-09T00:32:52.202900612Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c2c64a42786d1f76a01ff274c6a1f684058013a41e1670a25f07bb011ca50fa\" id:\"c922a059a1fe58bc9789b449dfa499bda239808e09c40ed8d8b16447343ad37a\" pid:5697 exited_at:{seconds:1757377972 nanos:202537845}" Sep 9 00:32:55.379055 systemd[1]: Started sshd@23-10.0.0.142:22-10.0.0.1:42332.service - OpenSSH per-connection server daemon (10.0.0.1:42332). Sep 9 00:32:55.430487 sshd[5709]: Accepted publickey for core from 10.0.0.1 port 42332 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:32:55.432220 sshd-session[5709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:32:55.436529 systemd-logind[1554]: New session 24 of user core. Sep 9 00:32:55.443486 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 9 00:32:55.564672 sshd[5711]: Connection closed by 10.0.0.1 port 42332 Sep 9 00:32:55.565019 sshd-session[5709]: pam_unix(sshd:session): session closed for user core Sep 9 00:32:55.568968 systemd[1]: sshd@23-10.0.0.142:22-10.0.0.1:42332.service: Deactivated successfully. Sep 9 00:32:55.571418 systemd[1]: session-24.scope: Deactivated successfully. Sep 9 00:32:55.573708 systemd-logind[1554]: Session 24 logged out. Waiting for processes to exit. Sep 9 00:32:55.575481 systemd-logind[1554]: Removed session 24. Sep 9 00:33:00.582973 systemd[1]: Started sshd@24-10.0.0.142:22-10.0.0.1:51946.service - OpenSSH per-connection server daemon (10.0.0.1:51946). Sep 9 00:33:00.638897 sshd[5727]: Accepted publickey for core from 10.0.0.1 port 51946 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:33:00.640518 sshd-session[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:33:00.645143 systemd-logind[1554]: New session 25 of user core. Sep 9 00:33:00.654458 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 9 00:33:00.797478 sshd[5729]: Connection closed by 10.0.0.1 port 51946 Sep 9 00:33:00.797822 sshd-session[5727]: pam_unix(sshd:session): session closed for user core Sep 9 00:33:00.802668 systemd[1]: sshd@24-10.0.0.142:22-10.0.0.1:51946.service: Deactivated successfully. Sep 9 00:33:00.805065 systemd[1]: session-25.scope: Deactivated successfully. Sep 9 00:33:00.805878 systemd-logind[1554]: Session 25 logged out. Waiting for processes to exit. Sep 9 00:33:00.808835 systemd-logind[1554]: Removed session 25. Sep 9 00:33:03.949380 containerd[1582]: time="2025-09-09T00:33:03.949311321Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f07414bf926a638b6e16ee10b93ab7fa162b9ff93933ce4294e4af4f3de0117f\" id:\"eaae988dc1a42d6ade36991b579392dad51b7c7f966f54dd632376bdc7197ad4\" pid:5755 exited_at:{seconds:1757377983 nanos:932649216}" Sep 9 00:33:05.814645 systemd[1]: Started sshd@25-10.0.0.142:22-10.0.0.1:51956.service - OpenSSH per-connection server daemon (10.0.0.1:51956). 
Sep 9 00:33:05.905789 sshd[5768]: Accepted publickey for core from 10.0.0.1 port 51956 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:33:05.908246 sshd-session[5768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:33:05.914544 systemd-logind[1554]: New session 26 of user core. Sep 9 00:33:05.922541 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 9 00:33:06.125643 sshd[5770]: Connection closed by 10.0.0.1 port 51956 Sep 9 00:33:06.126214 sshd-session[5768]: pam_unix(sshd:session): session closed for user core Sep 9 00:33:06.131765 systemd[1]: sshd@25-10.0.0.142:22-10.0.0.1:51956.service: Deactivated successfully. Sep 9 00:33:06.135076 systemd[1]: session-26.scope: Deactivated successfully. Sep 9 00:33:06.136858 systemd-logind[1554]: Session 26 logged out. Waiting for processes to exit. Sep 9 00:33:06.138455 systemd-logind[1554]: Removed session 26. Sep 9 00:33:09.188246 containerd[1582]: time="2025-09-09T00:33:09.188121324Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c988e133f556e99b296423617b8bda7edf16d77768b2777c303203cc4598970d\" id:\"c74d9e748229e288fb90cedebd6d7dd4935766cef236ae1ebc5a64856bf0f26c\" pid:5795 exited_at:{seconds:1757377989 nanos:187732780}" Sep 9 00:33:11.139763 systemd[1]: Started sshd@26-10.0.0.142:22-10.0.0.1:36070.service - OpenSSH per-connection server daemon (10.0.0.1:36070). Sep 9 00:33:11.194237 sshd[5808]: Accepted publickey for core from 10.0.0.1 port 36070 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:33:11.196009 sshd-session[5808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:33:11.201174 systemd-logind[1554]: New session 27 of user core. Sep 9 00:33:11.210510 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 9 00:33:11.364628 sshd[5810]: Connection closed by 10.0.0.1 port 36070 Sep 9 00:33:11.364985 sshd-session[5808]: pam_unix(sshd:session): session closed for user core Sep 9 00:33:11.369310 systemd[1]: sshd@26-10.0.0.142:22-10.0.0.1:36070.service: Deactivated successfully. Sep 9 00:33:11.371631 systemd[1]: session-27.scope: Deactivated successfully. Sep 9 00:33:11.372605 systemd-logind[1554]: Session 27 logged out. Waiting for processes to exit. Sep 9 00:33:11.374207 systemd-logind[1554]: Removed session 27.