Jul 9 09:28:38.943017 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Jul 8 08:29:03 -00 2025 Jul 9 09:28:38.943040 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=3331cec13b1d190d4054e739c6cc72d1bbe47015265c5d8f0c303fc32f06c18e Jul 9 09:28:38.943050 kernel: BIOS-provided physical RAM map: Jul 9 09:28:38.943060 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jul 9 09:28:38.943068 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jul 9 09:28:38.943075 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jul 9 09:28:38.943084 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable Jul 9 09:28:38.943092 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved Jul 9 09:28:38.943099 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 9 09:28:38.943107 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jul 9 09:28:38.943115 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable Jul 9 09:28:38.943123 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 9 09:28:38.943132 kernel: NX (Execute Disable) protection: active Jul 9 09:28:38.943140 kernel: APIC: Static calls initialized Jul 9 09:28:38.943149 kernel: SMBIOS 3.0.0 present. Jul 9 09:28:38.943158 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 Jul 9 09:28:38.943166 kernel: DMI: Memory slots populated: 1/1 Jul 9 09:28:38.943175 kernel: Hypervisor detected: KVM Jul 9 09:28:38.943183 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 9 09:28:38.943191 kernel: kvm-clock: using sched offset of 4815820786 cycles Jul 9 09:28:38.943200 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 9 09:28:38.943209 kernel: tsc: Detected 1996.249 MHz processor Jul 9 09:28:38.943217 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 9 09:28:38.943226 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 9 09:28:38.943234 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 Jul 9 09:28:38.943243 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jul 9 09:28:38.943253 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 9 09:28:38.943261 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 Jul 9 09:28:38.943269 kernel: ACPI: Early table checksum verification disabled Jul 9 09:28:38.943277 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) Jul 9 09:28:38.943286 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 9 09:28:38.943294 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 9 09:28:38.943302 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 9 09:28:38.943311 kernel: ACPI: FACS 0x00000000BFFE0000 000040 Jul 9 09:28:38.943319 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 9 09:28:38.943329 kernel: ACPI: WAET 0x00000000BFFE1B3D 
000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 9 09:28:38.943337 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] Jul 9 09:28:38.943345 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] Jul 9 09:28:38.943354 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] Jul 9 09:28:38.943362 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] Jul 9 09:28:38.943373 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] Jul 9 09:28:38.943382 kernel: No NUMA configuration found Jul 9 09:28:38.943392 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] Jul 9 09:28:38.943401 kernel: NODE_DATA(0) allocated [mem 0x13fff8dc0-0x13fffffff] Jul 9 09:28:38.943409 kernel: Zone ranges: Jul 9 09:28:38.943418 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 9 09:28:38.943427 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jul 9 09:28:38.943435 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] Jul 9 09:28:38.943444 kernel: Device empty Jul 9 09:28:38.943452 kernel: Movable zone start for each node Jul 9 09:28:38.943462 kernel: Early memory node ranges Jul 9 09:28:38.943471 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jul 9 09:28:38.943480 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] Jul 9 09:28:38.943488 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] Jul 9 09:28:38.943497 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] Jul 9 09:28:38.943505 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 9 09:28:38.943514 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 9 09:28:38.943523 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Jul 9 09:28:38.943531 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 9 09:28:38.943542 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 9 09:28:38.943550 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 9 09:28:38.943559 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 9 09:28:38.943568 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 9 09:28:38.943577 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 9 09:28:38.943585 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 9 09:28:38.943594 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 9 09:28:38.943602 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 9 09:28:38.943611 kernel: CPU topo: Max. logical packages: 2 Jul 9 09:28:38.943621 kernel: CPU topo: Max. logical dies: 2 Jul 9 09:28:38.945662 kernel: CPU topo: Max. dies per package: 1 Jul 9 09:28:38.945672 kernel: CPU topo: Max. threads per core: 1 Jul 9 09:28:38.945681 kernel: CPU topo: Num. cores per package: 1 Jul 9 09:28:38.945690 kernel: CPU topo: Num. 
threads per package: 1 Jul 9 09:28:38.945699 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jul 9 09:28:38.945708 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 9 09:28:38.945717 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices Jul 9 09:28:38.945725 kernel: Booting paravirtualized kernel on KVM Jul 9 09:28:38.945737 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 9 09:28:38.945746 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jul 9 09:28:38.945755 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jul 9 09:28:38.945764 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jul 9 09:28:38.945772 kernel: pcpu-alloc: [0] 0 1 Jul 9 09:28:38.945781 kernel: kvm-guest: PV spinlocks disabled, no host support Jul 9 09:28:38.945791 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=3331cec13b1d190d4054e739c6cc72d1bbe47015265c5d8f0c303fc32f06c18e Jul 9 09:28:38.945800 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 9 09:28:38.945811 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 9 09:28:38.945819 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 9 09:28:38.945828 kernel: Fallback order for Node 0: 0 Jul 9 09:28:38.945837 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443 Jul 9 09:28:38.945845 kernel: Policy zone: Normal Jul 9 09:28:38.945854 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 9 09:28:38.945863 kernel: software IO TLB: area num 2. Jul 9 09:28:38.945871 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 9 09:28:38.945880 kernel: ftrace: allocating 40097 entries in 157 pages Jul 9 09:28:38.945890 kernel: ftrace: allocated 157 pages with 5 groups Jul 9 09:28:38.945899 kernel: Dynamic Preempt: voluntary Jul 9 09:28:38.945907 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 9 09:28:38.945917 kernel: rcu: RCU event tracing is enabled. Jul 9 09:28:38.945926 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 9 09:28:38.945935 kernel: Trampoline variant of Tasks RCU enabled. Jul 9 09:28:38.945944 kernel: Rude variant of Tasks RCU enabled. Jul 9 09:28:38.945952 kernel: Tracing variant of Tasks RCU enabled. Jul 9 09:28:38.945961 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 9 09:28:38.945971 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 9 09:28:38.945980 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 9 09:28:38.945989 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 9 09:28:38.945998 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 9 09:28:38.946006 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jul 9 09:28:38.946015 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
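
The BIOS-e820 map logged near the top of this boot is what the later memory figures are derived from. As a rough, illustrative sketch (not part of the boot itself), entries in that format can be parsed and the usable ranges totalled like this:

import re

# Parse "BIOS-e820: [mem 0xSTART-0xEND] TYPE" entries like the ones logged
# above and total the ranges marked usable. Ranges are inclusive.
E820_RE = re.compile(r"\[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(lines):
    total = 0
    for line in lines:
        m = E820_RE.search(line)
        if m and m.group(3) == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            total += end - start + 1
    return total

sample = [
    "BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable",
    "BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable",
    "BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable",
]
print(round(usable_bytes(sample) / 2**20), "MiB usable")  # ~4095 MiB for this VM
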
Jul 9 09:28:38.946024 kernel: Console: colour VGA+ 80x25 Jul 9 09:28:38.946032 kernel: printk: legacy console [tty0] enabled Jul 9 09:28:38.946041 kernel: printk: legacy console [ttyS0] enabled Jul 9 09:28:38.946051 kernel: ACPI: Core revision 20240827 Jul 9 09:28:38.946059 kernel: APIC: Switch to symmetric I/O mode setup Jul 9 09:28:38.946068 kernel: x2apic enabled Jul 9 09:28:38.946076 kernel: APIC: Switched APIC routing to: physical x2apic Jul 9 09:28:38.946085 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 9 09:28:38.946094 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jul 9 09:28:38.946108 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249) Jul 9 09:28:38.946118 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jul 9 09:28:38.946127 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jul 9 09:28:38.946136 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 9 09:28:38.946145 kernel: Spectre V2 : Mitigation: Retpolines Jul 9 09:28:38.946154 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 9 09:28:38.946165 kernel: Speculative Store Bypass: Vulnerable Jul 9 09:28:38.946174 kernel: x86/fpu: x87 FPU will use FXSAVE Jul 9 09:28:38.946183 kernel: Freeing SMP alternatives memory: 32K Jul 9 09:28:38.946192 kernel: pid_max: default: 32768 minimum: 301 Jul 9 09:28:38.946201 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 9 09:28:38.946211 kernel: landlock: Up and running. Jul 9 09:28:38.946221 kernel: SELinux: Initializing. Jul 9 09:28:38.946230 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 9 09:28:38.946239 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 9 09:28:38.946248 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Jul 9 09:28:38.946257 kernel: Performance Events: AMD PMU driver. Jul 9 09:28:38.946266 kernel: ... version: 0 Jul 9 09:28:38.946275 kernel: ... bit width: 48 Jul 9 09:28:38.946284 kernel: ... generic registers: 4 Jul 9 09:28:38.946294 kernel: ... value mask: 0000ffffffffffff Jul 9 09:28:38.946303 kernel: ... max period: 00007fffffffffff Jul 9 09:28:38.946312 kernel: ... fixed-purpose events: 0 Jul 9 09:28:38.946321 kernel: ... event mask: 000000000000000f Jul 9 09:28:38.946330 kernel: signal: max sigframe size: 1440 Jul 9 09:28:38.946339 kernel: rcu: Hierarchical SRCU implementation. Jul 9 09:28:38.946348 kernel: rcu: Max phase no-delay instances is 400. Jul 9 09:28:38.946357 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 9 09:28:38.946366 kernel: smp: Bringing up secondary CPUs ... Jul 9 09:28:38.946376 kernel: smpboot: x86: Booting SMP configuration: Jul 9 09:28:38.946385 kernel: .... 
node #0, CPUs: #1 Jul 9 09:28:38.946394 kernel: smp: Brought up 1 node, 2 CPUs Jul 9 09:28:38.946403 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Jul 9 09:28:38.946413 kernel: Memory: 3962040K/4193772K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54592K init, 2376K bss, 227284K reserved, 0K cma-reserved) Jul 9 09:28:38.946422 kernel: devtmpfs: initialized Jul 9 09:28:38.946431 kernel: x86/mm: Memory block size: 128MB Jul 9 09:28:38.946440 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 9 09:28:38.946449 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 9 09:28:38.946460 kernel: pinctrl core: initialized pinctrl subsystem Jul 9 09:28:38.946469 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 9 09:28:38.946478 kernel: audit: initializing netlink subsys (disabled) Jul 9 09:28:38.946487 kernel: audit: type=2000 audit(1752053314.976:1): state=initialized audit_enabled=0 res=1 Jul 9 09:28:38.946496 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 9 09:28:38.946505 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 9 09:28:38.946514 kernel: cpuidle: using governor menu Jul 9 09:28:38.946523 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 9 09:28:38.946532 kernel: dca service started, version 1.12.1 Jul 9 09:28:38.946544 kernel: PCI: Using configuration type 1 for base access Jul 9 09:28:38.946553 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 9 09:28:38.946562 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 9 09:28:38.946571 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 9 09:28:38.946580 kernel: ACPI: Added _OSI(Module Device) Jul 9 09:28:38.946589 kernel: ACPI: Added _OSI(Processor Device) Jul 9 09:28:38.946599 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 9 09:28:38.946608 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 9 09:28:38.946617 kernel: ACPI: Interpreter enabled Jul 9 09:28:38.946640 kernel: ACPI: PM: (supports S0 S3 S5) Jul 9 09:28:38.946650 kernel: ACPI: Using IOAPIC for interrupt routing Jul 9 09:28:38.946659 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 9 09:28:38.946668 kernel: PCI: Using E820 reservations for host bridge windows Jul 9 09:28:38.946677 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jul 9 09:28:38.946686 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 9 09:28:38.946828 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jul 9 09:28:38.946921 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jul 9 09:28:38.947011 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jul 9 09:28:38.947025 kernel: acpiphp: Slot [3] registered Jul 9 09:28:38.947034 kernel: acpiphp: Slot [4] registered Jul 9 09:28:38.947044 kernel: acpiphp: Slot [5] registered Jul 9 09:28:38.947053 kernel: acpiphp: Slot [6] registered Jul 9 09:28:38.947062 kernel: acpiphp: Slot [7] registered Jul 9 09:28:38.947070 kernel: acpiphp: Slot [8] registered Jul 9 09:28:38.947079 kernel: acpiphp: Slot [9] registered Jul 9 09:28:38.947091 kernel: acpiphp: Slot [10] registered Jul 9 09:28:38.947100 kernel: acpiphp: Slot [11] registered Jul 9 09:28:38.947109 kernel: acpiphp: Slot 
[12] registered Jul 9 09:28:38.947118 kernel: acpiphp: Slot [13] registered Jul 9 09:28:38.947127 kernel: acpiphp: Slot [14] registered Jul 9 09:28:38.947135 kernel: acpiphp: Slot [15] registered Jul 9 09:28:38.947144 kernel: acpiphp: Slot [16] registered Jul 9 09:28:38.947153 kernel: acpiphp: Slot [17] registered Jul 9 09:28:38.947162 kernel: acpiphp: Slot [18] registered Jul 9 09:28:38.947171 kernel: acpiphp: Slot [19] registered Jul 9 09:28:38.947182 kernel: acpiphp: Slot [20] registered Jul 9 09:28:38.947191 kernel: acpiphp: Slot [21] registered Jul 9 09:28:38.947199 kernel: acpiphp: Slot [22] registered Jul 9 09:28:38.947208 kernel: acpiphp: Slot [23] registered Jul 9 09:28:38.947217 kernel: acpiphp: Slot [24] registered Jul 9 09:28:38.947226 kernel: acpiphp: Slot [25] registered Jul 9 09:28:38.947235 kernel: acpiphp: Slot [26] registered Jul 9 09:28:38.947244 kernel: acpiphp: Slot [27] registered Jul 9 09:28:38.947253 kernel: acpiphp: Slot [28] registered Jul 9 09:28:38.947263 kernel: acpiphp: Slot [29] registered Jul 9 09:28:38.947272 kernel: acpiphp: Slot [30] registered Jul 9 09:28:38.947281 kernel: acpiphp: Slot [31] registered Jul 9 09:28:38.947289 kernel: PCI host bridge to bus 0000:00 Jul 9 09:28:38.947378 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 9 09:28:38.947456 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 9 09:28:38.947534 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 9 09:28:38.947609 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 9 09:28:38.952866 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] Jul 9 09:28:38.952948 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 9 09:28:38.953056 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Jul 9 09:28:38.953159 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Jul 9 09:28:38.953255 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint Jul 9 09:28:38.953344 kernel: pci 0000:00:01.1: BAR 4 [io 0xc120-0xc12f] Jul 9 09:28:38.953436 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Jul 9 09:28:38.953524 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk Jul 9 09:28:38.953611 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Jul 9 09:28:38.953721 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk Jul 9 09:28:38.953816 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Jul 9 09:28:38.953904 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jul 9 09:28:38.953990 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jul 9 09:28:38.954091 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Jul 9 09:28:38.954180 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] Jul 9 09:28:38.954274 kernel: pci 0000:00:02.0: BAR 2 [mem 0xc000000000-0xc000003fff 64bit pref] Jul 9 09:28:38.954361 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff] Jul 9 09:28:38.954449 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref] Jul 9 09:28:38.954536 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 9 09:28:38.955838 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jul 9 09:28:38.955941 kernel: pci 
0000:00:03.0: BAR 0 [io 0xc080-0xc0bf] Jul 9 09:28:38.956045 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff] Jul 9 09:28:38.956135 kernel: pci 0000:00:03.0: BAR 4 [mem 0xc000004000-0xc000007fff 64bit pref] Jul 9 09:28:38.956223 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref] Jul 9 09:28:38.956318 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jul 9 09:28:38.956408 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f] Jul 9 09:28:38.956501 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff] Jul 9 09:28:38.956589 kernel: pci 0000:00:04.0: BAR 4 [mem 0xc000008000-0xc00000bfff 64bit pref] Jul 9 09:28:38.960147 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint Jul 9 09:28:38.960241 kernel: pci 0000:00:05.0: BAR 0 [io 0xc0c0-0xc0ff] Jul 9 09:28:38.960329 kernel: pci 0000:00:05.0: BAR 4 [mem 0xc00000c000-0xc00000ffff 64bit pref] Jul 9 09:28:38.960427 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jul 9 09:28:38.960516 kernel: pci 0000:00:06.0: BAR 0 [io 0xc100-0xc11f] Jul 9 09:28:38.960610 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfeb93000-0xfeb93fff] Jul 9 09:28:38.960777 kernel: pci 0000:00:06.0: BAR 4 [mem 0xc000010000-0xc000013fff 64bit pref] Jul 9 09:28:38.960793 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 9 09:28:38.960802 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 9 09:28:38.960812 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 9 09:28:38.960821 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 9 09:28:38.960830 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 9 09:28:38.960840 kernel: iommu: Default domain type: Translated Jul 9 09:28:38.960849 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 9 09:28:38.960862 kernel: PCI: Using ACPI for IRQ routing Jul 9 09:28:38.960871 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 9 09:28:38.960880 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jul 9 09:28:38.960889 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] Jul 9 09:28:38.960982 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jul 9 09:28:38.961072 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jul 9 09:28:38.961159 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 9 09:28:38.961173 kernel: vgaarb: loaded Jul 9 09:28:38.961186 kernel: clocksource: Switched to clocksource kvm-clock Jul 9 09:28:38.961195 kernel: VFS: Disk quotas dquot_6.6.0 Jul 9 09:28:38.961204 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 9 09:28:38.961214 kernel: pnp: PnP ACPI init Jul 9 09:28:38.961311 kernel: pnp 00:03: [dma 2] Jul 9 09:28:38.961336 kernel: pnp: PnP ACPI: found 5 devices Jul 9 09:28:38.961345 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 9 09:28:38.961354 kernel: NET: Registered PF_INET protocol family Jul 9 09:28:38.961364 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 9 09:28:38.961376 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 9 09:28:38.961385 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 9 09:28:38.961394 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 9 09:28:38.961403 kernel: TCP bind hash table entries: 32768 (order: 
8, 1048576 bytes, linear) Jul 9 09:28:38.961413 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 9 09:28:38.961422 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 9 09:28:38.961431 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 9 09:28:38.961440 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 9 09:28:38.961449 kernel: NET: Registered PF_XDP protocol family Jul 9 09:28:38.961532 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 9 09:28:38.961608 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 9 09:28:38.961706 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 9 09:28:38.961782 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] Jul 9 09:28:38.961858 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] Jul 9 09:28:38.961945 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jul 9 09:28:38.962032 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 9 09:28:38.962050 kernel: PCI: CLS 0 bytes, default 64 Jul 9 09:28:38.962059 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 9 09:28:38.962069 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) Jul 9 09:28:38.962078 kernel: Initialise system trusted keyrings Jul 9 09:28:38.962087 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 9 09:28:38.962097 kernel: Key type asymmetric registered Jul 9 09:28:38.962106 kernel: Asymmetric key parser 'x509' registered Jul 9 09:28:38.962115 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 9 09:28:38.962124 kernel: io scheduler mq-deadline registered Jul 9 09:28:38.962135 kernel: io scheduler kyber registered Jul 9 09:28:38.962145 kernel: io scheduler bfq registered Jul 9 09:28:38.962154 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 9 09:28:38.962163 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jul 9 09:28:38.962173 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jul 9 09:28:38.962182 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jul 9 09:28:38.962191 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jul 9 09:28:38.962200 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 9 09:28:38.962210 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 9 09:28:38.962221 kernel: random: crng init done Jul 9 09:28:38.962230 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 9 09:28:38.962239 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 9 09:28:38.962248 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 9 09:28:38.962335 kernel: rtc_cmos 00:04: RTC can wake from S4 Jul 9 09:28:38.962350 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 9 09:28:38.962427 kernel: rtc_cmos 00:04: registered as rtc0 Jul 9 09:28:38.962505 kernel: rtc_cmos 00:04: setting system clock to 2025-07-09T09:28:38 UTC (1752053318) Jul 9 09:28:38.962588 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jul 9 09:28:38.962601 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jul 9 09:28:38.962611 kernel: NET: Registered PF_INET6 protocol family Jul 9 09:28:38.962620 kernel: Segment Routing with IPv6 Jul 9 09:28:38.962654 kernel: In-situ OAM (IOAM) with IPv6 Jul 9 09:28:38.962664 kernel: NET: Registered PF_PACKET protocol family Jul 9 09:28:38.962673 kernel: Key type 
dns_resolver registered Jul 9 09:28:38.962709 kernel: IPI shorthand broadcast: enabled Jul 9 09:28:38.962719 kernel: sched_clock: Marking stable (3627008391, 192691035)->(3829841327, -10141901) Jul 9 09:28:38.962732 kernel: registered taskstats version 1 Jul 9 09:28:38.962741 kernel: Loading compiled-in X.509 certificates Jul 9 09:28:38.962750 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: 979ef2c0f02e8e58776916c0ada334818b3eaefe' Jul 9 09:28:38.962760 kernel: Demotion targets for Node 0: null Jul 9 09:28:38.962769 kernel: Key type .fscrypt registered Jul 9 09:28:38.962778 kernel: Key type fscrypt-provisioning registered Jul 9 09:28:38.962787 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 9 09:28:38.962796 kernel: ima: Allocated hash algorithm: sha1 Jul 9 09:28:38.962805 kernel: ima: No architecture policies found Jul 9 09:28:38.962816 kernel: clk: Disabling unused clocks Jul 9 09:28:38.962825 kernel: Warning: unable to open an initial console. Jul 9 09:28:38.962835 kernel: Freeing unused kernel image (initmem) memory: 54592K Jul 9 09:28:38.962844 kernel: Write protecting the kernel read-only data: 24576k Jul 9 09:28:38.962853 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 9 09:28:38.962862 kernel: Run /init as init process Jul 9 09:28:38.962871 kernel: with arguments: Jul 9 09:28:38.962880 kernel: /init Jul 9 09:28:38.962889 kernel: with environment: Jul 9 09:28:38.962899 kernel: HOME=/ Jul 9 09:28:38.962908 kernel: TERM=linux Jul 9 09:28:38.962917 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 9 09:28:38.962928 systemd[1]: Successfully made /usr/ read-only. Jul 9 09:28:38.962941 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 9 09:28:38.962952 systemd[1]: Detected virtualization kvm. Jul 9 09:28:38.962962 systemd[1]: Detected architecture x86-64. Jul 9 09:28:38.962982 systemd[1]: Running in initrd. Jul 9 09:28:38.962995 systemd[1]: No hostname configured, using default hostname. Jul 9 09:28:38.963006 systemd[1]: Hostname set to . Jul 9 09:28:38.963016 systemd[1]: Initializing machine ID from VM UUID. Jul 9 09:28:38.963026 systemd[1]: Queued start job for default target initrd.target. Jul 9 09:28:38.963036 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 09:28:38.963049 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 9 09:28:38.963060 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 9 09:28:38.963070 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 9 09:28:38.963081 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 9 09:28:38.963091 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 9 09:28:38.963103 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 9 09:28:38.963113 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... 
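
The dev-disk-by\x2dlabel-*.device names in the "Expecting device" lines above are systemd's escaped form of the /dev/disk/by-label/... paths. A simplified sketch of that escaping (the real rules in systemd-escape --path cover more corner cases):

# Simplified take on how systemd turns a block-device path into a .device
# unit name: strip the leading "/", map "/" to "-", and hex-escape anything
# that is not alphanumeric, "_" or ".".
def path_to_device_unit(path):
    out = []
    for ch in path.lstrip("/"):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in "_.":
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out) + ".device"

print(path_to_device_unit("/dev/disk/by-label/EFI-SYSTEM"))
# dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
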
Jul 9 09:28:38.963125 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 09:28:38.963135 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 9 09:28:38.963145 systemd[1]: Reached target paths.target - Path Units. Jul 9 09:28:38.963155 systemd[1]: Reached target slices.target - Slice Units. Jul 9 09:28:38.963165 systemd[1]: Reached target swap.target - Swaps. Jul 9 09:28:38.963175 systemd[1]: Reached target timers.target - Timer Units. Jul 9 09:28:38.963187 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 9 09:28:38.963197 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 9 09:28:38.963209 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 9 09:28:38.963219 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 9 09:28:38.963231 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 9 09:28:38.963242 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 9 09:28:38.963252 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 9 09:28:38.963262 systemd[1]: Reached target sockets.target - Socket Units. Jul 9 09:28:38.963272 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 9 09:28:38.963282 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 9 09:28:38.963292 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 9 09:28:38.963304 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 9 09:28:38.963315 systemd[1]: Starting systemd-fsck-usr.service... Jul 9 09:28:38.963326 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 9 09:28:38.963336 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 9 09:28:38.963347 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 09:28:38.963381 systemd-journald[212]: Collecting audit messages is disabled. Jul 9 09:28:38.963405 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 9 09:28:38.963419 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 9 09:28:38.963430 systemd-journald[212]: Journal started Jul 9 09:28:38.963452 systemd-journald[212]: Runtime Journal (/run/log/journal/f8df45cdd24945d7a9a91ae444bf579e) is 8M, max 78.5M, 70.5M free. Jul 9 09:28:38.969702 systemd[1]: Finished systemd-fsck-usr.service. Jul 9 09:28:38.968444 systemd-modules-load[214]: Inserted module 'overlay' Jul 9 09:28:38.980664 systemd[1]: Started systemd-journald.service - Journal Service. Jul 9 09:28:38.986736 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 9 09:28:39.004653 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 9 09:28:39.006676 kernel: Bridge firewalling registered Jul 9 09:28:39.006718 systemd-modules-load[214]: Inserted module 'br_netfilter' Jul 9 09:28:39.009749 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 9 09:28:39.054789 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
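
systemd-modules-load inserted br_netfilter just above; the preceding bridge warning is the kernel noting that bridged traffic no longer traverses arp/ip/ip6tables unless that module is loaded. A minimal sketch of doing the same by hand, assuming the standard sysctl paths:

import pathlib
import subprocess

# Load br_netfilter and make bridged IPv4/IPv6 traffic traverse iptables.
# The sysctl files only appear once the module is loaded.
subprocess.run(["modprobe", "br_netfilter"], check=True)
for knob in ("bridge-nf-call-iptables", "bridge-nf-call-ip6tables"):
    pathlib.Path("/proc/sys/net/bridge", knob).write_text("1\n")
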
Jul 9 09:28:39.055727 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 09:28:39.061694 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 9 09:28:39.064787 systemd-tmpfiles[227]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 9 09:28:39.067929 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 9 09:28:39.086625 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 9 09:28:39.091851 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 9 09:28:39.098752 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 9 09:28:39.105550 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 9 09:28:39.108747 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 9 09:28:39.121321 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 9 09:28:39.123198 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 9 09:28:39.140145 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 9 09:28:39.162913 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=3331cec13b1d190d4054e739c6cc72d1bbe47015265c5d8f0c303fc32f06c18e Jul 9 09:28:39.172125 systemd-resolved[247]: Positive Trust Anchors: Jul 9 09:28:39.172139 systemd-resolved[247]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 9 09:28:39.172181 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 9 09:28:39.174798 systemd-resolved[247]: Defaulting to hostname 'linux'. Jul 9 09:28:39.178034 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 9 09:28:39.179229 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 9 09:28:39.262652 kernel: SCSI subsystem initialized Jul 9 09:28:39.273645 kernel: Loading iSCSI transport class v2.0-870. Jul 9 09:28:39.284657 kernel: iscsi: registered transport (tcp) Jul 9 09:28:39.307277 kernel: iscsi: registered transport (qla4xxx) Jul 9 09:28:39.307312 kernel: QLogic iSCSI HBA Driver Jul 9 09:28:39.337836 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 9 09:28:39.365600 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 9 09:28:39.367988 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jul 9 09:28:39.470451 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 9 09:28:39.475968 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 9 09:28:39.565740 kernel: raid6: sse2x4 gen() 6468 MB/s Jul 9 09:28:39.583735 kernel: raid6: sse2x2 gen() 14015 MB/s Jul 9 09:28:39.602128 kernel: raid6: sse2x1 gen() 9684 MB/s Jul 9 09:28:39.602190 kernel: raid6: using algorithm sse2x2 gen() 14015 MB/s Jul 9 09:28:39.621166 kernel: raid6: .... xor() 9201 MB/s, rmw enabled Jul 9 09:28:39.621228 kernel: raid6: using ssse3x2 recovery algorithm Jul 9 09:28:39.643726 kernel: xor: measuring software checksum speed Jul 9 09:28:39.643789 kernel: prefetch64-sse : 16904 MB/sec Jul 9 09:28:39.646096 kernel: generic_sse : 15722 MB/sec Jul 9 09:28:39.646155 kernel: xor: using function: prefetch64-sse (16904 MB/sec) Jul 9 09:28:39.873772 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 9 09:28:39.886020 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 9 09:28:39.889386 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 09:28:39.969872 systemd-udevd[461]: Using default interface naming scheme 'v255'. Jul 9 09:28:39.983402 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 09:28:39.994141 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 9 09:28:40.027200 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation Jul 9 09:28:40.081685 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 9 09:28:40.086853 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 9 09:28:40.182805 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 09:28:40.194842 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 9 09:28:40.280662 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jul 9 09:28:40.285874 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) Jul 9 09:28:40.300649 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 9 09:28:40.300771 kernel: GPT:17805311 != 20971519 Jul 9 09:28:40.300792 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 9 09:28:40.300812 kernel: GPT:17805311 != 20971519 Jul 9 09:28:40.300828 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 9 09:28:40.300847 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 9 09:28:40.332593 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 9 09:28:40.335383 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 09:28:40.375916 kernel: libata version 3.00 loaded. Jul 9 09:28:40.375942 kernel: ata_piix 0000:00:01.1: version 2.13 Jul 9 09:28:40.376136 kernel: scsi host0: ata_piix Jul 9 09:28:40.376294 kernel: scsi host1: ata_piix Jul 9 09:28:40.374297 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 09:28:40.383831 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 lpm-pol 0 Jul 9 09:28:40.383859 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 lpm-pol 0 Jul 9 09:28:40.379983 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
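
The GPT complaints above ("GPT:17805311 != 20971519 ... Use GNU Parted to correct GPT errors") mean the backup GPT header written into the Flatcar image sits where the end of the smaller original image was, not at the end of the 10 GiB virtual disk; disk-uuid.service rewrites the headers a moment later. The arithmetic, for illustration:

# Numbers taken from the log: the virtio disk has 20971520 512-byte sectors,
# but the image's GPT records its backup header at LBA 17805311.
SECTOR = 512
disk_sectors = 20971520
recorded_alt_lba = 17805311

print(disk_sectors * SECTOR / 2**30, "GiB attached disk")         # 10.0
print((recorded_alt_lba + 1) * SECTOR / 2**30, "GiB image size")  # ~8.49
print("backup header is", disk_sectors - 1 - recorded_alt_lba, "sectors short of the end")
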
Jul 9 09:28:40.391267 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jul 9 09:28:40.393262 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 9 09:28:40.452655 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 9 09:28:40.473380 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 09:28:40.484513 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 9 09:28:40.495209 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 9 09:28:40.503707 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 9 09:28:40.504304 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 9 09:28:40.507722 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 9 09:28:40.542197 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 9 09:28:40.543792 disk-uuid[559]: Primary Header is updated. Jul 9 09:28:40.543792 disk-uuid[559]: Secondary Entries is updated. Jul 9 09:28:40.543792 disk-uuid[559]: Secondary Header is updated. Jul 9 09:28:40.763677 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 9 09:28:40.779707 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 9 09:28:40.780946 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 09:28:40.782167 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 9 09:28:40.784228 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 9 09:28:40.827986 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 9 09:28:41.623930 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 9 09:28:41.625770 disk-uuid[560]: The operation has completed successfully. Jul 9 09:28:41.722989 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 9 09:28:41.723158 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 9 09:28:41.772183 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 9 09:28:41.788506 sh[585]: Success Jul 9 09:28:41.833302 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 9 09:28:41.833445 kernel: device-mapper: uevent: version 1.0.3 Jul 9 09:28:41.841709 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 9 09:28:41.868709 kernel: device-mapper: verity: sha256 using shash "sha256-ssse3" Jul 9 09:28:41.976111 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 9 09:28:41.983879 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 9 09:28:42.001721 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
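
/dev/mapper/usr above is a dm-verity device: its integrity is pinned by the verity.usrhash=3331cec... root hash on the kernel command line. As a hedged illustration of the mechanism (not Flatcar's exact invocation, which keeps the hash tree inside the USR-A partition), a verity mapping is normally opened like this:

import subprocess

# Open a dm-verity mapping named "usr" from a data device plus a hash device,
# verified against the root hash from the kernel command line. The hash
# device path here is purely hypothetical.
ROOT_HASH = "3331cec13b1d190d4054e739c6cc72d1bbe47015265c5d8f0c303fc32f06c18e"
subprocess.run(
    ["veritysetup", "open",
     "/dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132",  # data (USR-A)
     "usr",                                                         # -> /dev/mapper/usr
     "/dev/example-hash-device",                                    # hypothetical hash device
     ROOT_HASH],
    check=True,
)
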
Jul 9 09:28:42.030700 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 9 09:28:42.038743 kernel: BTRFS: device fsid 8a7b8c84-7fe6-440f-95a1-3ff425e81fda devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (597) Jul 9 09:28:42.046805 kernel: BTRFS info (device dm-0): first mount of filesystem 8a7b8c84-7fe6-440f-95a1-3ff425e81fda Jul 9 09:28:42.046909 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 9 09:28:42.051107 kernel: BTRFS info (device dm-0): using free-space-tree Jul 9 09:28:42.072044 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 9 09:28:42.074293 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 9 09:28:42.076375 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 9 09:28:42.078907 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 9 09:28:42.084902 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 9 09:28:42.134664 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (634) Jul 9 09:28:42.141466 kernel: BTRFS info (device vda6): first mount of filesystem f90b1c2f-8ecc-4345-b0e5-5297e348f3bd Jul 9 09:28:42.141496 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 9 09:28:42.141515 kernel: BTRFS info (device vda6): using free-space-tree Jul 9 09:28:42.152675 kernel: BTRFS info (device vda6): last unmount of filesystem f90b1c2f-8ecc-4345-b0e5-5297e348f3bd Jul 9 09:28:42.154081 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 9 09:28:42.156751 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 9 09:28:42.244848 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 9 09:28:42.251698 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 9 09:28:42.311138 systemd-networkd[769]: lo: Link UP Jul 9 09:28:42.311149 systemd-networkd[769]: lo: Gained carrier Jul 9 09:28:42.316478 systemd-networkd[769]: Enumeration completed Jul 9 09:28:42.317028 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 09:28:42.317035 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 9 09:28:42.317875 systemd-networkd[769]: eth0: Link UP Jul 9 09:28:42.317881 systemd-networkd[769]: eth0: Gained carrier Jul 9 09:28:42.317889 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 09:28:42.320179 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 9 09:28:42.329145 systemd[1]: Reached target network.target - Network. 
Jul 9 09:28:42.331673 systemd-networkd[769]: eth0: DHCPv4 address 172.24.4.7/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jul 9 09:28:42.384943 ignition[681]: Ignition 2.21.0 Jul 9 09:28:42.384954 ignition[681]: Stage: fetch-offline Jul 9 09:28:42.384984 ignition[681]: no configs at "/usr/lib/ignition/base.d" Jul 9 09:28:42.384993 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 9 09:28:42.385077 ignition[681]: parsed url from cmdline: "" Jul 9 09:28:42.387804 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 9 09:28:42.385081 ignition[681]: no config URL provided Jul 9 09:28:42.385086 ignition[681]: reading system config file "/usr/lib/ignition/user.ign" Jul 9 09:28:42.385093 ignition[681]: no config at "/usr/lib/ignition/user.ign" Jul 9 09:28:42.385097 ignition[681]: failed to fetch config: resource requires networking Jul 9 09:28:42.390795 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 9 09:28:42.386062 ignition[681]: Ignition finished successfully Jul 9 09:28:42.419117 ignition[780]: Ignition 2.21.0 Jul 9 09:28:42.419136 ignition[780]: Stage: fetch Jul 9 09:28:42.419283 ignition[780]: no configs at "/usr/lib/ignition/base.d" Jul 9 09:28:42.419294 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 9 09:28:42.419377 ignition[780]: parsed url from cmdline: "" Jul 9 09:28:42.419381 ignition[780]: no config URL provided Jul 9 09:28:42.419386 ignition[780]: reading system config file "/usr/lib/ignition/user.ign" Jul 9 09:28:42.419394 ignition[780]: no config at "/usr/lib/ignition/user.ign" Jul 9 09:28:42.419498 ignition[780]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jul 9 09:28:42.419777 ignition[780]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jul 9 09:28:42.419811 ignition[780]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jul 9 09:28:42.874185 ignition[780]: GET result: OK Jul 9 09:28:42.874957 ignition[780]: parsing config with SHA512: 22e9e0744f834bc8735aa24d2a287251d73fe2fef48a2540d905ed78ec29ed8e7e6a6f0d708fc05662fc02ccd1bda9aab3ace33b0fac15dcdd109117dca63500 Jul 9 09:28:42.895924 unknown[780]: fetched base config from "system" Jul 9 09:28:42.895949 unknown[780]: fetched base config from "system" Jul 9 09:28:42.896933 ignition[780]: fetch: fetch complete Jul 9 09:28:42.895963 unknown[780]: fetched user config from "openstack" Jul 9 09:28:42.896945 ignition[780]: fetch: fetch passed Jul 9 09:28:42.897030 ignition[780]: Ignition finished successfully Jul 9 09:28:42.903562 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 9 09:28:42.908452 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 9 09:28:42.984716 ignition[786]: Ignition 2.21.0 Jul 9 09:28:42.984736 ignition[786]: Stage: kargs Jul 9 09:28:42.985048 ignition[786]: no configs at "/usr/lib/ignition/base.d" Jul 9 09:28:42.985074 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 9 09:28:42.987907 ignition[786]: kargs: kargs passed Jul 9 09:28:42.988172 ignition[786]: Ignition finished successfully Jul 9 09:28:42.993027 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 9 09:28:42.998147 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
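
In the fetch stage above the config drive was not present, and the GET to the OpenStack metadata service at 169.254.169.254 succeeded instead. For debugging from inside a booted guest, the same document Ignition downloaded can be pulled by hand; a minimal sketch:

import urllib.request

# Endpoint taken from the Ignition log line above.
URL = "http://169.254.169.254/openstack/latest/user_data"
with urllib.request.urlopen(URL, timeout=5) as resp:
    user_data = resp.read()
print(len(user_data), "bytes of user_data")
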
Jul 9 09:28:43.038808 ignition[793]: Ignition 2.21.0 Jul 9 09:28:43.038822 ignition[793]: Stage: disks Jul 9 09:28:43.039598 ignition[793]: no configs at "/usr/lib/ignition/base.d" Jul 9 09:28:43.039609 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 9 09:28:43.041090 ignition[793]: disks: disks passed Jul 9 09:28:43.044937 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 9 09:28:43.041552 ignition[793]: Ignition finished successfully Jul 9 09:28:43.048323 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 9 09:28:43.049029 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 9 09:28:43.050907 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 9 09:28:43.052645 systemd[1]: Reached target sysinit.target - System Initialization. Jul 9 09:28:43.054785 systemd[1]: Reached target basic.target - Basic System. Jul 9 09:28:43.058022 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 9 09:28:43.098838 systemd-fsck[802]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jul 9 09:28:43.115399 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 9 09:28:43.122934 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 9 09:28:43.318686 kernel: EXT4-fs (vda9): mounted filesystem 29d3077b-4f9b-456e-9d11-186262f0abd5 r/w with ordered data mode. Quota mode: none. Jul 9 09:28:43.319184 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 9 09:28:43.320739 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 9 09:28:43.325224 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 9 09:28:43.328797 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 9 09:28:43.337967 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 9 09:28:43.339131 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jul 9 09:28:43.344832 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 9 09:28:43.344912 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 9 09:28:43.362888 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 9 09:28:43.367229 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 9 09:28:43.371679 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (810) Jul 9 09:28:43.378652 kernel: BTRFS info (device vda6): first mount of filesystem f90b1c2f-8ecc-4345-b0e5-5297e348f3bd Jul 9 09:28:43.378757 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 9 09:28:43.378772 kernel: BTRFS info (device vda6): using free-space-tree Jul 9 09:28:43.391853 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
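
For scale, the fsck summary above ("15/1628000 files, 120826/1617920 blocks") works out as follows, assuming the usual 4 KiB ext4 block size (an assumption; dumpe2fs would confirm it):

# Counts copied from the systemd-fsck line above; the block size is assumed.
inodes_used, inodes_total = 15, 1_628_000
blocks_used, blocks_total = 120_826, 1_617_920
BLOCK = 4096

print(f"inodes: {inodes_used / inodes_total:.4%} used")
print(f"blocks: {blocks_used / blocks_total:.1%} used, "
      f"~{blocks_used * BLOCK / 2**20:.0f} MiB of {blocks_total * BLOCK / 2**30:.1f} GiB")
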
Jul 9 09:28:43.490712 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 9 09:28:43.540503 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Jul 9 09:28:43.546969 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Jul 9 09:28:43.554674 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Jul 9 09:28:43.561243 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Jul 9 09:28:43.669959 systemd-networkd[769]: eth0: Gained IPv6LL Jul 9 09:28:43.737300 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 9 09:28:43.741826 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 9 09:28:43.745868 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 9 09:28:43.778182 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 9 09:28:43.787020 kernel: BTRFS info (device vda6): last unmount of filesystem f90b1c2f-8ecc-4345-b0e5-5297e348f3bd Jul 9 09:28:43.809585 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 9 09:28:43.820175 ignition[928]: INFO : Ignition 2.21.0 Jul 9 09:28:43.822749 ignition[928]: INFO : Stage: mount Jul 9 09:28:43.822749 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 09:28:43.822749 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 9 09:28:43.826321 ignition[928]: INFO : mount: mount passed Jul 9 09:28:43.826321 ignition[928]: INFO : Ignition finished successfully Jul 9 09:28:43.828157 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 9 09:28:44.532666 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 9 09:28:46.545705 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 9 09:28:50.557725 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 9 09:28:50.571144 coreos-metadata[812]: Jul 09 09:28:50.571 WARN failed to locate config-drive, using the metadata service API instead Jul 9 09:28:50.615292 coreos-metadata[812]: Jul 09 09:28:50.615 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 9 09:28:50.631865 coreos-metadata[812]: Jul 09 09:28:50.631 INFO Fetch successful Jul 9 09:28:50.631865 coreos-metadata[812]: Jul 09 09:28:50.631 INFO wrote hostname ci-4386-0-0-w-15e87cee3a.novalocal to /sysroot/etc/hostname Jul 9 09:28:50.635773 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jul 9 09:28:50.636006 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jul 9 09:28:50.642618 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 9 09:28:50.674088 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 9 09:28:50.711752 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (945) Jul 9 09:28:50.719213 kernel: BTRFS info (device vda6): first mount of filesystem f90b1c2f-8ecc-4345-b0e5-5297e348f3bd Jul 9 09:28:50.719283 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 9 09:28:50.722700 kernel: BTRFS info (device vda6): using free-space-tree Jul 9 09:28:50.739424 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
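
coreos-metadata above gave up on the config drive after several "Can't lookup blockdev" retries and took the hostname from the EC2-compatible metadata endpoint instead. A sketch of the equivalent manual steps, using the URL and destination path from the log:

import pathlib
import urllib.request

URL = "http://169.254.169.254/latest/meta-data/hostname"
with urllib.request.urlopen(URL, timeout=5) as resp:
    hostname = resp.read().decode().strip()
# The initrd writes into the real root, which is mounted at /sysroot.
pathlib.Path("/sysroot/etc/hostname").write_text(hostname + "\n")
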
Jul 9 09:28:50.794483 ignition[963]: INFO : Ignition 2.21.0 Jul 9 09:28:50.794483 ignition[963]: INFO : Stage: files Jul 9 09:28:50.797566 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 09:28:50.797566 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 9 09:28:50.797566 ignition[963]: DEBUG : files: compiled without relabeling support, skipping Jul 9 09:28:50.804070 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 9 09:28:50.804070 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 9 09:28:50.804070 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 9 09:28:50.804070 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 9 09:28:50.804070 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 9 09:28:50.803920 unknown[963]: wrote ssh authorized keys file for user: core Jul 9 09:28:50.815204 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 9 09:28:50.815204 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 9 09:28:50.918049 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 9 09:28:53.428768 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 9 09:28:53.428768 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 9 09:28:53.428768 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 9 09:28:53.428768 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 9 09:28:53.428768 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 9 09:28:53.428768 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 9 09:28:53.428768 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 9 09:28:53.428768 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 9 09:28:53.428768 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 9 09:28:53.450591 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 9 09:28:53.450591 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 9 09:28:53.450591 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 9 09:28:53.450591 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 9 09:28:53.450591 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 9 09:28:53.450591 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 9 09:28:54.340982 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 9 09:28:56.422280 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 9 09:28:56.425336 ignition[963]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 9 09:28:56.435453 ignition[963]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 9 09:28:56.452493 ignition[963]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 9 09:28:56.453526 ignition[963]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 9 09:28:56.453526 ignition[963]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 9 09:28:56.453526 ignition[963]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 9 09:28:56.457398 ignition[963]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 9 09:28:56.458307 ignition[963]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 9 09:28:56.458307 ignition[963]: INFO : files: files passed Jul 9 09:28:56.461329 ignition[963]: INFO : Ignition finished successfully Jul 9 09:28:56.465316 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 9 09:28:56.470450 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 9 09:28:56.472049 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 9 09:28:56.503568 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 9 09:28:56.503929 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 9 09:28:56.509559 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 9 09:28:56.509559 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 9 09:28:56.516526 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 9 09:28:56.515828 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 9 09:28:56.517500 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 9 09:28:56.520526 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 9 09:28:56.581289 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 9 09:28:56.581495 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 9 09:28:56.583904 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Jul 9 09:28:56.585995 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 9 09:28:56.588344 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 9 09:28:56.589895 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 9 09:28:56.644114 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 9 09:28:56.650082 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 9 09:28:56.700758 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 9 09:28:56.702902 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 09:28:56.706703 systemd[1]: Stopped target timers.target - Timer Units. Jul 9 09:28:56.711015 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 9 09:28:56.711956 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 9 09:28:56.716960 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 9 09:28:56.719041 systemd[1]: Stopped target basic.target - Basic System. Jul 9 09:28:56.723397 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 9 09:28:56.727124 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 9 09:28:56.730262 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 9 09:28:56.733863 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 9 09:28:56.737479 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 9 09:28:56.740993 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 9 09:28:56.744165 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 9 09:28:56.747035 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 9 09:28:56.750004 systemd[1]: Stopped target swap.target - Swaps. Jul 9 09:28:56.752718 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 9 09:28:56.753212 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 9 09:28:56.756790 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 9 09:28:56.760545 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 09:28:56.765614 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 9 09:28:56.767235 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 09:28:56.770774 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 9 09:28:56.771554 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 9 09:28:56.777149 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 9 09:28:56.777953 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 9 09:28:56.780754 systemd[1]: ignition-files.service: Deactivated successfully. Jul 9 09:28:56.781249 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 9 09:28:56.788448 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 9 09:28:56.796723 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 9 09:28:56.798231 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 9 09:28:56.800971 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Jul 9 09:28:56.804436 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 9 09:28:56.805240 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 9 09:28:56.814326 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 9 09:28:56.815159 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 9 09:28:56.844082 ignition[1017]: INFO : Ignition 2.21.0 Jul 9 09:28:56.844082 ignition[1017]: INFO : Stage: umount Jul 9 09:28:56.847399 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 09:28:56.847399 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 9 09:28:56.846854 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 9 09:28:56.851469 ignition[1017]: INFO : umount: umount passed Jul 9 09:28:56.851469 ignition[1017]: INFO : Ignition finished successfully Jul 9 09:28:56.852999 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 9 09:28:56.853717 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 9 09:28:56.855119 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 9 09:28:56.855247 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 9 09:28:56.857252 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 9 09:28:56.857370 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 9 09:28:56.858039 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 9 09:28:56.858092 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 9 09:28:56.859212 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 9 09:28:56.859316 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 9 09:28:56.860464 systemd[1]: Stopped target network.target - Network. Jul 9 09:28:56.861771 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 9 09:28:56.861901 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 9 09:28:56.863086 systemd[1]: Stopped target paths.target - Path Units. Jul 9 09:28:56.864234 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 9 09:28:56.867699 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 9 09:28:56.868397 systemd[1]: Stopped target slices.target - Slice Units. Jul 9 09:28:56.869794 systemd[1]: Stopped target sockets.target - Socket Units. Jul 9 09:28:56.871300 systemd[1]: iscsid.socket: Deactivated successfully. Jul 9 09:28:56.871369 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 9 09:28:56.872466 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 9 09:28:56.872516 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 9 09:28:56.873714 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 9 09:28:56.873795 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 9 09:28:56.874859 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 9 09:28:56.874985 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 9 09:28:56.876078 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 9 09:28:56.876157 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 9 09:28:56.877647 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 9 09:28:56.879433 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Jul 9 09:28:56.892549 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 9 09:28:56.892810 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 9 09:28:56.895976 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 9 09:28:56.896208 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 9 09:28:56.896917 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 9 09:28:56.896974 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 9 09:28:56.898983 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 9 09:28:56.900931 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 9 09:28:56.900991 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 9 09:28:56.902264 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 09:28:56.903947 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 9 09:28:56.907389 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 9 09:28:56.912593 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 9 09:28:56.915204 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 9 09:28:56.915647 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 09:28:56.922984 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 9 09:28:56.923098 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 9 09:28:56.925294 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 9 09:28:56.925355 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 9 09:28:56.927384 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 9 09:28:56.927519 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 9 09:28:56.929485 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 9 09:28:56.929569 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 9 09:28:56.930901 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 9 09:28:56.930997 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 9 09:28:56.934843 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 9 09:28:56.936433 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 9 09:28:56.936552 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 9 09:28:56.939590 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 9 09:28:56.939681 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 9 09:28:56.941674 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 9 09:28:56.941762 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 9 09:28:56.943073 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 9 09:28:56.943127 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 9 09:28:56.943778 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 9 09:28:56.943825 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jul 9 09:28:56.944506 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 9 09:28:56.944558 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 9 09:28:56.945183 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 9 09:28:56.945233 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 9 09:28:56.948802 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 9 09:28:56.948854 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 09:28:56.953846 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 9 09:28:56.953921 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 9 09:28:56.953970 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jul 9 09:28:56.954012 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 9 09:28:56.954063 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 9 09:28:56.954107 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 9 09:28:56.954589 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 9 09:28:56.955802 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 9 09:28:56.957256 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 9 09:28:56.957367 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 9 09:28:56.959036 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 9 09:28:56.961061 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 9 09:28:56.984864 systemd[1]: Switching root. Jul 9 09:28:57.035131 systemd-journald[212]: Journal stopped Jul 9 09:28:58.723084 systemd-journald[212]: Received SIGTERM from PID 1 (systemd). Jul 9 09:28:58.723192 kernel: SELinux: policy capability network_peer_controls=1 Jul 9 09:28:58.723218 kernel: SELinux: policy capability open_perms=1 Jul 9 09:28:58.723246 kernel: SELinux: policy capability extended_socket_class=1 Jul 9 09:28:58.723263 kernel: SELinux: policy capability always_check_network=0 Jul 9 09:28:58.723274 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 9 09:28:58.723294 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 9 09:28:58.723311 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 9 09:28:58.723322 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 9 09:28:58.723345 kernel: SELinux: policy capability userspace_initial_context=0 Jul 9 09:28:58.723357 kernel: audit: type=1403 audit(1752053337.667:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 9 09:28:58.723385 systemd[1]: Successfully loaded SELinux policy in 79.317ms. Jul 9 09:28:58.723409 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.765ms. Jul 9 09:28:58.723423 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 9 09:28:58.723441 systemd[1]: Detected virtualization kvm. 
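The systemd banner logged above encodes its compile-time options as "+FEATURE"/"-FEATURE" tokens. A purely illustrative way to split that string into enabled and disabled features (the string literal is copied from the log; the code is not from it):

```python
# Illustrative parsing of the feature string printed by systemd above.
FEATURES = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT "
            "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
            "+IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK "
            "+PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ "
            "+ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")

enabled = [tok[1:] for tok in FEATURES.split() if tok.startswith("+")]
disabled = [tok[1:] for tok in FEATURES.split() if tok.startswith("-")]
print("enabled :", ", ".join(enabled))
print("disabled:", ", ".join(disabled))
```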
Jul 9 09:28:58.723454 systemd[1]: Detected architecture x86-64. Jul 9 09:28:58.723466 systemd[1]: Detected first boot. Jul 9 09:28:58.723484 systemd[1]: Hostname set to . Jul 9 09:28:58.723497 systemd[1]: Initializing machine ID from VM UUID. Jul 9 09:28:58.723509 zram_generator::config[1064]: No configuration found. Jul 9 09:28:58.723529 kernel: Guest personality initialized and is inactive Jul 9 09:28:58.723541 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 9 09:28:58.723553 kernel: Initialized host personality Jul 9 09:28:58.723573 kernel: NET: Registered PF_VSOCK protocol family Jul 9 09:28:58.723587 systemd[1]: Populated /etc with preset unit settings. Jul 9 09:28:58.723600 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 9 09:28:58.723613 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 9 09:28:58.724411 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 9 09:28:58.724448 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 9 09:28:58.724467 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 9 09:28:58.724489 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 9 09:28:58.724506 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 9 09:28:58.724519 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 9 09:28:58.724536 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 9 09:28:58.724549 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 9 09:28:58.724562 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 9 09:28:58.724582 systemd[1]: Created slice user.slice - User and Session Slice. Jul 9 09:28:58.724600 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 09:28:58.724614 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 9 09:28:58.724709 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 9 09:28:58.724724 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 9 09:28:58.724737 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 9 09:28:58.724760 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 9 09:28:58.724779 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 9 09:28:58.724791 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 09:28:58.724809 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 9 09:28:58.724821 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 9 09:28:58.724834 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 9 09:28:58.724846 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 9 09:28:58.724859 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 9 09:28:58.724871 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 09:28:58.724884 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Jul 9 09:28:58.724907 systemd[1]: Reached target slices.target - Slice Units. Jul 9 09:28:58.724920 systemd[1]: Reached target swap.target - Swaps. Jul 9 09:28:58.724932 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 9 09:28:58.724945 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 9 09:28:58.724957 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 9 09:28:58.724969 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 9 09:28:58.724987 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 9 09:28:58.725000 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 9 09:28:58.725018 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 9 09:28:58.725037 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 9 09:28:58.725050 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 9 09:28:58.725068 systemd[1]: Mounting media.mount - External Media Directory... Jul 9 09:28:58.725081 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 9 09:28:58.725093 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 9 09:28:58.725106 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 9 09:28:58.725118 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 9 09:28:58.725131 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 9 09:28:58.725151 systemd[1]: Reached target machines.target - Containers. Jul 9 09:28:58.725169 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 9 09:28:58.725181 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 09:28:58.725194 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 9 09:28:58.725206 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 9 09:28:58.725219 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 9 09:28:58.725231 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 9 09:28:58.725243 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 9 09:28:58.725255 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 9 09:28:58.725280 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 9 09:28:58.725293 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 9 09:28:58.725306 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 9 09:28:58.725318 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 9 09:28:58.725331 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 9 09:28:58.725343 systemd[1]: Stopped systemd-fsck-usr.service. Jul 9 09:28:58.725356 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jul 9 09:28:58.725369 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 9 09:28:58.725395 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 9 09:28:58.725408 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 9 09:28:58.725421 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 9 09:28:58.725434 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 9 09:28:58.725446 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 9 09:28:58.725465 systemd[1]: verity-setup.service: Deactivated successfully. Jul 9 09:28:58.725480 systemd[1]: Stopped verity-setup.service. Jul 9 09:28:58.725493 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 9 09:28:58.725506 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 9 09:28:58.725542 systemd-journald[1155]: Collecting audit messages is disabled. Jul 9 09:28:58.725586 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 9 09:28:58.725600 systemd[1]: Mounted media.mount - External Media Directory. Jul 9 09:28:58.725613 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 9 09:28:58.725672 systemd-journald[1155]: Journal started Jul 9 09:28:58.725702 systemd-journald[1155]: Runtime Journal (/run/log/journal/f8df45cdd24945d7a9a91ae444bf579e) is 8M, max 78.5M, 70.5M free. Jul 9 09:28:58.367537 systemd[1]: Queued start job for default target multi-user.target. Jul 9 09:28:58.388241 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 9 09:28:58.388850 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 9 09:28:58.733659 systemd[1]: Started systemd-journald.service - Journal Service. Jul 9 09:28:58.738204 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 9 09:28:58.746823 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 9 09:28:58.748698 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 9 09:28:58.753008 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 9 09:28:58.753585 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 9 09:28:58.755018 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 9 09:28:58.755719 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 9 09:28:58.757120 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 9 09:28:58.758132 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 9 09:28:58.760087 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 9 09:28:58.761676 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 9 09:28:58.762463 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 9 09:28:58.776092 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 9 09:28:58.781496 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 9 09:28:58.784655 kernel: ACPI: bus type drm_connector registered Jul 9 09:28:58.787761 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jul 9 09:28:58.788464 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 9 09:28:58.788505 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 9 09:28:58.789655 kernel: loop: module loaded Jul 9 09:28:58.797864 kernel: fuse: init (API version 7.41) Jul 9 09:28:58.797219 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 9 09:28:58.807823 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 9 09:28:58.809663 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 09:28:58.812882 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 9 09:28:58.815269 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 9 09:28:58.815875 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 9 09:28:58.820377 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 9 09:28:58.823742 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 9 09:28:58.828060 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 9 09:28:58.834870 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 9 09:28:58.842032 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 9 09:28:58.842872 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 9 09:28:58.843535 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 9 09:28:58.844405 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 9 09:28:58.847760 systemd-journald[1155]: Time spent on flushing to /var/log/journal/f8df45cdd24945d7a9a91ae444bf579e is 66.141ms for 967 entries. Jul 9 09:28:58.847760 systemd-journald[1155]: System Journal (/var/log/journal/f8df45cdd24945d7a9a91ae444bf579e) is 8M, max 584.8M, 576.8M free. Jul 9 09:28:58.930306 systemd-journald[1155]: Received client request to flush runtime journal. Jul 9 09:28:58.930360 kernel: loop0: detected capacity change from 0 to 114000 Jul 9 09:28:58.849955 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 9 09:28:58.851746 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 9 09:28:58.851935 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 9 09:28:58.853266 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 9 09:28:58.859148 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 9 09:28:58.865012 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 9 09:28:58.867789 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 9 09:28:58.883547 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 9 09:28:58.885740 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 9 09:28:58.892685 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 9 09:28:58.896739 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 9 09:28:58.928571 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Jul 9 09:28:58.928586 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Jul 9 09:28:58.936223 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 9 09:28:58.938529 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 9 09:28:58.958814 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 9 09:28:58.968197 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 09:28:58.977017 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 9 09:28:59.001665 kernel: loop1: detected capacity change from 0 to 146488 Jul 9 09:28:58.999756 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 9 09:28:59.060838 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 9 09:28:59.065489 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 9 09:28:59.073873 kernel: loop2: detected capacity change from 0 to 8 Jul 9 09:28:59.098678 kernel: loop3: detected capacity change from 0 to 221472 Jul 9 09:28:59.113654 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. Jul 9 09:28:59.114425 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. Jul 9 09:28:59.122294 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 9 09:28:59.154676 kernel: loop4: detected capacity change from 0 to 114000 Jul 9 09:28:59.213673 kernel: loop5: detected capacity change from 0 to 146488 Jul 9 09:28:59.273218 kernel: loop6: detected capacity change from 0 to 8 Jul 9 09:28:59.278674 kernel: loop7: detected capacity change from 0 to 221472 Jul 9 09:28:59.356841 (sd-merge)[1227]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jul 9 09:28:59.358041 (sd-merge)[1227]: Merged extensions into '/usr'. Jul 9 09:28:59.364578 systemd[1]: Reload requested from client PID 1194 ('systemd-sysext') (unit systemd-sysext.service)... Jul 9 09:28:59.364609 systemd[1]: Reloading... Jul 9 09:28:59.456659 zram_generator::config[1252]: No configuration found. Jul 9 09:28:59.611976 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 09:28:59.718354 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 9 09:28:59.718510 systemd[1]: Reloading finished in 353 ms. Jul 9 09:28:59.740475 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 9 09:28:59.748191 systemd[1]: Starting ensure-sysext.service... Jul 9 09:28:59.759201 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 9 09:28:59.775502 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 9 09:28:59.779476 systemd[1]: Reload requested from client PID 1308 ('systemctl') (unit ensure-sysext.service)... Jul 9 09:28:59.779561 systemd[1]: Reloading... Jul 9 09:28:59.812261 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 9 09:28:59.812301 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
Jul 9 09:28:59.812570 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 9 09:28:59.812863 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 9 09:28:59.813771 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 9 09:28:59.814066 systemd-tmpfiles[1309]: ACLs are not supported, ignoring. Jul 9 09:28:59.814120 systemd-tmpfiles[1309]: ACLs are not supported, ignoring. Jul 9 09:28:59.820693 systemd-tmpfiles[1309]: Detected autofs mount point /boot during canonicalization of boot. Jul 9 09:28:59.820704 systemd-tmpfiles[1309]: Skipping /boot Jul 9 09:28:59.829552 systemd-tmpfiles[1309]: Detected autofs mount point /boot during canonicalization of boot. Jul 9 09:28:59.829568 systemd-tmpfiles[1309]: Skipping /boot Jul 9 09:28:59.892844 zram_generator::config[1354]: No configuration found. Jul 9 09:28:59.900661 ldconfig[1189]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 9 09:28:59.993006 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 09:29:00.096851 systemd[1]: Reloading finished in 316 ms. Jul 9 09:29:00.108336 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 9 09:29:00.115085 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 9 09:29:00.132775 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 9 09:29:00.135933 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 9 09:29:00.138158 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 9 09:29:00.142009 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 9 09:29:00.146098 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 09:29:00.152974 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 9 09:29:00.159353 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 9 09:29:00.159540 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 09:29:00.161886 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 9 09:29:00.171234 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 9 09:29:00.176153 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 9 09:29:00.177762 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 09:29:00.177892 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 09:29:00.178005 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 9 09:29:00.182142 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 9 09:29:00.182581 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 09:29:00.182896 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 09:29:00.183044 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 09:29:00.186920 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 9 09:29:00.187523 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 9 09:29:00.192459 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 9 09:29:00.192718 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 09:29:00.195808 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 9 09:29:00.197143 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 09:29:00.197268 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 09:29:00.197436 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 9 09:29:00.205547 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 9 09:29:00.206546 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 9 09:29:00.208459 systemd[1]: Finished ensure-sysext.service. Jul 9 09:29:00.219311 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 9 09:29:00.233343 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 9 09:29:00.238882 systemd-udevd[1398]: Using default interface naming scheme 'v255'. Jul 9 09:29:00.254571 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 9 09:29:00.259966 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 9 09:29:00.261951 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 9 09:29:00.267013 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 9 09:29:00.271127 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 9 09:29:00.273177 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 9 09:29:00.273401 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 9 09:29:00.274448 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 9 09:29:00.277469 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jul 9 09:29:00.277867 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 9 09:29:00.292188 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 09:29:00.297336 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 9 09:29:00.323101 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 9 09:29:00.339850 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 9 09:29:00.341205 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 9 09:29:00.345265 augenrules[1451]: No rules Jul 9 09:29:00.345867 systemd[1]: audit-rules.service: Deactivated successfully. Jul 9 09:29:00.346764 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 9 09:29:00.357129 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 9 09:29:00.479142 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 9 09:29:00.552359 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 9 09:29:00.555683 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 9 09:29:00.572661 kernel: mousedev: PS/2 mouse device common for all mice Jul 9 09:29:00.590965 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 9 09:29:00.608696 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 9 09:29:00.659057 systemd-networkd[1435]: lo: Link UP Jul 9 09:29:00.659663 systemd-networkd[1435]: lo: Gained carrier Jul 9 09:29:00.660169 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 9 09:29:00.661514 systemd-networkd[1435]: Enumeration completed Jul 9 09:29:00.661747 systemd[1]: Reached target time-set.target - System Time Set. Jul 9 09:29:00.662659 systemd-timesyncd[1413]: No network connectivity, watching for changes. Jul 9 09:29:00.663746 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 9 09:29:00.664126 systemd-networkd[1435]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 09:29:00.664451 systemd-networkd[1435]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 9 09:29:00.664666 kernel: ACPI: button: Power Button [PWRF] Jul 9 09:29:00.665950 systemd-networkd[1435]: eth0: Link UP Jul 9 09:29:00.666268 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 9 09:29:00.666575 systemd-networkd[1435]: eth0: Gained carrier Jul 9 09:29:00.667686 systemd-networkd[1435]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 09:29:00.670077 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jul 9 09:29:00.676668 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jul 9 09:29:00.685708 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 9 09:29:00.689763 systemd-networkd[1435]: eth0: DHCPv4 address 172.24.4.7/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jul 9 09:29:00.690761 systemd-timesyncd[1413]: Network configuration changed, trying to establish connection. Jul 9 09:29:00.709178 systemd-resolved[1397]: Positive Trust Anchors: Jul 9 09:29:00.710010 systemd-resolved[1397]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 9 09:29:00.710056 systemd-resolved[1397]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 9 09:29:00.716168 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 9 09:29:00.718913 systemd-resolved[1397]: Using system hostname 'ci-4386-0-0-w-15e87cee3a.novalocal'. Jul 9 09:29:00.720890 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 9 09:29:00.721746 systemd[1]: Reached target network.target - Network. Jul 9 09:29:00.722816 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 9 09:29:00.723937 systemd[1]: Reached target sysinit.target - System Initialization. Jul 9 09:29:00.724535 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 9 09:29:00.725852 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 9 09:29:00.726746 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 9 09:29:00.728855 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 9 09:29:00.729464 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 9 09:29:00.730045 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 9 09:29:00.730585 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 9 09:29:00.730616 systemd[1]: Reached target paths.target - Path Units. Jul 9 09:29:00.731094 systemd[1]: Reached target timers.target - Timer Units. Jul 9 09:29:00.732525 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 9 09:29:00.735140 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 9 09:29:00.738582 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 9 09:29:00.740227 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 9 09:29:00.741405 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 9 09:29:00.748465 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 9 09:29:00.749303 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
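systemd-networkd above reports eth0 acquiring 172.24.4.7/24 with gateway 172.24.4.1 via DHCPv4. A small sanity check of that lease using Python's standard ipaddress module (the address values come from the log; the code itself is only illustrative):

```python
# Illustrative check of the DHCPv4 lease reported above.
import ipaddress

iface = ipaddress.ip_interface("172.24.4.7/24")   # address from the log
gateway = ipaddress.ip_address("172.24.4.1")      # gateway from the log

assert gateway in iface.network                     # gateway sits inside the /24
print("network:  ", iface.network)                  # 172.24.4.0/24
print("broadcast:", iface.network.broadcast_address)  # 172.24.4.255
print("usable hosts:", iface.network.num_addresses - 2)
```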
Jul 9 09:29:00.750977 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 9 09:29:00.753136 systemd[1]: Reached target sockets.target - Socket Units. Jul 9 09:29:00.754114 systemd[1]: Reached target basic.target - Basic System. Jul 9 09:29:00.754703 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 9 09:29:00.754733 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 9 09:29:00.757290 systemd[1]: Starting containerd.service - containerd container runtime... Jul 9 09:29:00.760141 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 9 09:29:00.762542 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 9 09:29:00.768399 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 9 09:29:00.774918 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 9 09:29:00.781943 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 9 09:29:00.782531 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 9 09:29:00.785659 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 9 09:29:00.788698 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 9 09:29:00.795199 jq[1512]: false Jul 9 09:29:00.795939 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 9 09:29:00.804714 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 9 09:29:00.808869 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 9 09:29:00.813883 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 9 09:29:00.823431 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 9 09:29:00.824894 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 9 09:29:00.825448 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 9 09:29:00.826509 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Refreshing passwd entry cache Jul 9 09:29:00.828662 oslogin_cache_refresh[1515]: Refreshing passwd entry cache Jul 9 09:29:00.829903 systemd[1]: Starting update-engine.service - Update Engine... Jul 9 09:29:00.837873 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 9 09:29:00.840959 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Failure getting users, quitting Jul 9 09:29:00.840959 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 9 09:29:00.840959 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Refreshing group entry cache Jul 9 09:29:00.840244 oslogin_cache_refresh[1515]: Failure getting users, quitting Jul 9 09:29:00.840264 oslogin_cache_refresh[1515]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 9 09:29:00.840314 oslogin_cache_refresh[1515]: Refreshing group entry cache Jul 9 09:29:00.842437 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jul 9 09:29:00.844448 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 9 09:29:00.845866 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 9 09:29:00.848654 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Failure getting groups, quitting Jul 9 09:29:00.848654 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 9 09:29:00.846695 oslogin_cache_refresh[1515]: Failure getting groups, quitting Jul 9 09:29:00.846708 oslogin_cache_refresh[1515]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 9 09:29:00.849800 systemd-timesyncd[1413]: Contacted time server 137.110.222.27:123 (1.flatcar.pool.ntp.org). Jul 9 09:29:00.849852 systemd-timesyncd[1413]: Initial clock synchronization to Wed 2025-07-09 09:29:00.824567 UTC. Jul 9 09:29:00.850179 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 9 09:29:00.858848 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 9 09:29:00.864326 systemd[1]: motdgen.service: Deactivated successfully. Jul 9 09:29:00.864605 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 9 09:29:00.873529 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 9 09:29:00.875329 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 9 09:29:00.892851 extend-filesystems[1513]: Found /dev/vda6 Jul 9 09:29:00.904003 jq[1527]: true Jul 9 09:29:00.909684 extend-filesystems[1513]: Found /dev/vda9 Jul 9 09:29:00.918663 update_engine[1525]: I20250709 09:29:00.918237 1525 main.cc:92] Flatcar Update Engine starting Jul 9 09:29:00.923663 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jul 9 09:29:00.925768 extend-filesystems[1513]: Checking size of /dev/vda9 Jul 9 09:29:00.996756 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jul 9 09:29:00.997003 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Jul 9 09:29:00.997026 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Jul 9 09:29:00.997045 kernel: Console: switching to colour dummy device 80x25 Jul 9 09:29:00.997063 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jul 9 09:29:00.997078 kernel: [drm] features: -context_init Jul 9 09:29:00.949242 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 9 09:29:00.949082 dbus-daemon[1510]: [system] SELinux support is enabled Jul 9 09:29:00.997382 tar[1534]: linux-amd64/helm Jul 9 09:29:00.997556 update_engine[1525]: I20250709 09:29:00.958165 1525 update_check_scheduler.cc:74] Next update check in 4m17s Jul 9 09:29:00.997594 extend-filesystems[1513]: Resized partition /dev/vda9 Jul 9 09:29:01.000695 extend-filesystems[1557]: resize2fs 1.47.2 (1-Jan-2025) Jul 9 09:29:01.000695 extend-filesystems[1557]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 9 09:29:01.000695 extend-filesystems[1557]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 9 09:29:01.000695 extend-filesystems[1557]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. Jul 9 09:29:01.005692 kernel: [drm] number of scanouts: 1 Jul 9 09:29:01.005728 kernel: [drm] number of cap sets: 0 Jul 9 09:29:01.005750 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Jul 9 09:29:00.999800 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Jul 9 09:29:01.005831 extend-filesystems[1513]: Resized filesystem in /dev/vda9 Jul 9 09:29:01.012997 (ntainerd)[1547]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 9 09:29:01.013075 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 9 09:29:01.016249 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 9 09:29:01.016429 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 9 09:29:01.016599 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 9 09:29:01.016903 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 9 09:29:01.034820 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 09:29:01.035953 jq[1548]: true Jul 9 09:29:01.039736 systemd[1]: Started update-engine.service - Update Engine. Jul 9 09:29:01.057285 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 9 09:29:01.137304 systemd-logind[1521]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 9 09:29:01.138615 systemd-logind[1521]: New seat seat0. Jul 9 09:29:01.143458 systemd[1]: Started systemd-logind.service - User Login Management. Jul 9 09:29:01.178305 bash[1579]: Updated "/home/core/.ssh/authorized_keys" Jul 9 09:29:01.179416 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 9 09:29:01.185996 systemd[1]: Starting sshkeys.service... Jul 9 09:29:01.254277 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 9 09:29:01.257788 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 9 09:29:01.258429 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 9 09:29:01.258663 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 09:29:01.259328 systemd-logind[1521]: Watching system buttons on /dev/input/event2 (Power Button) Jul 9 09:29:01.272972 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 9 09:29:01.288822 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 09:29:01.324679 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 9 09:29:01.470986 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
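The resize reported above grows the ext4 filesystem on /dev/vda9 online from 1617920 to 2014203 blocks at a 4 KiB block size (numbers from the kernel and resize2fs lines). The arithmetic behind those figures, as an illustrative sketch:

```python
# Illustrative arithmetic for the ext4 online resize reported above.
OLD_BLOCKS, NEW_BLOCKS, BLOCK_SIZE = 1_617_920, 2_014_203, 4096  # from the log

def gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {gib(OLD_BLOCKS):.2f} GiB")               # ~6.17 GiB
print(f"after:  {gib(NEW_BLOCKS):.2f} GiB")               # ~7.68 GiB
print(f"gained: {gib(NEW_BLOCKS - OLD_BLOCKS):.2f} GiB")  # ~1.51 GiB
```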
Jul 9 09:29:01.498491 locksmithd[1563]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 9 09:29:01.519524 containerd[1547]: time="2025-07-09T09:29:01Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 9 09:29:01.523068 containerd[1547]: time="2025-07-09T09:29:01.523035156Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 9 09:29:01.554633 containerd[1547]: time="2025-07-09T09:29:01.552129580Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.472µs" Jul 9 09:29:01.556661 containerd[1547]: time="2025-07-09T09:29:01.556639448Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 9 09:29:01.556752 containerd[1547]: time="2025-07-09T09:29:01.556734247Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 9 09:29:01.556971 containerd[1547]: time="2025-07-09T09:29:01.556950824Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 9 09:29:01.559650 containerd[1547]: time="2025-07-09T09:29:01.558645262Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 9 09:29:01.559650 containerd[1547]: time="2025-07-09T09:29:01.558702069Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 9 09:29:01.559650 containerd[1547]: time="2025-07-09T09:29:01.558774051Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 9 09:29:01.559650 containerd[1547]: time="2025-07-09T09:29:01.558790456Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 9 09:29:01.559650 containerd[1547]: time="2025-07-09T09:29:01.559018336Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 9 09:29:01.559650 containerd[1547]: time="2025-07-09T09:29:01.559035371Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 9 09:29:01.559650 containerd[1547]: time="2025-07-09T09:29:01.559047185Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 9 09:29:01.559650 containerd[1547]: time="2025-07-09T09:29:01.559056127Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 9 09:29:01.559650 containerd[1547]: time="2025-07-09T09:29:01.559143544Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 9 09:29:01.559650 containerd[1547]: time="2025-07-09T09:29:01.559345626Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 9 09:29:01.559650 containerd[1547]: time="2025-07-09T09:29:01.559374684Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 9 09:29:01.559898 containerd[1547]: time="2025-07-09T09:29:01.559386728Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 9 09:29:01.559898 containerd[1547]: time="2025-07-09T09:29:01.559422399Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 9 09:29:01.561686 containerd[1547]: time="2025-07-09T09:29:01.561659523Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 9 09:29:01.561804 containerd[1547]: time="2025-07-09T09:29:01.561786952Z" level=info msg="metadata content store policy set" policy=shared Jul 9 09:29:01.572881 containerd[1547]: time="2025-07-09T09:29:01.572857576Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 9 09:29:01.572985 containerd[1547]: time="2025-07-09T09:29:01.572967479Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 9 09:29:01.573057 containerd[1547]: time="2025-07-09T09:29:01.573041382Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 9 09:29:01.573144 containerd[1547]: time="2025-07-09T09:29:01.573125518Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 9 09:29:01.573221 containerd[1547]: time="2025-07-09T09:29:01.573203922Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 9 09:29:01.573287 containerd[1547]: time="2025-07-09T09:29:01.573273323Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 9 09:29:01.573356 containerd[1547]: time="2025-07-09T09:29:01.573341284Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 9 09:29:01.573426 containerd[1547]: time="2025-07-09T09:29:01.573410495Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 9 09:29:01.573485 containerd[1547]: time="2025-07-09T09:29:01.573472133Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 9 09:29:01.574660 containerd[1547]: time="2025-07-09T09:29:01.574643331Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 9 09:29:01.574749 containerd[1547]: time="2025-07-09T09:29:01.574709192Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 9 09:29:01.574825 containerd[1547]: time="2025-07-09T09:29:01.574806761Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 9 09:29:01.574985 containerd[1547]: time="2025-07-09T09:29:01.574965910Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 9 09:29:01.575069 containerd[1547]: time="2025-07-09T09:29:01.575053777Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 9 09:29:01.575148 containerd[1547]: time="2025-07-09T09:29:01.575131291Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 9 09:29:01.575209 
containerd[1547]: time="2025-07-09T09:29:01.575194931Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 9 09:29:01.575266 containerd[1547]: time="2025-07-09T09:29:01.575253188Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 9 09:29:01.575325 containerd[1547]: time="2025-07-09T09:29:01.575311306Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 9 09:29:01.575404 containerd[1547]: time="2025-07-09T09:29:01.575386289Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 9 09:29:01.575477 containerd[1547]: time="2025-07-09T09:29:01.575462522Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 9 09:29:01.575538 containerd[1547]: time="2025-07-09T09:29:01.575524452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 9 09:29:01.575606 containerd[1547]: time="2025-07-09T09:29:01.575589531Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 9 09:29:01.575706 containerd[1547]: time="2025-07-09T09:29:01.575690172Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 9 09:29:01.575818 containerd[1547]: time="2025-07-09T09:29:01.575801436Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 9 09:29:01.578352 containerd[1547]: time="2025-07-09T09:29:01.577651981Z" level=info msg="Start snapshots syncer" Jul 9 09:29:01.578352 containerd[1547]: time="2025-07-09T09:29:01.577692004Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 9 09:29:01.578352 containerd[1547]: time="2025-07-09T09:29:01.577977911Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 9 09:29:01.578517 containerd[1547]: time="2025-07-09T09:29:01.578038200Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 9 09:29:01.578517 containerd[1547]: time="2025-07-09T09:29:01.578115593Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 9 09:29:01.578517 containerd[1547]: time="2025-07-09T09:29:01.578243292Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 9 09:29:01.578517 containerd[1547]: time="2025-07-09T09:29:01.578265969Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 9 09:29:01.578517 containerd[1547]: time="2025-07-09T09:29:01.578276092Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 9 09:29:01.578517 containerd[1547]: time="2025-07-09T09:29:01.578287936Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 9 09:29:01.578517 containerd[1547]: time="2025-07-09T09:29:01.578343323Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 9 09:29:01.578517 containerd[1547]: time="2025-07-09T09:29:01.578388456Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 9 09:29:01.578517 containerd[1547]: time="2025-07-09T09:29:01.578403372Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 9 09:29:01.578517 containerd[1547]: time="2025-07-09T09:29:01.578433821Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 9 09:29:01.578517 containerd[1547]: 
time="2025-07-09T09:29:01.578447265Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 9 09:29:01.578517 containerd[1547]: time="2025-07-09T09:29:01.578460649Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 9 09:29:01.578517 containerd[1547]: time="2025-07-09T09:29:01.578504772Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 9 09:29:01.578884 containerd[1547]: time="2025-07-09T09:29:01.578521807Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 9 09:29:01.578884 containerd[1547]: time="2025-07-09T09:29:01.578533451Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 9 09:29:01.578884 containerd[1547]: time="2025-07-09T09:29:01.578544224Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 9 09:29:01.578884 containerd[1547]: time="2025-07-09T09:29:01.578554547Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 9 09:29:01.578884 containerd[1547]: time="2025-07-09T09:29:01.578565511Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 9 09:29:01.578884 containerd[1547]: time="2025-07-09T09:29:01.578576464Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 9 09:29:01.578884 containerd[1547]: time="2025-07-09T09:29:01.578593560Z" level=info msg="runtime interface created" Jul 9 09:29:01.578884 containerd[1547]: time="2025-07-09T09:29:01.578599441Z" level=info msg="created NRI interface" Jul 9 09:29:01.578884 containerd[1547]: time="2025-07-09T09:29:01.578608293Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 9 09:29:01.578884 containerd[1547]: time="2025-07-09T09:29:01.578642574Z" level=info msg="Connect containerd service" Jul 9 09:29:01.578884 containerd[1547]: time="2025-07-09T09:29:01.578676555Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 9 09:29:01.579531 containerd[1547]: time="2025-07-09T09:29:01.579500577Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 9 09:29:01.824124 sshd_keygen[1552]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 9 09:29:01.835871 tar[1534]: linux-amd64/LICENSE Jul 9 09:29:01.836078 tar[1534]: linux-amd64/README.md Jul 9 09:29:01.845473 containerd[1547]: time="2025-07-09T09:29:01.845435757Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jul 9 09:29:01.845736 containerd[1547]: time="2025-07-09T09:29:01.845571329Z" level=info msg="Start subscribing containerd event" Jul 9 09:29:01.845817 containerd[1547]: time="2025-07-09T09:29:01.845760917Z" level=info msg="Start recovering state" Jul 9 09:29:01.845875 containerd[1547]: time="2025-07-09T09:29:01.845851574Z" level=info msg="Start event monitor" Jul 9 09:29:01.845875 containerd[1547]: time="2025-07-09T09:29:01.845872471Z" level=info msg="Start cni network conf syncer for default" Jul 9 09:29:01.845940 containerd[1547]: time="2025-07-09T09:29:01.845881914Z" level=info msg="Start streaming server" Jul 9 09:29:01.845940 containerd[1547]: time="2025-07-09T09:29:01.845891127Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 9 09:29:01.845940 containerd[1547]: time="2025-07-09T09:29:01.845899050Z" level=info msg="runtime interface starting up..." Jul 9 09:29:01.845940 containerd[1547]: time="2025-07-09T09:29:01.845905051Z" level=info msg="starting plugins..." Jul 9 09:29:01.845940 containerd[1547]: time="2025-07-09T09:29:01.845919816Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 9 09:29:01.846050 containerd[1547]: time="2025-07-09T09:29:01.845716123Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 9 09:29:01.846719 containerd[1547]: time="2025-07-09T09:29:01.846082525Z" level=info msg="containerd successfully booted in 0.326978s" Jul 9 09:29:01.846171 systemd[1]: Started containerd.service - containerd container runtime. Jul 9 09:29:01.852297 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 9 09:29:01.870538 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 9 09:29:01.873069 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 9 09:29:01.891982 systemd[1]: issuegen.service: Deactivated successfully. Jul 9 09:29:01.892209 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 9 09:29:01.895308 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 9 09:29:01.915203 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 9 09:29:01.923381 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 9 09:29:01.928514 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 9 09:29:01.929878 systemd[1]: Reached target getty.target - Login Prompts. Jul 9 09:29:02.165212 systemd-networkd[1435]: eth0: Gained IPv6LL Jul 9 09:29:02.170982 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 9 09:29:02.175399 systemd[1]: Reached target network-online.target - Network is Online. Jul 9 09:29:02.181007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 09:29:02.185840 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 9 09:29:02.246566 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 9 09:29:02.480783 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 9 09:29:02.493033 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 9 09:29:04.449184 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 9 09:29:04.451903 systemd[1]: Started sshd@0-172.24.4.7:22-172.24.4.1:38948.service - OpenSSH per-connection server daemon (172.24.4.1:38948). Jul 9 09:29:04.471426 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
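Note: the earlier containerd error "failed to load cni during init ... no network config found in /etc/cni/net.d" only means that directory is still empty at first boot; the CRI plugin re-reads it once a network add-on installs a config. Purely as an illustration (the real CNI provider for this node is not shown in the log), a placeholder config could be written like this, assuming a root shell; the file name 99-loopback.conf is made up for the example:

    mkdir -p /etc/cni/net.d
    # single-line JSON keeps the example self-contained; "loopback" is a standard CNI plugin type
    printf '{ "cniVersion": "0.4.0", "name": "lo", "type": "loopback" }\n' > /etc/cni/net.d/99-loopback.conf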
Jul 9 09:29:04.484949 (kubelet)[1660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 09:29:04.499417 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 9 09:29:04.513057 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 9 09:29:05.455499 sshd[1658]: Accepted publickey for core from 172.24.4.1 port 38948 ssh2: RSA SHA256:7Z60MgsH9FU4JbF/SQ9a6BolKSBdaEiHKkJAV9eqiyI Jul 9 09:29:05.457289 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:29:05.475803 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 9 09:29:05.478903 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 9 09:29:05.507974 systemd-logind[1521]: New session 1 of user core. Jul 9 09:29:05.529201 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 9 09:29:05.535994 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 9 09:29:05.547962 (systemd)[1672]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 9 09:29:05.551515 systemd-logind[1521]: New session c1 of user core. Jul 9 09:29:05.700840 kubelet[1660]: E0709 09:29:05.700731 1660 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 09:29:05.703182 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 09:29:05.703353 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 09:29:05.704291 systemd[1]: kubelet.service: Consumed 2.404s CPU time, 266.5M memory peak. Jul 9 09:29:05.742108 systemd[1672]: Queued start job for default target default.target. Jul 9 09:29:05.751702 systemd[1672]: Created slice app.slice - User Application Slice. Jul 9 09:29:05.751731 systemd[1672]: Reached target paths.target - Paths. Jul 9 09:29:05.751772 systemd[1672]: Reached target timers.target - Timers. Jul 9 09:29:05.754721 systemd[1672]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 9 09:29:05.766613 systemd[1672]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 9 09:29:05.766763 systemd[1672]: Reached target sockets.target - Sockets. Jul 9 09:29:05.766804 systemd[1672]: Reached target basic.target - Basic System. Jul 9 09:29:05.766840 systemd[1672]: Reached target default.target - Main User Target. Jul 9 09:29:05.766868 systemd[1672]: Startup finished in 199ms. Jul 9 09:29:05.767576 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 9 09:29:05.779323 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 9 09:29:06.278574 systemd[1]: Started sshd@1-172.24.4.7:22-172.24.4.1:38958.service - OpenSSH per-connection server daemon (172.24.4.1:38958). Jul 9 09:29:07.003805 login[1638]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 9 09:29:07.005615 login[1637]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 9 09:29:07.019706 systemd-logind[1521]: New session 2 of user core. Jul 9 09:29:07.028041 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 9 09:29:07.032681 systemd-logind[1521]: New session 3 of user core. 
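Note: the kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the normal state of a node that has not yet run kubeadm init or kubeadm join; kubeadm writes that file, and until then systemd keeps rescheduling the unit, which is why the restart counter keeps climbing later in this log. A quick way to confirm the situation from a shell on the node, as a sketch:

    # the kubelet config that kubeadm would normally generate is still absent
    ls -l /var/lib/kubelet/config.yaml
    # inspect the restart counter and the last exit status of the unit
    systemctl show kubelet -p NRestarts -p ExecMainStatus
    journalctl -u kubelet -n 20 --no-pager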
Jul 9 09:29:07.041779 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 9 09:29:07.987025 sshd[1684]: Accepted publickey for core from 172.24.4.1 port 38958 ssh2: RSA SHA256:7Z60MgsH9FU4JbF/SQ9a6BolKSBdaEiHKkJAV9eqiyI Jul 9 09:29:07.990025 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:29:08.005760 systemd-logind[1521]: New session 4 of user core. Jul 9 09:29:08.014125 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 9 09:29:08.527802 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 9 09:29:08.537719 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 9 09:29:08.548676 coreos-metadata[1588]: Jul 09 09:29:08.548 WARN failed to locate config-drive, using the metadata service API instead Jul 9 09:29:08.559600 coreos-metadata[1509]: Jul 09 09:29:08.559 WARN failed to locate config-drive, using the metadata service API instead Jul 9 09:29:08.604077 coreos-metadata[1509]: Jul 09 09:29:08.603 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jul 9 09:29:08.605335 coreos-metadata[1588]: Jul 09 09:29:08.604 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jul 9 09:29:08.628139 sshd[1716]: Connection closed by 172.24.4.1 port 38958 Jul 9 09:29:08.631375 sshd-session[1684]: pam_unix(sshd:session): session closed for user core Jul 9 09:29:08.658720 systemd[1]: sshd@1-172.24.4.7:22-172.24.4.1:38958.service: Deactivated successfully. Jul 9 09:29:08.663682 systemd[1]: session-4.scope: Deactivated successfully. Jul 9 09:29:08.666409 systemd-logind[1521]: Session 4 logged out. Waiting for processes to exit. Jul 9 09:29:08.674031 systemd[1]: Started sshd@2-172.24.4.7:22-172.24.4.1:38966.service - OpenSSH per-connection server daemon (172.24.4.1:38966). Jul 9 09:29:08.677741 systemd-logind[1521]: Removed session 4. Jul 9 09:29:08.807963 coreos-metadata[1588]: Jul 09 09:29:08.807 INFO Fetch successful Jul 9 09:29:08.807963 coreos-metadata[1588]: Jul 09 09:29:08.807 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 9 09:29:08.821914 coreos-metadata[1588]: Jul 09 09:29:08.821 INFO Fetch successful Jul 9 09:29:08.828781 unknown[1588]: wrote ssh authorized keys file for user: core Jul 9 09:29:08.893705 update-ssh-keys[1729]: Updated "/home/core/.ssh/authorized_keys" Jul 9 09:29:08.896868 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 9 09:29:08.901185 systemd[1]: Finished sshkeys.service. 
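Note: the coreos-metadata warnings above, together with the repeated "/dev/disk/by-label/config-2: Can't lookup blockdev" kernel messages, show the agent falling back from a config-drive to the metadata service at 169.254.169.254. The same endpoints it fetches can be queried by hand when debugging key provisioning; a sketch, assuming the instance network is already up:

    # EC2-compatible path used for the SSH public key
    curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key
    # OpenStack-native document used for hostname and instance data
    curl -s http://169.254.169.254/openstack/2012-08-10/meta_data.json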
Jul 9 09:29:09.007170 coreos-metadata[1509]: Jul 09 09:29:09.007 INFO Fetch successful Jul 9 09:29:09.007170 coreos-metadata[1509]: Jul 09 09:29:09.007 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 9 09:29:09.022910 coreos-metadata[1509]: Jul 09 09:29:09.022 INFO Fetch successful Jul 9 09:29:09.022910 coreos-metadata[1509]: Jul 09 09:29:09.022 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jul 9 09:29:09.036477 coreos-metadata[1509]: Jul 09 09:29:09.036 INFO Fetch successful Jul 9 09:29:09.036477 coreos-metadata[1509]: Jul 09 09:29:09.036 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jul 9 09:29:09.049676 coreos-metadata[1509]: Jul 09 09:29:09.049 INFO Fetch successful Jul 9 09:29:09.050083 coreos-metadata[1509]: Jul 09 09:29:09.049 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jul 9 09:29:09.063356 coreos-metadata[1509]: Jul 09 09:29:09.063 INFO Fetch successful Jul 9 09:29:09.063356 coreos-metadata[1509]: Jul 09 09:29:09.063 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jul 9 09:29:09.076278 coreos-metadata[1509]: Jul 09 09:29:09.076 INFO Fetch successful Jul 9 09:29:09.128188 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 9 09:29:09.130178 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 9 09:29:09.131111 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 9 09:29:09.132580 systemd[1]: Startup finished in 3.799s (kernel) + 18.904s (initrd) + 11.541s (userspace) = 34.244s. Jul 9 09:29:10.073411 sshd[1726]: Accepted publickey for core from 172.24.4.1 port 38966 ssh2: RSA SHA256:7Z60MgsH9FU4JbF/SQ9a6BolKSBdaEiHKkJAV9eqiyI Jul 9 09:29:10.076508 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:29:10.090747 systemd-logind[1521]: New session 5 of user core. Jul 9 09:29:10.101989 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 9 09:29:10.714694 sshd[1738]: Connection closed by 172.24.4.1 port 38966 Jul 9 09:29:10.714700 sshd-session[1726]: pam_unix(sshd:session): session closed for user core Jul 9 09:29:10.722335 systemd[1]: sshd@2-172.24.4.7:22-172.24.4.1:38966.service: Deactivated successfully. Jul 9 09:29:10.726006 systemd[1]: session-5.scope: Deactivated successfully. Jul 9 09:29:10.729471 systemd-logind[1521]: Session 5 logged out. Waiting for processes to exit. Jul 9 09:29:10.733217 systemd-logind[1521]: Removed session 5. Jul 9 09:29:15.737510 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 9 09:29:15.741011 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 09:29:16.182969 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 9 09:29:16.201608 (kubelet)[1751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 09:29:16.336759 kubelet[1751]: E0709 09:29:16.336542 1751 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 09:29:16.344684 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 09:29:16.345027 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 09:29:16.346095 systemd[1]: kubelet.service: Consumed 415ms CPU time, 108.7M memory peak. Jul 9 09:29:20.733148 systemd[1]: Started sshd@3-172.24.4.7:22-172.24.4.1:47048.service - OpenSSH per-connection server daemon (172.24.4.1:47048). Jul 9 09:29:21.746070 sshd[1759]: Accepted publickey for core from 172.24.4.1 port 47048 ssh2: RSA SHA256:7Z60MgsH9FU4JbF/SQ9a6BolKSBdaEiHKkJAV9eqiyI Jul 9 09:29:21.748983 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:29:21.760749 systemd-logind[1521]: New session 6 of user core. Jul 9 09:29:21.765961 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 9 09:29:22.377743 sshd[1762]: Connection closed by 172.24.4.1 port 47048 Jul 9 09:29:22.378767 sshd-session[1759]: pam_unix(sshd:session): session closed for user core Jul 9 09:29:22.394754 systemd[1]: sshd@3-172.24.4.7:22-172.24.4.1:47048.service: Deactivated successfully. Jul 9 09:29:22.400218 systemd[1]: session-6.scope: Deactivated successfully. Jul 9 09:29:22.402600 systemd-logind[1521]: Session 6 logged out. Waiting for processes to exit. Jul 9 09:29:22.409346 systemd[1]: Started sshd@4-172.24.4.7:22-172.24.4.1:47056.service - OpenSSH per-connection server daemon (172.24.4.1:47056). Jul 9 09:29:22.412073 systemd-logind[1521]: Removed session 6. Jul 9 09:29:23.530833 sshd[1768]: Accepted publickey for core from 172.24.4.1 port 47056 ssh2: RSA SHA256:7Z60MgsH9FU4JbF/SQ9a6BolKSBdaEiHKkJAV9eqiyI Jul 9 09:29:23.533740 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:29:23.546746 systemd-logind[1521]: New session 7 of user core. Jul 9 09:29:23.553916 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 9 09:29:24.270408 sshd[1771]: Connection closed by 172.24.4.1 port 47056 Jul 9 09:29:24.271703 sshd-session[1768]: pam_unix(sshd:session): session closed for user core Jul 9 09:29:24.285039 systemd[1]: sshd@4-172.24.4.7:22-172.24.4.1:47056.service: Deactivated successfully. Jul 9 09:29:24.288373 systemd[1]: session-7.scope: Deactivated successfully. Jul 9 09:29:24.290312 systemd-logind[1521]: Session 7 logged out. Waiting for processes to exit. Jul 9 09:29:24.297201 systemd[1]: Started sshd@5-172.24.4.7:22-172.24.4.1:52908.service - OpenSSH per-connection server daemon (172.24.4.1:52908). Jul 9 09:29:24.299271 systemd-logind[1521]: Removed session 7. Jul 9 09:29:25.452418 sshd[1777]: Accepted publickey for core from 172.24.4.1 port 52908 ssh2: RSA SHA256:7Z60MgsH9FU4JbF/SQ9a6BolKSBdaEiHKkJAV9eqiyI Jul 9 09:29:25.455328 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:29:25.468748 systemd-logind[1521]: New session 8 of user core. 
Jul 9 09:29:25.475990 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 9 09:29:26.105199 sshd[1780]: Connection closed by 172.24.4.1 port 52908 Jul 9 09:29:26.104893 sshd-session[1777]: pam_unix(sshd:session): session closed for user core Jul 9 09:29:26.121197 systemd[1]: sshd@5-172.24.4.7:22-172.24.4.1:52908.service: Deactivated successfully. Jul 9 09:29:26.125082 systemd[1]: session-8.scope: Deactivated successfully. Jul 9 09:29:26.127288 systemd-logind[1521]: Session 8 logged out. Waiting for processes to exit. Jul 9 09:29:26.133257 systemd[1]: Started sshd@6-172.24.4.7:22-172.24.4.1:52920.service - OpenSSH per-connection server daemon (172.24.4.1:52920). Jul 9 09:29:26.136210 systemd-logind[1521]: Removed session 8. Jul 9 09:29:26.487593 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 9 09:29:26.492058 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 09:29:26.926458 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 09:29:26.950225 (kubelet)[1797]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 09:29:27.092955 kubelet[1797]: E0709 09:29:27.092826 1797 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 09:29:27.098873 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 09:29:27.099209 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 09:29:27.100299 systemd[1]: kubelet.service: Consumed 487ms CPU time, 110.4M memory peak. Jul 9 09:29:27.615813 sshd[1786]: Accepted publickey for core from 172.24.4.1 port 52920 ssh2: RSA SHA256:7Z60MgsH9FU4JbF/SQ9a6BolKSBdaEiHKkJAV9eqiyI Jul 9 09:29:27.619389 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:29:27.632744 systemd-logind[1521]: New session 9 of user core. Jul 9 09:29:27.640965 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 9 09:29:28.115928 sudo[1805]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 9 09:29:28.116560 sudo[1805]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 09:29:28.137499 sudo[1805]: pam_unix(sudo:session): session closed for user root Jul 9 09:29:28.352687 sshd[1804]: Connection closed by 172.24.4.1 port 52920 Jul 9 09:29:28.354043 sshd-session[1786]: pam_unix(sshd:session): session closed for user core Jul 9 09:29:28.367401 systemd[1]: sshd@6-172.24.4.7:22-172.24.4.1:52920.service: Deactivated successfully. Jul 9 09:29:28.371329 systemd[1]: session-9.scope: Deactivated successfully. Jul 9 09:29:28.373797 systemd-logind[1521]: Session 9 logged out. Waiting for processes to exit. Jul 9 09:29:28.381084 systemd[1]: Started sshd@7-172.24.4.7:22-172.24.4.1:52928.service - OpenSSH per-connection server daemon (172.24.4.1:52928). Jul 9 09:29:28.383575 systemd-logind[1521]: Removed session 9. 
Jul 9 09:29:29.501870 sshd[1811]: Accepted publickey for core from 172.24.4.1 port 52928 ssh2: RSA SHA256:7Z60MgsH9FU4JbF/SQ9a6BolKSBdaEiHKkJAV9eqiyI Jul 9 09:29:29.505043 sshd-session[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:29:29.520889 systemd-logind[1521]: New session 10 of user core. Jul 9 09:29:29.535958 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 9 09:29:29.974903 sudo[1816]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 9 09:29:29.975212 sudo[1816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 09:29:29.986516 sudo[1816]: pam_unix(sudo:session): session closed for user root Jul 9 09:29:29.996089 sudo[1815]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 9 09:29:29.996540 sudo[1815]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 09:29:30.014760 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 9 09:29:30.082209 augenrules[1838]: No rules Jul 9 09:29:30.083595 systemd[1]: audit-rules.service: Deactivated successfully. Jul 9 09:29:30.083944 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 9 09:29:30.086803 sudo[1815]: pam_unix(sudo:session): session closed for user root Jul 9 09:29:30.241907 sshd[1814]: Connection closed by 172.24.4.1 port 52928 Jul 9 09:29:30.243368 sshd-session[1811]: pam_unix(sshd:session): session closed for user core Jul 9 09:29:30.259508 systemd[1]: sshd@7-172.24.4.7:22-172.24.4.1:52928.service: Deactivated successfully. Jul 9 09:29:30.263895 systemd[1]: session-10.scope: Deactivated successfully. Jul 9 09:29:30.268935 systemd-logind[1521]: Session 10 logged out. Waiting for processes to exit. Jul 9 09:29:30.274965 systemd[1]: Started sshd@8-172.24.4.7:22-172.24.4.1:52930.service - OpenSSH per-connection server daemon (172.24.4.1:52930). Jul 9 09:29:30.279849 systemd-logind[1521]: Removed session 10. Jul 9 09:29:31.447726 sshd[1847]: Accepted publickey for core from 172.24.4.1 port 52930 ssh2: RSA SHA256:7Z60MgsH9FU4JbF/SQ9a6BolKSBdaEiHKkJAV9eqiyI Jul 9 09:29:31.450889 sshd-session[1847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:29:31.464780 systemd-logind[1521]: New session 11 of user core. Jul 9 09:29:31.477326 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 9 09:29:31.895407 sudo[1851]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 9 09:29:31.897008 sudo[1851]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 09:29:32.760544 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jul 9 09:29:32.771891 (dockerd)[1870]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 9 09:29:33.215836 dockerd[1870]: time="2025-07-09T09:29:33.215014039Z" level=info msg="Starting up" Jul 9 09:29:33.216289 dockerd[1870]: time="2025-07-09T09:29:33.216267371Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 9 09:29:33.241984 dockerd[1870]: time="2025-07-09T09:29:33.241910357Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 9 09:29:33.284957 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1001169510-merged.mount: Deactivated successfully. Jul 9 09:29:33.305902 systemd[1]: var-lib-docker-metacopy\x2dcheck2960691987-merged.mount: Deactivated successfully. Jul 9 09:29:33.343764 dockerd[1870]: time="2025-07-09T09:29:33.343643027Z" level=info msg="Loading containers: start." Jul 9 09:29:33.366698 kernel: Initializing XFRM netlink socket Jul 9 09:29:33.793786 systemd-networkd[1435]: docker0: Link UP Jul 9 09:29:33.804307 dockerd[1870]: time="2025-07-09T09:29:33.804176661Z" level=info msg="Loading containers: done." Jul 9 09:29:33.848605 dockerd[1870]: time="2025-07-09T09:29:33.848340211Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 9 09:29:33.849437 dockerd[1870]: time="2025-07-09T09:29:33.849145081Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 9 09:29:33.849994 dockerd[1870]: time="2025-07-09T09:29:33.849690705Z" level=info msg="Initializing buildkit" Jul 9 09:29:33.911310 dockerd[1870]: time="2025-07-09T09:29:33.910997471Z" level=info msg="Completed buildkit initialization" Jul 9 09:29:33.929291 dockerd[1870]: time="2025-07-09T09:29:33.929144408Z" level=info msg="Daemon has completed initialization" Jul 9 09:29:33.929834 dockerd[1870]: time="2025-07-09T09:29:33.929619735Z" level=info msg="API listen on /run/docker.sock" Jul 9 09:29:33.930403 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 9 09:29:34.272035 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2769202681-merged.mount: Deactivated successfully. Jul 9 09:29:35.641284 containerd[1547]: time="2025-07-09T09:29:35.641074435Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 9 09:29:36.364295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2605471619.mount: Deactivated successfully. Jul 9 09:29:37.237342 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 9 09:29:37.242071 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 09:29:37.709948 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
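Note: the dockerd warning "Not using native diff for overlay2" is informational; with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled in the kernel, Docker falls back to its slower naive diff path, which mainly affects image builds rather than running containers. The chosen storage driver can be double-checked after startup; a sketch, assuming the docker CLI is installed on the node:

    docker info --format '{{.Driver}}'              # expected to print: overlay2
    docker info --format '{{json .DriverStatus}}'   # backing filesystem and native-diff details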
Jul 9 09:29:37.728300 (kubelet)[2141]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 09:29:37.811720 kubelet[2141]: E0709 09:29:37.811639 2141 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 09:29:37.814451 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 09:29:37.814997 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 09:29:37.815747 systemd[1]: kubelet.service: Consumed 247ms CPU time, 109.9M memory peak. Jul 9 09:29:38.571312 containerd[1547]: time="2025-07-09T09:29:38.571251292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:29:38.573177 containerd[1547]: time="2025-07-09T09:29:38.573145052Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077752" Jul 9 09:29:38.574837 containerd[1547]: time="2025-07-09T09:29:38.574777779Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:29:38.581346 containerd[1547]: time="2025-07-09T09:29:38.581256187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:29:38.583447 containerd[1547]: time="2025-07-09T09:29:38.582699415Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 2.941329959s" Jul 9 09:29:38.583447 containerd[1547]: time="2025-07-09T09:29:38.582768246Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 9 09:29:38.585139 containerd[1547]: time="2025-07-09T09:29:38.585030382Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 9 09:29:40.566677 containerd[1547]: time="2025-07-09T09:29:40.566500108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:29:40.568156 containerd[1547]: time="2025-07-09T09:29:40.568102281Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713302" Jul 9 09:29:40.569582 containerd[1547]: time="2025-07-09T09:29:40.569525039Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:29:40.572826 containerd[1547]: time="2025-07-09T09:29:40.572743278Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:29:40.573978 containerd[1547]: time="2025-07-09T09:29:40.573825220Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.988749078s" Jul 9 09:29:40.573978 containerd[1547]: time="2025-07-09T09:29:40.573867224Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 9 09:29:40.574834 containerd[1547]: time="2025-07-09T09:29:40.574812587Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 9 09:29:42.329520 containerd[1547]: time="2025-07-09T09:29:42.329370157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:29:42.330671 containerd[1547]: time="2025-07-09T09:29:42.330604966Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783679" Jul 9 09:29:42.332002 containerd[1547]: time="2025-07-09T09:29:42.331922501Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:29:42.335356 containerd[1547]: time="2025-07-09T09:29:42.335282800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:29:42.336643 containerd[1547]: time="2025-07-09T09:29:42.336276773Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.761239703s" Jul 9 09:29:42.336643 containerd[1547]: time="2025-07-09T09:29:42.336309771Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 9 09:29:42.336975 containerd[1547]: time="2025-07-09T09:29:42.336935916Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 9 09:29:43.891329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4076771734.mount: Deactivated successfully. 
Jul 9 09:29:44.471757 containerd[1547]: time="2025-07-09T09:29:44.471689691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:29:44.473573 containerd[1547]: time="2025-07-09T09:29:44.473520456Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383951" Jul 9 09:29:44.475275 containerd[1547]: time="2025-07-09T09:29:44.475234082Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:29:44.477592 containerd[1547]: time="2025-07-09T09:29:44.477523111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:29:44.478298 containerd[1547]: time="2025-07-09T09:29:44.478112389Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 2.141068241s" Jul 9 09:29:44.478298 containerd[1547]: time="2025-07-09T09:29:44.478147862Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 9 09:29:44.478891 containerd[1547]: time="2025-07-09T09:29:44.478871659Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 9 09:29:45.191038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3974137032.mount: Deactivated successfully. Jul 9 09:29:45.901799 update_engine[1525]: I20250709 09:29:45.901271 1525 update_attempter.cc:509] Updating boot flags... 
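Note: the update_engine entries (the earlier "Next update check in 4m17s" and "Updating boot flags..." here), together with locksmithd's reboot strategy, are Flatcar's A/B update machinery marking the freshly booted partition as good. If the client tools are present on the image (an assumption; they are not shown in this log), the updater and reboot-lock state can be inspected with:

    update_engine_client -status    # current operation and any staged NEW_VERSION
    locksmithctl status             # configured reboot strategy and held reboot locks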
Jul 9 09:29:46.669295 containerd[1547]: time="2025-07-09T09:29:46.669135288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:29:46.670701 containerd[1547]: time="2025-07-09T09:29:46.670673982Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jul 9 09:29:46.671895 containerd[1547]: time="2025-07-09T09:29:46.671818380Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:29:46.676699 containerd[1547]: time="2025-07-09T09:29:46.676405449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:29:46.677644 containerd[1547]: time="2025-07-09T09:29:46.677427057Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.1983024s" Jul 9 09:29:46.677644 containerd[1547]: time="2025-07-09T09:29:46.677531063Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 9 09:29:46.678908 containerd[1547]: time="2025-07-09T09:29:46.678888262Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 9 09:29:47.241033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2824924623.mount: Deactivated successfully. 
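Note: the PullImage/Pulled lines above come from containerd's CRI image service, so the control-plane images land in containerd's k8s.io namespace rather than in Docker's store. They can be listed directly once the pulls finish; a sketch, assuming the containerd client tools are available and crictl is pointed at /run/containerd/containerd.sock:

    ctr --namespace k8s.io images ls | grep -E 'kube-|coredns|pause|etcd'
    # or via the CRI endpoint
    crictl images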
Jul 9 09:29:47.254661 containerd[1547]: time="2025-07-09T09:29:47.254389675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 09:29:47.256460 containerd[1547]: time="2025-07-09T09:29:47.256364129Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jul 9 09:29:47.258289 containerd[1547]: time="2025-07-09T09:29:47.258171894Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 09:29:47.263562 containerd[1547]: time="2025-07-09T09:29:47.263403084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 09:29:47.266397 containerd[1547]: time="2025-07-09T09:29:47.266311885Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 587.296565ms" Jul 9 09:29:47.266565 containerd[1547]: time="2025-07-09T09:29:47.266391718Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 9 09:29:47.268220 containerd[1547]: time="2025-07-09T09:29:47.267795998Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 9 09:29:47.851311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 9 09:29:47.861689 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 09:29:47.894911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3942280795.mount: Deactivated successfully. Jul 9 09:29:48.533850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 09:29:48.549020 (kubelet)[2256]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 09:29:48.645336 kubelet[2256]: E0709 09:29:48.645200 2256 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 09:29:48.649911 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 09:29:48.650059 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 09:29:48.650733 systemd[1]: kubelet.service: Consumed 538ms CPU time, 110.3M memory peak. 
Jul 9 09:29:50.995494 containerd[1547]: time="2025-07-09T09:29:50.995417544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:29:50.997273 containerd[1547]: time="2025-07-09T09:29:50.996945188Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021" Jul 9 09:29:50.998421 containerd[1547]: time="2025-07-09T09:29:50.998388799Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:29:51.002038 containerd[1547]: time="2025-07-09T09:29:51.002003401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:29:51.003680 containerd[1547]: time="2025-07-09T09:29:51.003604464Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.735478618s" Jul 9 09:29:51.003801 containerd[1547]: time="2025-07-09T09:29:51.003780613Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 9 09:29:54.853823 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 09:29:54.855448 systemd[1]: kubelet.service: Consumed 538ms CPU time, 110.3M memory peak. Jul 9 09:29:54.864129 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 09:29:54.931189 systemd[1]: Reload requested from client PID 2333 ('systemctl') (unit session-11.scope)... Jul 9 09:29:54.931260 systemd[1]: Reloading... Jul 9 09:29:55.099917 zram_generator::config[2384]: No configuration found. Jul 9 09:29:55.274180 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 09:29:55.433178 systemd[1]: Reloading finished in 500 ms. Jul 9 09:29:55.545450 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 9 09:29:55.545547 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 9 09:29:55.546084 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 09:29:55.546150 systemd[1]: kubelet.service: Consumed 253ms CPU time, 98.3M memory peak. Jul 9 09:29:55.548185 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 09:29:55.807033 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 09:29:55.818249 (kubelet)[2444]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 09:29:55.918747 kubelet[2444]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 09:29:55.920085 kubelet[2444]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jul 9 09:29:55.920085 kubelet[2444]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 09:29:56.004912 kubelet[2444]: I0709 09:29:56.004258 2444 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 09:29:56.890783 kubelet[2444]: I0709 09:29:56.890696 2444 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 9 09:29:56.890783 kubelet[2444]: I0709 09:29:56.890739 2444 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 09:29:56.891208 kubelet[2444]: I0709 09:29:56.891035 2444 server.go:934] "Client rotation is on, will bootstrap in background" Jul 9 09:29:56.925333 kubelet[2444]: E0709 09:29:56.925186 2444 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.7:6443: connect: connection refused" logger="UnhandledError" Jul 9 09:29:56.926866 kubelet[2444]: I0709 09:29:56.926070 2444 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 09:29:56.944699 kubelet[2444]: I0709 09:29:56.944590 2444 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 9 09:29:56.951432 kubelet[2444]: I0709 09:29:56.951358 2444 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 9 09:29:56.952581 kubelet[2444]: I0709 09:29:56.952459 2444 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 9 09:29:56.952774 kubelet[2444]: I0709 09:29:56.952644 2444 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 09:29:56.952944 kubelet[2444]: I0709 09:29:56.952674 2444 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4386-0-0-w-15e87cee3a.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 9 09:29:56.953519 kubelet[2444]: I0709 09:29:56.952955 2444 topology_manager.go:138] "Creating topology manager with none policy" Jul 9 09:29:56.953519 kubelet[2444]: I0709 09:29:56.952967 2444 container_manager_linux.go:300] "Creating device plugin manager" Jul 9 09:29:56.953519 kubelet[2444]: I0709 09:29:56.953143 2444 state_mem.go:36] "Initialized new in-memory state store" Jul 9 09:29:56.957941 kubelet[2444]: I0709 09:29:56.957874 2444 kubelet.go:408] "Attempting to sync node with API server" Jul 9 09:29:56.957941 kubelet[2444]: I0709 09:29:56.957918 2444 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 09:29:56.958136 kubelet[2444]: I0709 09:29:56.958006 2444 kubelet.go:314] "Adding apiserver pod source" Jul 9 09:29:56.958136 kubelet[2444]: I0709 09:29:56.958063 2444 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 09:29:56.962383 kubelet[2444]: W0709 09:29:56.962244 2444 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4386-0-0-w-15e87cee3a.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.7:6443: connect: connection refused Jul 9 09:29:56.962383 kubelet[2444]: E0709 09:29:56.962357 2444 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.24.4.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4386-0-0-w-15e87cee3a.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.7:6443: connect: connection refused" logger="UnhandledError" Jul 9 09:29:56.966713 kubelet[2444]: I0709 09:29:56.965675 2444 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 9 09:29:56.967057 kubelet[2444]: I0709 09:29:56.967010 2444 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 9 09:29:56.967274 kubelet[2444]: W0709 09:29:56.967244 2444 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 9 09:29:56.969645 kubelet[2444]: W0709 09:29:56.969288 2444 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.7:6443: connect: connection refused Jul 9 09:29:56.969645 kubelet[2444]: E0709 09:29:56.969422 2444 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.7:6443: connect: connection refused" logger="UnhandledError" Jul 9 09:29:56.972909 kubelet[2444]: I0709 09:29:56.972886 2444 server.go:1274] "Started kubelet" Jul 9 09:29:56.974750 kubelet[2444]: I0709 09:29:56.974598 2444 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 9 09:29:56.979765 kubelet[2444]: I0709 09:29:56.979719 2444 server.go:449] "Adding debug handlers to kubelet server" Jul 9 09:29:56.980909 kubelet[2444]: I0709 09:29:56.980867 2444 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 09:29:56.981459 kubelet[2444]: I0709 09:29:56.981438 2444 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 09:29:56.983714 kubelet[2444]: E0709 09:29:56.981936 2444 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.7:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.7:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4386-0-0-w-15e87cee3a.novalocal.18508b4822153977 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4386-0-0-w-15e87cee3a.novalocal,UID:ci-4386-0-0-w-15e87cee3a.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4386-0-0-w-15e87cee3a.novalocal,},FirstTimestamp:2025-07-09 09:29:56.972845431 +0000 UTC m=+1.144076449,LastTimestamp:2025-07-09 09:29:56.972845431 +0000 UTC m=+1.144076449,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4386-0-0-w-15e87cee3a.novalocal,}" Jul 9 09:29:56.994677 kubelet[2444]: I0709 09:29:56.993322 2444 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 9 09:29:56.994677 kubelet[2444]: I0709 09:29:56.993494 2444 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 9 09:29:56.994677 kubelet[2444]: I0709 
09:29:56.994174 2444 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 9 09:29:56.996164 kubelet[2444]: I0709 09:29:56.996147 2444 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 9 09:29:56.996338 kubelet[2444]: I0709 09:29:56.996323 2444 reconciler.go:26] "Reconciler: start to sync state" Jul 9 09:29:56.996925 kubelet[2444]: W0709 09:29:56.996886 2444 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.7:6443: connect: connection refused Jul 9 09:29:56.997018 kubelet[2444]: E0709 09:29:56.997000 2444 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.7:6443: connect: connection refused" logger="UnhandledError" Jul 9 09:29:56.997267 kubelet[2444]: E0709 09:29:56.997247 2444 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 9 09:29:56.998018 kubelet[2444]: E0709 09:29:56.998001 2444 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4386-0-0-w-15e87cee3a.novalocal\" not found" Jul 9 09:29:56.998736 kubelet[2444]: E0709 09:29:56.998697 2444 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4386-0-0-w-15e87cee3a.novalocal?timeout=10s\": dial tcp 172.24.4.7:6443: connect: connection refused" interval="200ms" Jul 9 09:29:56.999464 kubelet[2444]: I0709 09:29:56.999446 2444 factory.go:221] Registration of the systemd container factory successfully Jul 9 09:29:57.000478 kubelet[2444]: I0709 09:29:57.000450 2444 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 09:29:57.002553 kubelet[2444]: I0709 09:29:57.002537 2444 factory.go:221] Registration of the containerd container factory successfully Jul 9 09:29:57.041536 kubelet[2444]: I0709 09:29:57.041493 2444 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 9 09:29:57.045541 kubelet[2444]: I0709 09:29:57.045477 2444 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 9 09:29:57.045753 kubelet[2444]: I0709 09:29:57.045682 2444 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 9 09:29:57.045827 kubelet[2444]: I0709 09:29:57.045816 2444 kubelet.go:2321] "Starting kubelet main sync loop" Jul 9 09:29:57.046034 kubelet[2444]: E0709 09:29:57.045944 2444 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 09:29:57.047778 kubelet[2444]: W0709 09:29:57.047487 2444 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.7:6443: connect: connection refused Jul 9 09:29:57.047895 kubelet[2444]: E0709 09:29:57.047860 2444 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.7:6443: connect: connection refused" logger="UnhandledError" Jul 9 09:29:57.048684 kubelet[2444]: I0709 09:29:57.048577 2444 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 9 09:29:57.048684 kubelet[2444]: I0709 09:29:57.048595 2444 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 9 09:29:57.048684 kubelet[2444]: I0709 09:29:57.048638 2444 state_mem.go:36] "Initialized new in-memory state store" Jul 9 09:29:57.055403 kubelet[2444]: I0709 09:29:57.055358 2444 policy_none.go:49] "None policy: Start" Jul 9 09:29:57.056704 kubelet[2444]: I0709 09:29:57.056602 2444 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 9 09:29:57.056902 kubelet[2444]: I0709 09:29:57.056732 2444 state_mem.go:35] "Initializing new in-memory state store" Jul 9 09:29:57.073341 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 9 09:29:57.091382 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 9 09:29:57.095978 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 9 09:29:57.098492 kubelet[2444]: E0709 09:29:57.098448 2444 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4386-0-0-w-15e87cee3a.novalocal\" not found" Jul 9 09:29:57.104665 kubelet[2444]: I0709 09:29:57.104600 2444 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 9 09:29:57.105435 kubelet[2444]: I0709 09:29:57.104837 2444 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 09:29:57.105435 kubelet[2444]: I0709 09:29:57.104868 2444 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 09:29:57.105435 kubelet[2444]: I0709 09:29:57.105354 2444 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 09:29:57.108175 kubelet[2444]: E0709 09:29:57.108139 2444 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4386-0-0-w-15e87cee3a.novalocal\" not found" Jul 9 09:29:57.179771 systemd[1]: Created slice kubepods-burstable-pod5331be543c8c4ba5081e43e8019a8bc6.slice - libcontainer container kubepods-burstable-pod5331be543c8c4ba5081e43e8019a8bc6.slice. 
Jul 9 09:29:57.198118 kubelet[2444]: I0709 09:29:57.197878 2444 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5331be543c8c4ba5081e43e8019a8bc6-kubeconfig\") pod \"kube-controller-manager-ci-4386-0-0-w-15e87cee3a.novalocal\" (UID: \"5331be543c8c4ba5081e43e8019a8bc6\") " pod="kube-system/kube-controller-manager-ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:29:57.199765 kubelet[2444]: E0709 09:29:57.199684 2444 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4386-0-0-w-15e87cee3a.novalocal?timeout=10s\": dial tcp 172.24.4.7:6443: connect: connection refused" interval="400ms" Jul 9 09:29:57.200131 kubelet[2444]: I0709 09:29:57.199776 2444 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49096be97c1fa4d882568a7a369c4c56-ca-certs\") pod \"kube-apiserver-ci-4386-0-0-w-15e87cee3a.novalocal\" (UID: \"49096be97c1fa4d882568a7a369c4c56\") " pod="kube-system/kube-apiserver-ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:29:57.200131 kubelet[2444]: I0709 09:29:57.199860 2444 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5331be543c8c4ba5081e43e8019a8bc6-k8s-certs\") pod \"kube-controller-manager-ci-4386-0-0-w-15e87cee3a.novalocal\" (UID: \"5331be543c8c4ba5081e43e8019a8bc6\") " pod="kube-system/kube-controller-manager-ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:29:57.200131 kubelet[2444]: I0709 09:29:57.199911 2444 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5331be543c8c4ba5081e43e8019a8bc6-flexvolume-dir\") pod \"kube-controller-manager-ci-4386-0-0-w-15e87cee3a.novalocal\" (UID: \"5331be543c8c4ba5081e43e8019a8bc6\") " pod="kube-system/kube-controller-manager-ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:29:57.200131 kubelet[2444]: I0709 09:29:57.199963 2444 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5331be543c8c4ba5081e43e8019a8bc6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4386-0-0-w-15e87cee3a.novalocal\" (UID: \"5331be543c8c4ba5081e43e8019a8bc6\") " pod="kube-system/kube-controller-manager-ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:29:57.200417 kubelet[2444]: I0709 09:29:57.200009 2444 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3e8370a7937fb8ec61cd84b224b2f239-kubeconfig\") pod \"kube-scheduler-ci-4386-0-0-w-15e87cee3a.novalocal\" (UID: \"3e8370a7937fb8ec61cd84b224b2f239\") " pod="kube-system/kube-scheduler-ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:29:57.200417 kubelet[2444]: I0709 09:29:57.200053 2444 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49096be97c1fa4d882568a7a369c4c56-k8s-certs\") pod \"kube-apiserver-ci-4386-0-0-w-15e87cee3a.novalocal\" (UID: \"49096be97c1fa4d882568a7a369c4c56\") " pod="kube-system/kube-apiserver-ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:29:57.200417 kubelet[2444]: I0709 09:29:57.200098 2444 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49096be97c1fa4d882568a7a369c4c56-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4386-0-0-w-15e87cee3a.novalocal\" (UID: \"49096be97c1fa4d882568a7a369c4c56\") " pod="kube-system/kube-apiserver-ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:29:57.200417 kubelet[2444]: I0709 09:29:57.200139 2444 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5331be543c8c4ba5081e43e8019a8bc6-ca-certs\") pod \"kube-controller-manager-ci-4386-0-0-w-15e87cee3a.novalocal\" (UID: \"5331be543c8c4ba5081e43e8019a8bc6\") " pod="kube-system/kube-controller-manager-ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:29:57.211610 systemd[1]: Created slice kubepods-burstable-pod3e8370a7937fb8ec61cd84b224b2f239.slice - libcontainer container kubepods-burstable-pod3e8370a7937fb8ec61cd84b224b2f239.slice. Jul 9 09:29:57.214024 kubelet[2444]: I0709 09:29:57.213124 2444 kubelet_node_status.go:72] "Attempting to register node" node="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:29:57.215793 kubelet[2444]: E0709 09:29:57.215345 2444 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.7:6443/api/v1/nodes\": dial tcp 172.24.4.7:6443: connect: connection refused" node="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:29:57.231129 systemd[1]: Created slice kubepods-burstable-pod49096be97c1fa4d882568a7a369c4c56.slice - libcontainer container kubepods-burstable-pod49096be97c1fa4d882568a7a369c4c56.slice. Jul 9 09:29:57.419764 kubelet[2444]: I0709 09:29:57.419687 2444 kubelet_node_status.go:72] "Attempting to register node" node="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:29:57.421037 kubelet[2444]: E0709 09:29:57.420975 2444 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.7:6443/api/v1/nodes\": dial tcp 172.24.4.7:6443: connect: connection refused" node="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:29:57.506556 containerd[1547]: time="2025-07-09T09:29:57.506319017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4386-0-0-w-15e87cee3a.novalocal,Uid:5331be543c8c4ba5081e43e8019a8bc6,Namespace:kube-system,Attempt:0,}" Jul 9 09:29:57.527990 containerd[1547]: time="2025-07-09T09:29:57.527887637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4386-0-0-w-15e87cee3a.novalocal,Uid:3e8370a7937fb8ec61cd84b224b2f239,Namespace:kube-system,Attempt:0,}" Jul 9 09:29:57.541359 containerd[1547]: time="2025-07-09T09:29:57.541274278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4386-0-0-w-15e87cee3a.novalocal,Uid:49096be97c1fa4d882568a7a369c4c56,Namespace:kube-system,Attempt:0,}" Jul 9 09:29:57.605697 kubelet[2444]: E0709 09:29:57.602324 2444 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4386-0-0-w-15e87cee3a.novalocal?timeout=10s\": dial tcp 172.24.4.7:6443: connect: connection refused" interval="800ms" Jul 9 09:29:57.607854 containerd[1547]: time="2025-07-09T09:29:57.607761818Z" level=info msg="connecting to shim 1b01b1ad8efa3e6fcdb3b910d60159f981c834498d775b80aa94e5634ed2e05a" address="unix:///run/containerd/s/0918eedd370bb233e3e839cbf0e59b7c339bf92a1ad9f16ceaec275f389ce431" namespace=k8s.io 
protocol=ttrpc version=3 Jul 9 09:29:57.632987 containerd[1547]: time="2025-07-09T09:29:57.632805340Z" level=info msg="connecting to shim 44daf5a1e73b28a7a7b6db2cc7c841245f8ebecb6e27a1567fcfdebd0198d26d" address="unix:///run/containerd/s/390599baebe21585863b629639f870db2ff5652fe61d94cb263ac5cd6f004117" namespace=k8s.io protocol=ttrpc version=3 Jul 9 09:29:57.671308 containerd[1547]: time="2025-07-09T09:29:57.671217279Z" level=info msg="connecting to shim 742dadb023a655e65f533fc8cc1a6cb6d302be16642f4a33cc7184f752c343a2" address="unix:///run/containerd/s/0a545e749180c8a8d8bb58d10e6e3865bbdf98b048f8811e9d568847e9911da1" namespace=k8s.io protocol=ttrpc version=3 Jul 9 09:29:57.677973 systemd[1]: Started cri-containerd-44daf5a1e73b28a7a7b6db2cc7c841245f8ebecb6e27a1567fcfdebd0198d26d.scope - libcontainer container 44daf5a1e73b28a7a7b6db2cc7c841245f8ebecb6e27a1567fcfdebd0198d26d. Jul 9 09:29:57.699868 systemd[1]: Started cri-containerd-1b01b1ad8efa3e6fcdb3b910d60159f981c834498d775b80aa94e5634ed2e05a.scope - libcontainer container 1b01b1ad8efa3e6fcdb3b910d60159f981c834498d775b80aa94e5634ed2e05a. Jul 9 09:29:57.709795 systemd[1]: Started cri-containerd-742dadb023a655e65f533fc8cc1a6cb6d302be16642f4a33cc7184f752c343a2.scope - libcontainer container 742dadb023a655e65f533fc8cc1a6cb6d302be16642f4a33cc7184f752c343a2. Jul 9 09:29:57.788848 containerd[1547]: time="2025-07-09T09:29:57.788696641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4386-0-0-w-15e87cee3a.novalocal,Uid:3e8370a7937fb8ec61cd84b224b2f239,Namespace:kube-system,Attempt:0,} returns sandbox id \"44daf5a1e73b28a7a7b6db2cc7c841245f8ebecb6e27a1567fcfdebd0198d26d\"" Jul 9 09:29:57.793832 containerd[1547]: time="2025-07-09T09:29:57.793790559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4386-0-0-w-15e87cee3a.novalocal,Uid:49096be97c1fa4d882568a7a369c4c56,Namespace:kube-system,Attempt:0,} returns sandbox id \"742dadb023a655e65f533fc8cc1a6cb6d302be16642f4a33cc7184f752c343a2\"" Jul 9 09:29:57.796498 containerd[1547]: time="2025-07-09T09:29:57.796462928Z" level=info msg="CreateContainer within sandbox \"44daf5a1e73b28a7a7b6db2cc7c841245f8ebecb6e27a1567fcfdebd0198d26d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 9 09:29:57.799198 containerd[1547]: time="2025-07-09T09:29:57.799166425Z" level=info msg="CreateContainer within sandbox \"742dadb023a655e65f533fc8cc1a6cb6d302be16642f4a33cc7184f752c343a2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 9 09:29:57.824584 kubelet[2444]: I0709 09:29:57.824516 2444 kubelet_node_status.go:72] "Attempting to register node" node="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:29:57.825023 kubelet[2444]: E0709 09:29:57.824991 2444 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.7:6443/api/v1/nodes\": dial tcp 172.24.4.7:6443: connect: connection refused" node="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:29:57.828736 containerd[1547]: time="2025-07-09T09:29:57.828662249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4386-0-0-w-15e87cee3a.novalocal,Uid:5331be543c8c4ba5081e43e8019a8bc6,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b01b1ad8efa3e6fcdb3b910d60159f981c834498d775b80aa94e5634ed2e05a\"" Jul 9 09:29:57.830562 containerd[1547]: time="2025-07-09T09:29:57.830526516Z" level=info msg="Container c177e22a1b8563bbe67e5017a63a7451123a58a44232c7d29a59c376577a6823: CDI devices from CRI Config.CDIDevices: []" 
Jul 9 09:29:57.832913 containerd[1547]: time="2025-07-09T09:29:57.832798612Z" level=info msg="CreateContainer within sandbox \"1b01b1ad8efa3e6fcdb3b910d60159f981c834498d775b80aa94e5634ed2e05a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 9 09:29:57.835150 containerd[1547]: time="2025-07-09T09:29:57.835094842Z" level=info msg="Container a6e0caefe3671dd9d47cfae701465de44ca33c8649308da276b2fab80be4e028: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:29:57.856804 containerd[1547]: time="2025-07-09T09:29:57.856731707Z" level=info msg="CreateContainer within sandbox \"44daf5a1e73b28a7a7b6db2cc7c841245f8ebecb6e27a1567fcfdebd0198d26d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c177e22a1b8563bbe67e5017a63a7451123a58a44232c7d29a59c376577a6823\"" Jul 9 09:29:57.857920 containerd[1547]: time="2025-07-09T09:29:57.857884261Z" level=info msg="StartContainer for \"c177e22a1b8563bbe67e5017a63a7451123a58a44232c7d29a59c376577a6823\"" Jul 9 09:29:57.861255 containerd[1547]: time="2025-07-09T09:29:57.861205159Z" level=info msg="connecting to shim c177e22a1b8563bbe67e5017a63a7451123a58a44232c7d29a59c376577a6823" address="unix:///run/containerd/s/390599baebe21585863b629639f870db2ff5652fe61d94cb263ac5cd6f004117" protocol=ttrpc version=3 Jul 9 09:29:57.864740 containerd[1547]: time="2025-07-09T09:29:57.864701358Z" level=info msg="Container 26eb25fb0310cf2cc333b0c2c1571c01cc93a53cccfd2d3b4534b3ff728a652e: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:29:57.875648 containerd[1547]: time="2025-07-09T09:29:57.874614682Z" level=info msg="CreateContainer within sandbox \"742dadb023a655e65f533fc8cc1a6cb6d302be16642f4a33cc7184f752c343a2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a6e0caefe3671dd9d47cfae701465de44ca33c8649308da276b2fab80be4e028\"" Jul 9 09:29:57.876803 containerd[1547]: time="2025-07-09T09:29:57.876769292Z" level=info msg="StartContainer for \"a6e0caefe3671dd9d47cfae701465de44ca33c8649308da276b2fab80be4e028\"" Jul 9 09:29:57.886475 containerd[1547]: time="2025-07-09T09:29:57.886430914Z" level=info msg="connecting to shim a6e0caefe3671dd9d47cfae701465de44ca33c8649308da276b2fab80be4e028" address="unix:///run/containerd/s/0a545e749180c8a8d8bb58d10e6e3865bbdf98b048f8811e9d568847e9911da1" protocol=ttrpc version=3 Jul 9 09:29:57.890907 systemd[1]: Started cri-containerd-c177e22a1b8563bbe67e5017a63a7451123a58a44232c7d29a59c376577a6823.scope - libcontainer container c177e22a1b8563bbe67e5017a63a7451123a58a44232c7d29a59c376577a6823. 
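Each "connecting to shim ..." / "Started cri-containerd-<id>.scope" pair above reflects the systemd cgroup driver in use here: every CRI container gets its own containerd shim plus a transient systemd scope unit. A sketch, assuming a systemd host with the containerd CRI plugin as in this journal, that lists those scopes:

    #!/usr/bin/env python3
    """Sketch: list the transient systemd scope units created for CRI containers,
    e.g. cri-containerd-c177e22a....scope for the kube-scheduler container started
    above. Assumes systemd plus containerd's CRI plugin with the systemd cgroup
    driver, which matches this log."""
    import subprocess

    out = subprocess.run(
        ["systemctl", "list-units", "--type=scope", "--all", "--no-legend", "--plain"],
        capture_output=True, text=True,
    )
    for line in out.stdout.splitlines():
        fields = line.split()
        if fields and fields[0].startswith("cri-containerd-"):
            print(fields[0])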
Jul 9 09:29:57.914329 containerd[1547]: time="2025-07-09T09:29:57.914216192Z" level=info msg="CreateContainer within sandbox \"1b01b1ad8efa3e6fcdb3b910d60159f981c834498d775b80aa94e5634ed2e05a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"26eb25fb0310cf2cc333b0c2c1571c01cc93a53cccfd2d3b4534b3ff728a652e\"" Jul 9 09:29:57.915105 containerd[1547]: time="2025-07-09T09:29:57.915060931Z" level=info msg="StartContainer for \"26eb25fb0310cf2cc333b0c2c1571c01cc93a53cccfd2d3b4534b3ff728a652e\"" Jul 9 09:29:57.920229 containerd[1547]: time="2025-07-09T09:29:57.920164598Z" level=info msg="connecting to shim 26eb25fb0310cf2cc333b0c2c1571c01cc93a53cccfd2d3b4534b3ff728a652e" address="unix:///run/containerd/s/0918eedd370bb233e3e839cbf0e59b7c339bf92a1ad9f16ceaec275f389ce431" protocol=ttrpc version=3 Jul 9 09:29:57.927810 systemd[1]: Started cri-containerd-a6e0caefe3671dd9d47cfae701465de44ca33c8649308da276b2fab80be4e028.scope - libcontainer container a6e0caefe3671dd9d47cfae701465de44ca33c8649308da276b2fab80be4e028. Jul 9 09:29:57.962816 systemd[1]: Started cri-containerd-26eb25fb0310cf2cc333b0c2c1571c01cc93a53cccfd2d3b4534b3ff728a652e.scope - libcontainer container 26eb25fb0310cf2cc333b0c2c1571c01cc93a53cccfd2d3b4534b3ff728a652e. Jul 9 09:29:57.979720 kubelet[2444]: W0709 09:29:57.979608 2444 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.7:6443: connect: connection refused Jul 9 09:29:57.980197 kubelet[2444]: E0709 09:29:57.979736 2444 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.7:6443: connect: connection refused" logger="UnhandledError" Jul 9 09:29:57.999052 containerd[1547]: time="2025-07-09T09:29:57.998998422Z" level=info msg="StartContainer for \"c177e22a1b8563bbe67e5017a63a7451123a58a44232c7d29a59c376577a6823\" returns successfully" Jul 9 09:29:58.018422 containerd[1547]: time="2025-07-09T09:29:58.018286137Z" level=info msg="StartContainer for \"a6e0caefe3671dd9d47cfae701465de44ca33c8649308da276b2fab80be4e028\" returns successfully" Jul 9 09:29:58.074347 kubelet[2444]: W0709 09:29:58.072935 2444 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.7:6443: connect: connection refused Jul 9 09:29:58.074347 kubelet[2444]: E0709 09:29:58.073033 2444 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.7:6443: connect: connection refused" logger="UnhandledError" Jul 9 09:29:58.088225 containerd[1547]: time="2025-07-09T09:29:58.088167504Z" level=info msg="StartContainer for \"26eb25fb0310cf2cc333b0c2c1571c01cc93a53cccfd2d3b4534b3ff728a652e\" returns successfully" Jul 9 09:29:58.631788 kubelet[2444]: I0709 09:29:58.631726 2444 kubelet_node_status.go:72] "Attempting to register node" node="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:00.324908 kubelet[2444]: E0709 09:30:00.324664 2444 nodelease.go:49] "Failed to get 
node when trying to set owner ref to the node lease" err="nodes \"ci-4386-0-0-w-15e87cee3a.novalocal\" not found" node="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:00.440201 kubelet[2444]: I0709 09:30:00.440097 2444 kubelet_node_status.go:75] "Successfully registered node" node="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:00.441029 kubelet[2444]: E0709 09:30:00.440327 2444 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4386-0-0-w-15e87cee3a.novalocal\": node \"ci-4386-0-0-w-15e87cee3a.novalocal\" not found" Jul 9 09:30:00.973007 kubelet[2444]: I0709 09:30:00.972915 2444 apiserver.go:52] "Watching apiserver" Jul 9 09:30:00.997385 kubelet[2444]: I0709 09:30:00.997248 2444 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 9 09:30:03.256191 systemd[1]: Reload requested from client PID 2717 ('systemctl') (unit session-11.scope)... Jul 9 09:30:03.256379 systemd[1]: Reloading... Jul 9 09:30:03.411758 zram_generator::config[2765]: No configuration found. Jul 9 09:30:03.543562 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 09:30:03.709970 systemd[1]: Reloading finished in 450 ms. Jul 9 09:30:03.743243 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 09:30:03.762447 systemd[1]: kubelet.service: Deactivated successfully. Jul 9 09:30:03.763157 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 09:30:03.763410 systemd[1]: kubelet.service: Consumed 1.831s CPU time, 130.2M memory peak. Jul 9 09:30:03.767151 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 09:30:04.286607 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 09:30:04.304548 (kubelet)[2827]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 09:30:04.414474 kubelet[2827]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 09:30:04.415662 kubelet[2827]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 9 09:30:04.415662 kubelet[2827]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
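The restarted kubelet instance (2827) logs the same deprecation warnings as the first: --container-runtime-endpoint and --volume-plugin-dir are expected to move into the kubelet config file. A sketch of what that fragment could look like; field names follow the kubelet.config.k8s.io/v1beta1 KubeletConfiguration schema, the flexvolume directory appears verbatim earlier in this journal, and the runtime endpoint is a conventional placeholder rather than a value read from this node:

    #!/usr/bin/env python3
    """Illustrative only: print a KubeletConfiguration skeleton covering the flags
    the warnings above say to set via the config file. Field names are from the
    kubelet.config.k8s.io/v1beta1 API; values are placeholders."""
    print("apiVersion: kubelet.config.k8s.io/v1beta1")
    print("kind: KubeletConfiguration")
    print("cgroupDriver: systemd")
    print("containerRuntimeEndpoint: unix:///run/containerd/containerd.sock")
    print("volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/")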
Jul 9 09:30:04.415662 kubelet[2827]: I0709 09:30:04.415083 2827 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 09:30:04.426149 kubelet[2827]: I0709 09:30:04.426102 2827 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 9 09:30:04.426337 kubelet[2827]: I0709 09:30:04.426326 2827 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 09:30:04.426735 kubelet[2827]: I0709 09:30:04.426718 2827 server.go:934] "Client rotation is on, will bootstrap in background" Jul 9 09:30:04.429264 kubelet[2827]: I0709 09:30:04.429244 2827 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 9 09:30:04.434205 kubelet[2827]: I0709 09:30:04.434157 2827 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 09:30:04.448265 kubelet[2827]: I0709 09:30:04.448199 2827 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 9 09:30:04.454907 kubelet[2827]: I0709 09:30:04.454871 2827 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 9 09:30:04.456587 kubelet[2827]: I0709 09:30:04.455907 2827 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 9 09:30:04.456587 kubelet[2827]: I0709 09:30:04.456070 2827 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 09:30:04.456587 kubelet[2827]: I0709 09:30:04.456101 2827 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4386-0-0-w-15e87cee3a.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 9 09:30:04.456587 kubelet[2827]: I0709 09:30:04.456391 2827 topology_manager.go:138] "Creating topology manager with none policy" Jul 9 09:30:04.457166 kubelet[2827]: I0709 09:30:04.456409 2827 container_manager_linux.go:300] "Creating device 
plugin manager" Jul 9 09:30:04.457166 kubelet[2827]: I0709 09:30:04.456515 2827 state_mem.go:36] "Initialized new in-memory state store" Jul 9 09:30:04.457166 kubelet[2827]: I0709 09:30:04.456756 2827 kubelet.go:408] "Attempting to sync node with API server" Jul 9 09:30:04.457257 kubelet[2827]: I0709 09:30:04.457181 2827 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 09:30:04.457257 kubelet[2827]: I0709 09:30:04.457252 2827 kubelet.go:314] "Adding apiserver pod source" Jul 9 09:30:04.460029 kubelet[2827]: I0709 09:30:04.457288 2827 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 09:30:04.463238 kubelet[2827]: I0709 09:30:04.463194 2827 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 9 09:30:04.466199 kubelet[2827]: I0709 09:30:04.466134 2827 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 9 09:30:04.468766 kubelet[2827]: I0709 09:30:04.468751 2827 server.go:1274] "Started kubelet" Jul 9 09:30:04.477474 kubelet[2827]: I0709 09:30:04.477386 2827 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 9 09:30:04.490255 kubelet[2827]: I0709 09:30:04.489365 2827 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 9 09:30:04.494756 kubelet[2827]: I0709 09:30:04.493744 2827 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 09:30:04.494756 kubelet[2827]: I0709 09:30:04.494234 2827 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 09:30:04.494756 kubelet[2827]: I0709 09:30:04.494610 2827 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 9 09:30:04.498514 kubelet[2827]: I0709 09:30:04.498475 2827 server.go:449] "Adding debug handlers to kubelet server" Jul 9 09:30:04.499411 kubelet[2827]: I0709 09:30:04.499216 2827 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 9 09:30:04.499495 kubelet[2827]: E0709 09:30:04.499467 2827 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4386-0-0-w-15e87cee3a.novalocal\" not found" Jul 9 09:30:04.502388 kubelet[2827]: I0709 09:30:04.502309 2827 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 9 09:30:04.502599 kubelet[2827]: I0709 09:30:04.502566 2827 reconciler.go:26] "Reconciler: start to sync state" Jul 9 09:30:04.519384 kubelet[2827]: E0709 09:30:04.519149 2827 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 9 09:30:04.523569 kubelet[2827]: I0709 09:30:04.523537 2827 factory.go:221] Registration of the containerd container factory successfully Jul 9 09:30:04.524343 kubelet[2827]: I0709 09:30:04.523710 2827 factory.go:221] Registration of the systemd container factory successfully Jul 9 09:30:04.524816 kubelet[2827]: I0709 09:30:04.524788 2827 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 09:30:04.537961 kubelet[2827]: I0709 09:30:04.537881 2827 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jul 9 09:30:04.540211 kubelet[2827]: I0709 09:30:04.539153 2827 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 9 09:30:04.540211 kubelet[2827]: I0709 09:30:04.539230 2827 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 9 09:30:04.540211 kubelet[2827]: I0709 09:30:04.539299 2827 kubelet.go:2321] "Starting kubelet main sync loop" Jul 9 09:30:04.540211 kubelet[2827]: E0709 09:30:04.539387 2827 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 09:30:04.639482 kubelet[2827]: E0709 09:30:04.639428 2827 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 9 09:30:04.645392 kubelet[2827]: I0709 09:30:04.644988 2827 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 9 09:30:04.645392 kubelet[2827]: I0709 09:30:04.645003 2827 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 9 09:30:04.645392 kubelet[2827]: I0709 09:30:04.645033 2827 state_mem.go:36] "Initialized new in-memory state store" Jul 9 09:30:04.645392 kubelet[2827]: I0709 09:30:04.645212 2827 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 9 09:30:04.645392 kubelet[2827]: I0709 09:30:04.645224 2827 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 9 09:30:04.645392 kubelet[2827]: I0709 09:30:04.645253 2827 policy_none.go:49] "None policy: Start" Jul 9 09:30:04.645975 kubelet[2827]: I0709 09:30:04.645949 2827 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 9 09:30:04.646039 kubelet[2827]: I0709 09:30:04.645987 2827 state_mem.go:35] "Initializing new in-memory state store" Jul 9 09:30:04.646179 kubelet[2827]: I0709 09:30:04.646134 2827 state_mem.go:75] "Updated machine memory state" Jul 9 09:30:04.656477 kubelet[2827]: I0709 09:30:04.656435 2827 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 9 09:30:04.660082 kubelet[2827]: I0709 09:30:04.658716 2827 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 09:30:04.660082 kubelet[2827]: I0709 09:30:04.659801 2827 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 09:30:04.660334 kubelet[2827]: I0709 09:30:04.660276 2827 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 09:30:04.794730 kubelet[2827]: I0709 09:30:04.794601 2827 kubelet_node_status.go:72] "Attempting to register node" node="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:04.806671 kubelet[2827]: I0709 09:30:04.806075 2827 kubelet_node_status.go:111] "Node was previously registered" node="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:04.806671 kubelet[2827]: I0709 09:30:04.806234 2827 kubelet_node_status.go:75] "Successfully registered node" node="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:04.861867 kubelet[2827]: W0709 09:30:04.861702 2827 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 9 09:30:04.868819 kubelet[2827]: W0709 09:30:04.868650 2827 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 9 09:30:04.869739 kubelet[2827]: W0709 09:30:04.869560 2827 warnings.go:70] metadata.name: this is used in the Pod's 
hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 9 09:30:04.904669 kubelet[2827]: I0709 09:30:04.904203 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49096be97c1fa4d882568a7a369c4c56-k8s-certs\") pod \"kube-apiserver-ci-4386-0-0-w-15e87cee3a.novalocal\" (UID: \"49096be97c1fa4d882568a7a369c4c56\") " pod="kube-system/kube-apiserver-ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:04.904669 kubelet[2827]: I0709 09:30:04.904277 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49096be97c1fa4d882568a7a369c4c56-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4386-0-0-w-15e87cee3a.novalocal\" (UID: \"49096be97c1fa4d882568a7a369c4c56\") " pod="kube-system/kube-apiserver-ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:04.904669 kubelet[2827]: I0709 09:30:04.904330 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5331be543c8c4ba5081e43e8019a8bc6-flexvolume-dir\") pod \"kube-controller-manager-ci-4386-0-0-w-15e87cee3a.novalocal\" (UID: \"5331be543c8c4ba5081e43e8019a8bc6\") " pod="kube-system/kube-controller-manager-ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:04.904669 kubelet[2827]: I0709 09:30:04.904363 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5331be543c8c4ba5081e43e8019a8bc6-k8s-certs\") pod \"kube-controller-manager-ci-4386-0-0-w-15e87cee3a.novalocal\" (UID: \"5331be543c8c4ba5081e43e8019a8bc6\") " pod="kube-system/kube-controller-manager-ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:04.905008 kubelet[2827]: I0709 09:30:04.904393 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5331be543c8c4ba5081e43e8019a8bc6-kubeconfig\") pod \"kube-controller-manager-ci-4386-0-0-w-15e87cee3a.novalocal\" (UID: \"5331be543c8c4ba5081e43e8019a8bc6\") " pod="kube-system/kube-controller-manager-ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:04.905008 kubelet[2827]: I0709 09:30:04.904424 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5331be543c8c4ba5081e43e8019a8bc6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4386-0-0-w-15e87cee3a.novalocal\" (UID: \"5331be543c8c4ba5081e43e8019a8bc6\") " pod="kube-system/kube-controller-manager-ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:04.905008 kubelet[2827]: I0709 09:30:04.904460 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3e8370a7937fb8ec61cd84b224b2f239-kubeconfig\") pod \"kube-scheduler-ci-4386-0-0-w-15e87cee3a.novalocal\" (UID: \"3e8370a7937fb8ec61cd84b224b2f239\") " pod="kube-system/kube-scheduler-ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:04.905008 kubelet[2827]: I0709 09:30:04.904493 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49096be97c1fa4d882568a7a369c4c56-ca-certs\") pod 
\"kube-apiserver-ci-4386-0-0-w-15e87cee3a.novalocal\" (UID: \"49096be97c1fa4d882568a7a369c4c56\") " pod="kube-system/kube-apiserver-ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:04.905209 kubelet[2827]: I0709 09:30:04.904527 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5331be543c8c4ba5081e43e8019a8bc6-ca-certs\") pod \"kube-controller-manager-ci-4386-0-0-w-15e87cee3a.novalocal\" (UID: \"5331be543c8c4ba5081e43e8019a8bc6\") " pod="kube-system/kube-controller-manager-ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:05.462428 kubelet[2827]: I0709 09:30:05.461843 2827 apiserver.go:52] "Watching apiserver" Jul 9 09:30:05.503485 kubelet[2827]: I0709 09:30:05.503423 2827 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 9 09:30:05.599810 kubelet[2827]: W0709 09:30:05.599761 2827 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 9 09:30:05.600018 kubelet[2827]: E0709 09:30:05.599900 2827 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4386-0-0-w-15e87cee3a.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:05.639890 kubelet[2827]: I0709 09:30:05.639802 2827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4386-0-0-w-15e87cee3a.novalocal" podStartSLOduration=1.63976229 podStartE2EDuration="1.63976229s" podCreationTimestamp="2025-07-09 09:30:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 09:30:05.626189422 +0000 UTC m=+1.297632471" watchObservedRunningTime="2025-07-09 09:30:05.63976229 +0000 UTC m=+1.311205329" Jul 9 09:30:05.651048 kubelet[2827]: I0709 09:30:05.650851 2827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4386-0-0-w-15e87cee3a.novalocal" podStartSLOduration=1.650804125 podStartE2EDuration="1.650804125s" podCreationTimestamp="2025-07-09 09:30:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 09:30:05.640085136 +0000 UTC m=+1.311528175" watchObservedRunningTime="2025-07-09 09:30:05.650804125 +0000 UTC m=+1.322247164" Jul 9 09:30:05.674962 kubelet[2827]: I0709 09:30:05.674400 2827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4386-0-0-w-15e87cee3a.novalocal" podStartSLOduration=1.674372474 podStartE2EDuration="1.674372474s" podCreationTimestamp="2025-07-09 09:30:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 09:30:05.651856262 +0000 UTC m=+1.323299311" watchObservedRunningTime="2025-07-09 09:30:05.674372474 +0000 UTC m=+1.345815513" Jul 9 09:30:08.585455 kubelet[2827]: I0709 09:30:08.585312 2827 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 9 09:30:08.586996 containerd[1547]: time="2025-07-09T09:30:08.586785022Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 9 09:30:08.587856 kubelet[2827]: I0709 09:30:08.587337 2827 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 9 09:30:09.516681 systemd[1]: Created slice kubepods-besteffort-pod60bcfb17_ac90_4bd5_a9bb_8a3c00700aba.slice - libcontainer container kubepods-besteffort-pod60bcfb17_ac90_4bd5_a9bb_8a3c00700aba.slice. Jul 9 09:30:09.543083 kubelet[2827]: I0709 09:30:09.542939 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60bcfb17-ac90-4bd5-a9bb-8a3c00700aba-lib-modules\") pod \"kube-proxy-5t4pq\" (UID: \"60bcfb17-ac90-4bd5-a9bb-8a3c00700aba\") " pod="kube-system/kube-proxy-5t4pq" Jul 9 09:30:09.543528 kubelet[2827]: I0709 09:30:09.543110 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8v86\" (UniqueName: \"kubernetes.io/projected/60bcfb17-ac90-4bd5-a9bb-8a3c00700aba-kube-api-access-s8v86\") pod \"kube-proxy-5t4pq\" (UID: \"60bcfb17-ac90-4bd5-a9bb-8a3c00700aba\") " pod="kube-system/kube-proxy-5t4pq" Jul 9 09:30:09.543528 kubelet[2827]: I0709 09:30:09.543183 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/60bcfb17-ac90-4bd5-a9bb-8a3c00700aba-kube-proxy\") pod \"kube-proxy-5t4pq\" (UID: \"60bcfb17-ac90-4bd5-a9bb-8a3c00700aba\") " pod="kube-system/kube-proxy-5t4pq" Jul 9 09:30:09.543528 kubelet[2827]: I0709 09:30:09.543229 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60bcfb17-ac90-4bd5-a9bb-8a3c00700aba-xtables-lock\") pod \"kube-proxy-5t4pq\" (UID: \"60bcfb17-ac90-4bd5-a9bb-8a3c00700aba\") " pod="kube-system/kube-proxy-5t4pq" Jul 9 09:30:09.663557 systemd[1]: Created slice kubepods-besteffort-pod43297bb8_4542_4c41_af08_7f0d5e93fec3.slice - libcontainer container kubepods-besteffort-pod43297bb8_4542_4c41_af08_7f0d5e93fec3.slice. 
Jul 9 09:30:09.745921 kubelet[2827]: I0709 09:30:09.745752 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2gjj\" (UniqueName: \"kubernetes.io/projected/43297bb8-4542-4c41-af08-7f0d5e93fec3-kube-api-access-h2gjj\") pod \"tigera-operator-5bf8dfcb4-kpvrt\" (UID: \"43297bb8-4542-4c41-af08-7f0d5e93fec3\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-kpvrt" Jul 9 09:30:09.745921 kubelet[2827]: I0709 09:30:09.745832 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/43297bb8-4542-4c41-af08-7f0d5e93fec3-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-kpvrt\" (UID: \"43297bb8-4542-4c41-af08-7f0d5e93fec3\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-kpvrt" Jul 9 09:30:09.835018 containerd[1547]: time="2025-07-09T09:30:09.834679388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5t4pq,Uid:60bcfb17-ac90-4bd5-a9bb-8a3c00700aba,Namespace:kube-system,Attempt:0,}" Jul 9 09:30:09.925954 containerd[1547]: time="2025-07-09T09:30:09.925830633Z" level=info msg="connecting to shim 69f851b2b1b900d07f8e52f3586bfa93e53ffdf3de991ff5d0a350112ac74b25" address="unix:///run/containerd/s/94b954fd7eb98d51498e0b1a0060c1dc5a9164000f768db82d3cb62339757e86" namespace=k8s.io protocol=ttrpc version=3 Jul 9 09:30:09.971587 containerd[1547]: time="2025-07-09T09:30:09.971461815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-kpvrt,Uid:43297bb8-4542-4c41-af08-7f0d5e93fec3,Namespace:tigera-operator,Attempt:0,}" Jul 9 09:30:09.971859 systemd[1]: Started cri-containerd-69f851b2b1b900d07f8e52f3586bfa93e53ffdf3de991ff5d0a350112ac74b25.scope - libcontainer container 69f851b2b1b900d07f8e52f3586bfa93e53ffdf3de991ff5d0a350112ac74b25. Jul 9 09:30:10.012881 containerd[1547]: time="2025-07-09T09:30:10.012790714Z" level=info msg="connecting to shim 4154caf5f5caedc69d4e1a63904733c0fd3877eba5e0c264d0e3dcae3b11af06" address="unix:///run/containerd/s/da8557aad8a321d9ca18b13195b7f7242edcc9315cee89ae141bb05186001d03" namespace=k8s.io protocol=ttrpc version=3 Jul 9 09:30:10.022298 containerd[1547]: time="2025-07-09T09:30:10.022138202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5t4pq,Uid:60bcfb17-ac90-4bd5-a9bb-8a3c00700aba,Namespace:kube-system,Attempt:0,} returns sandbox id \"69f851b2b1b900d07f8e52f3586bfa93e53ffdf3de991ff5d0a350112ac74b25\"" Jul 9 09:30:10.029574 containerd[1547]: time="2025-07-09T09:30:10.029191956Z" level=info msg="CreateContainer within sandbox \"69f851b2b1b900d07f8e52f3586bfa93e53ffdf3de991ff5d0a350112ac74b25\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 9 09:30:10.054773 containerd[1547]: time="2025-07-09T09:30:10.054709992Z" level=info msg="Container 3f6313aeaf7d89dafb26dc0d5296b82446a05a453900b22054484ee6c186c0e8: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:30:10.056813 systemd[1]: Started cri-containerd-4154caf5f5caedc69d4e1a63904733c0fd3877eba5e0c264d0e3dcae3b11af06.scope - libcontainer container 4154caf5f5caedc69d4e1a63904733c0fd3877eba5e0c264d0e3dcae3b11af06. 
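The RunPodSandbox / connecting-to-shim / "Started ...scope" sequence repeats here for the tigera-operator pod, just as it did for the control-plane static pods. One way to correlate the 64-character sandbox and container IDs in these entries with running workloads is crictl; the sketch below only assumes crictl is installed and configured to talk to containerd's CRI socket:

    #!/usr/bin/env python3
    """Sketch: list CRI pod sandboxes and containers so IDs from the journal
    (e.g. 4154caf5... for the tigera-operator sandbox) can be matched to pods.
    Assumes crictl is present and pointed at the containerd endpoint."""
    import subprocess

    for cmd in (["crictl", "pods"], ["crictl", "ps", "-a"]):
        print("$", " ".join(cmd))
        subprocess.run(cmd, check=False)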
Jul 9 09:30:10.072950 containerd[1547]: time="2025-07-09T09:30:10.072780818Z" level=info msg="CreateContainer within sandbox \"69f851b2b1b900d07f8e52f3586bfa93e53ffdf3de991ff5d0a350112ac74b25\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3f6313aeaf7d89dafb26dc0d5296b82446a05a453900b22054484ee6c186c0e8\"" Jul 9 09:30:10.076648 containerd[1547]: time="2025-07-09T09:30:10.076584739Z" level=info msg="StartContainer for \"3f6313aeaf7d89dafb26dc0d5296b82446a05a453900b22054484ee6c186c0e8\"" Jul 9 09:30:10.084006 containerd[1547]: time="2025-07-09T09:30:10.083920160Z" level=info msg="connecting to shim 3f6313aeaf7d89dafb26dc0d5296b82446a05a453900b22054484ee6c186c0e8" address="unix:///run/containerd/s/94b954fd7eb98d51498e0b1a0060c1dc5a9164000f768db82d3cb62339757e86" protocol=ttrpc version=3 Jul 9 09:30:10.121845 systemd[1]: Started cri-containerd-3f6313aeaf7d89dafb26dc0d5296b82446a05a453900b22054484ee6c186c0e8.scope - libcontainer container 3f6313aeaf7d89dafb26dc0d5296b82446a05a453900b22054484ee6c186c0e8. Jul 9 09:30:10.131753 containerd[1547]: time="2025-07-09T09:30:10.131676745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-kpvrt,Uid:43297bb8-4542-4c41-af08-7f0d5e93fec3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4154caf5f5caedc69d4e1a63904733c0fd3877eba5e0c264d0e3dcae3b11af06\"" Jul 9 09:30:10.135696 containerd[1547]: time="2025-07-09T09:30:10.135655933Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 9 09:30:10.185987 containerd[1547]: time="2025-07-09T09:30:10.185877922Z" level=info msg="StartContainer for \"3f6313aeaf7d89dafb26dc0d5296b82446a05a453900b22054484ee6c186c0e8\" returns successfully" Jul 9 09:30:11.784121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3133883113.mount: Deactivated successfully. 
Jul 9 09:30:12.879670 containerd[1547]: time="2025-07-09T09:30:12.879251905Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:30:12.883267 containerd[1547]: time="2025-07-09T09:30:12.883215595Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 9 09:30:12.884925 containerd[1547]: time="2025-07-09T09:30:12.884852861Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:30:12.889510 containerd[1547]: time="2025-07-09T09:30:12.888504357Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:30:12.889510 containerd[1547]: time="2025-07-09T09:30:12.889326967Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.753434902s" Jul 9 09:30:12.889510 containerd[1547]: time="2025-07-09T09:30:12.889401246Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 9 09:30:12.893368 containerd[1547]: time="2025-07-09T09:30:12.893319111Z" level=info msg="CreateContainer within sandbox \"4154caf5f5caedc69d4e1a63904733c0fd3877eba5e0c264d0e3dcae3b11af06\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 9 09:30:12.910654 containerd[1547]: time="2025-07-09T09:30:12.909235497Z" level=info msg="Container 217afc229ed5eaca0fc47147670bcb29d33ce8bb41e2f65b0b0fbf99476ec32f: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:30:12.923950 containerd[1547]: time="2025-07-09T09:30:12.923876075Z" level=info msg="CreateContainer within sandbox \"4154caf5f5caedc69d4e1a63904733c0fd3877eba5e0c264d0e3dcae3b11af06\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"217afc229ed5eaca0fc47147670bcb29d33ce8bb41e2f65b0b0fbf99476ec32f\"" Jul 9 09:30:12.925278 containerd[1547]: time="2025-07-09T09:30:12.925152626Z" level=info msg="StartContainer for \"217afc229ed5eaca0fc47147670bcb29d33ce8bb41e2f65b0b0fbf99476ec32f\"" Jul 9 09:30:12.928375 containerd[1547]: time="2025-07-09T09:30:12.928341085Z" level=info msg="connecting to shim 217afc229ed5eaca0fc47147670bcb29d33ce8bb41e2f65b0b0fbf99476ec32f" address="unix:///run/containerd/s/da8557aad8a321d9ca18b13195b7f7242edcc9315cee89ae141bb05186001d03" protocol=ttrpc version=3 Jul 9 09:30:12.961838 systemd[1]: Started cri-containerd-217afc229ed5eaca0fc47147670bcb29d33ce8bb41e2f65b0b0fbf99476ec32f.scope - libcontainer container 217afc229ed5eaca0fc47147670bcb29d33ce8bb41e2f65b0b0fbf99476ec32f. 
Jul 9 09:30:13.012734 containerd[1547]: time="2025-07-09T09:30:13.012645560Z" level=info msg="StartContainer for \"217afc229ed5eaca0fc47147670bcb29d33ce8bb41e2f65b0b0fbf99476ec32f\" returns successfully" Jul 9 09:30:13.686958 kubelet[2827]: I0709 09:30:13.684526 2827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5t4pq" podStartSLOduration=4.684172812 podStartE2EDuration="4.684172812s" podCreationTimestamp="2025-07-09 09:30:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 09:30:10.647774698 +0000 UTC m=+6.319217767" watchObservedRunningTime="2025-07-09 09:30:13.684172812 +0000 UTC m=+9.355615901" Jul 9 09:30:13.702075 kubelet[2827]: I0709 09:30:13.692854 2827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-kpvrt" podStartSLOduration=1.9356786700000002 podStartE2EDuration="4.692823537s" podCreationTimestamp="2025-07-09 09:30:09 +0000 UTC" firstStartedPulling="2025-07-09 09:30:10.133837509 +0000 UTC m=+5.805280558" lastFinishedPulling="2025-07-09 09:30:12.890982386 +0000 UTC m=+8.562425425" observedRunningTime="2025-07-09 09:30:13.690381325 +0000 UTC m=+9.361824414" watchObservedRunningTime="2025-07-09 09:30:13.692823537 +0000 UTC m=+9.364266696" Jul 9 09:30:19.957005 sudo[1851]: pam_unix(sudo:session): session closed for user root Jul 9 09:30:20.237776 sshd[1850]: Connection closed by 172.24.4.1 port 52930 Jul 9 09:30:20.238554 sshd-session[1847]: pam_unix(sshd:session): session closed for user core Jul 9 09:30:20.249455 systemd[1]: sshd@8-172.24.4.7:22-172.24.4.1:52930.service: Deactivated successfully. Jul 9 09:30:20.254965 systemd[1]: session-11.scope: Deactivated successfully. Jul 9 09:30:20.255545 systemd[1]: session-11.scope: Consumed 7.418s CPU time, 227.6M memory peak. Jul 9 09:30:20.261876 systemd-logind[1521]: Session 11 logged out. Waiting for processes to exit. Jul 9 09:30:20.264441 systemd-logind[1521]: Removed session 11. Jul 9 09:30:24.695331 systemd[1]: Created slice kubepods-besteffort-pod9f4e0283_e2c6_42e7_8771_6b643f89a262.slice - libcontainer container kubepods-besteffort-pod9f4e0283_e2c6_42e7_8771_6b643f89a262.slice. 
Jul 9 09:30:24.705149 kubelet[2827]: W0709 09:30:24.704779 2827 reflector.go:561] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-4386-0-0-w-15e87cee3a.novalocal" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4386-0-0-w-15e87cee3a.novalocal' and this object Jul 9 09:30:24.705149 kubelet[2827]: E0709 09:30:24.705010 2827 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ci-4386-0-0-w-15e87cee3a.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4386-0-0-w-15e87cee3a.novalocal' and this object" logger="UnhandledError" Jul 9 09:30:24.761069 kubelet[2827]: I0709 09:30:24.760363 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9f4e0283-e2c6-42e7-8771-6b643f89a262-typha-certs\") pod \"calico-typha-566f7d98c4-68zls\" (UID: \"9f4e0283-e2c6-42e7-8771-6b643f89a262\") " pod="calico-system/calico-typha-566f7d98c4-68zls" Jul 9 09:30:24.761426 kubelet[2827]: I0709 09:30:24.761375 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f4e0283-e2c6-42e7-8771-6b643f89a262-tigera-ca-bundle\") pod \"calico-typha-566f7d98c4-68zls\" (UID: \"9f4e0283-e2c6-42e7-8771-6b643f89a262\") " pod="calico-system/calico-typha-566f7d98c4-68zls" Jul 9 09:30:24.761607 kubelet[2827]: I0709 09:30:24.761580 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmwbr\" (UniqueName: \"kubernetes.io/projected/9f4e0283-e2c6-42e7-8771-6b643f89a262-kube-api-access-pmwbr\") pod \"calico-typha-566f7d98c4-68zls\" (UID: \"9f4e0283-e2c6-42e7-8771-6b643f89a262\") " pod="calico-system/calico-typha-566f7d98c4-68zls" Jul 9 09:30:24.916689 systemd[1]: Created slice kubepods-besteffort-pod0e31df50_115d_4a56_98ca_59725c0bf0ab.slice - libcontainer container kubepods-besteffort-pod0e31df50_115d_4a56_98ca_59725c0bf0ab.slice. 
Jul 9 09:30:24.964465 kubelet[2827]: I0709 09:30:24.963995 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0e31df50-115d-4a56-98ca-59725c0bf0ab-policysync\") pod \"calico-node-fldpr\" (UID: \"0e31df50-115d-4a56-98ca-59725c0bf0ab\") " pod="calico-system/calico-node-fldpr" Jul 9 09:30:24.964465 kubelet[2827]: I0709 09:30:24.964044 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e31df50-115d-4a56-98ca-59725c0bf0ab-tigera-ca-bundle\") pod \"calico-node-fldpr\" (UID: \"0e31df50-115d-4a56-98ca-59725c0bf0ab\") " pod="calico-system/calico-node-fldpr" Jul 9 09:30:24.964465 kubelet[2827]: I0709 09:30:24.964066 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0e31df50-115d-4a56-98ca-59725c0bf0ab-node-certs\") pod \"calico-node-fldpr\" (UID: \"0e31df50-115d-4a56-98ca-59725c0bf0ab\") " pod="calico-system/calico-node-fldpr" Jul 9 09:30:24.964465 kubelet[2827]: I0709 09:30:24.964112 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e31df50-115d-4a56-98ca-59725c0bf0ab-xtables-lock\") pod \"calico-node-fldpr\" (UID: \"0e31df50-115d-4a56-98ca-59725c0bf0ab\") " pod="calico-system/calico-node-fldpr" Jul 9 09:30:24.964465 kubelet[2827]: I0709 09:30:24.964138 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e31df50-115d-4a56-98ca-59725c0bf0ab-lib-modules\") pod \"calico-node-fldpr\" (UID: \"0e31df50-115d-4a56-98ca-59725c0bf0ab\") " pod="calico-system/calico-node-fldpr" Jul 9 09:30:24.965484 kubelet[2827]: I0709 09:30:24.964161 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z6vx\" (UniqueName: \"kubernetes.io/projected/0e31df50-115d-4a56-98ca-59725c0bf0ab-kube-api-access-5z6vx\") pod \"calico-node-fldpr\" (UID: \"0e31df50-115d-4a56-98ca-59725c0bf0ab\") " pod="calico-system/calico-node-fldpr" Jul 9 09:30:24.965484 kubelet[2827]: I0709 09:30:24.964187 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0e31df50-115d-4a56-98ca-59725c0bf0ab-cni-net-dir\") pod \"calico-node-fldpr\" (UID: \"0e31df50-115d-4a56-98ca-59725c0bf0ab\") " pod="calico-system/calico-node-fldpr" Jul 9 09:30:24.965484 kubelet[2827]: I0709 09:30:24.964207 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0e31df50-115d-4a56-98ca-59725c0bf0ab-flexvol-driver-host\") pod \"calico-node-fldpr\" (UID: \"0e31df50-115d-4a56-98ca-59725c0bf0ab\") " pod="calico-system/calico-node-fldpr" Jul 9 09:30:24.965484 kubelet[2827]: I0709 09:30:24.964286 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0e31df50-115d-4a56-98ca-59725c0bf0ab-var-lib-calico\") pod \"calico-node-fldpr\" (UID: \"0e31df50-115d-4a56-98ca-59725c0bf0ab\") " pod="calico-system/calico-node-fldpr" Jul 9 09:30:24.965484 kubelet[2827]: I0709 09:30:24.964312 2827 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0e31df50-115d-4a56-98ca-59725c0bf0ab-cni-log-dir\") pod \"calico-node-fldpr\" (UID: \"0e31df50-115d-4a56-98ca-59725c0bf0ab\") " pod="calico-system/calico-node-fldpr" Jul 9 09:30:24.966023 kubelet[2827]: I0709 09:30:24.964332 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0e31df50-115d-4a56-98ca-59725c0bf0ab-cni-bin-dir\") pod \"calico-node-fldpr\" (UID: \"0e31df50-115d-4a56-98ca-59725c0bf0ab\") " pod="calico-system/calico-node-fldpr" Jul 9 09:30:24.966023 kubelet[2827]: I0709 09:30:24.964350 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0e31df50-115d-4a56-98ca-59725c0bf0ab-var-run-calico\") pod \"calico-node-fldpr\" (UID: \"0e31df50-115d-4a56-98ca-59725c0bf0ab\") " pod="calico-system/calico-node-fldpr" Jul 9 09:30:25.073265 kubelet[2827]: E0709 09:30:25.073215 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.073265 kubelet[2827]: W0709 09:30:25.073253 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.073463 kubelet[2827]: E0709 09:30:25.073320 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.075823 kubelet[2827]: E0709 09:30:25.075789 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.075940 kubelet[2827]: W0709 09:30:25.075846 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.075985 kubelet[2827]: E0709 09:30:25.075938 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.079714 kubelet[2827]: E0709 09:30:25.077744 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.079714 kubelet[2827]: W0709 09:30:25.077776 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.079714 kubelet[2827]: E0709 09:30:25.077832 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 09:30:25.080145 kubelet[2827]: E0709 09:30:25.080118 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.081021 kubelet[2827]: W0709 09:30:25.080559 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.081021 kubelet[2827]: E0709 09:30:25.080650 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.081829 kubelet[2827]: E0709 09:30:25.081804 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.081829 kubelet[2827]: W0709 09:30:25.081882 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.082380 kubelet[2827]: E0709 09:30:25.082274 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.083520 kubelet[2827]: E0709 09:30:25.083500 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.083655 kubelet[2827]: W0709 09:30:25.083618 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.083776 kubelet[2827]: E0709 09:30:25.083743 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.084330 kubelet[2827]: E0709 09:30:25.084076 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.084330 kubelet[2827]: W0709 09:30:25.084090 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.084330 kubelet[2827]: E0709 09:30:25.084131 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.085395 kubelet[2827]: E0709 09:30:25.085213 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.085395 kubelet[2827]: W0709 09:30:25.085225 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.085395 kubelet[2827]: E0709 09:30:25.085288 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 09:30:25.085560 kubelet[2827]: E0709 09:30:25.085545 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.085815 kubelet[2827]: W0709 09:30:25.085799 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.086131 kubelet[2827]: E0709 09:30:25.085995 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.086974 kubelet[2827]: E0709 09:30:25.086924 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.087318 kubelet[2827]: W0709 09:30:25.087065 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.087318 kubelet[2827]: E0709 09:30:25.087108 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.088085 kubelet[2827]: E0709 09:30:25.087790 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.088085 kubelet[2827]: W0709 09:30:25.087886 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.088085 kubelet[2827]: E0709 09:30:25.087921 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.088552 kubelet[2827]: E0709 09:30:25.088528 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.089017 kubelet[2827]: W0709 09:30:25.089000 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.089195 kubelet[2827]: E0709 09:30:25.089152 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.089535 kubelet[2827]: E0709 09:30:25.089418 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.089738 kubelet[2827]: W0709 09:30:25.089603 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.090064 kubelet[2827]: E0709 09:30:25.089891 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 09:30:25.090347 kubelet[2827]: E0709 09:30:25.090300 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.090987 kubelet[2827]: W0709 09:30:25.090577 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.090987 kubelet[2827]: E0709 09:30:25.090667 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.091297 kubelet[2827]: E0709 09:30:25.091282 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.091450 kubelet[2827]: W0709 09:30:25.091415 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.091798 kubelet[2827]: E0709 09:30:25.091708 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.092245 kubelet[2827]: E0709 09:30:25.092193 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.092772 kubelet[2827]: W0709 09:30:25.092571 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.092772 kubelet[2827]: E0709 09:30:25.092642 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.093366 kubelet[2827]: E0709 09:30:25.093262 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.093621 kubelet[2827]: W0709 09:30:25.093441 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.093621 kubelet[2827]: E0709 09:30:25.093467 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.114657 kubelet[2827]: E0709 09:30:25.114544 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.114657 kubelet[2827]: W0709 09:30:25.114573 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.114657 kubelet[2827]: E0709 09:30:25.114601 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 09:30:25.126659 kubelet[2827]: E0709 09:30:25.125577 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d74jx" podUID="afe07b1f-0a7b-4bcd-a38a-6e433e4d698a" Jul 9 09:30:25.149587 kubelet[2827]: E0709 09:30:25.149524 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.149587 kubelet[2827]: W0709 09:30:25.149555 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.149587 kubelet[2827]: E0709 09:30:25.149577 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.150968 kubelet[2827]: E0709 09:30:25.150903 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.150968 kubelet[2827]: W0709 09:30:25.150938 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.150968 kubelet[2827]: E0709 09:30:25.150953 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.151235 kubelet[2827]: E0709 09:30:25.151153 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.151235 kubelet[2827]: W0709 09:30:25.151188 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.151235 kubelet[2827]: E0709 09:30:25.151199 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.151521 kubelet[2827]: E0709 09:30:25.151385 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.151521 kubelet[2827]: W0709 09:30:25.151395 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.151521 kubelet[2827]: E0709 09:30:25.151405 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 09:30:25.151734 kubelet[2827]: E0709 09:30:25.151674 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.151734 kubelet[2827]: W0709 09:30:25.151691 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.151734 kubelet[2827]: E0709 09:30:25.151702 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.152283 kubelet[2827]: E0709 09:30:25.152233 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.152330 kubelet[2827]: W0709 09:30:25.152300 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.152330 kubelet[2827]: E0709 09:30:25.152314 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.152549 kubelet[2827]: E0709 09:30:25.152494 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.152549 kubelet[2827]: W0709 09:30:25.152504 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.152549 kubelet[2827]: E0709 09:30:25.152516 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.152912 kubelet[2827]: E0709 09:30:25.152869 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.152912 kubelet[2827]: W0709 09:30:25.152885 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.152912 kubelet[2827]: E0709 09:30:25.152896 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.153310 kubelet[2827]: E0709 09:30:25.153244 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.153310 kubelet[2827]: W0709 09:30:25.153310 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.153751 kubelet[2827]: E0709 09:30:25.153334 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 09:30:25.153751 kubelet[2827]: E0709 09:30:25.153749 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.153830 kubelet[2827]: W0709 09:30:25.153760 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.153830 kubelet[2827]: E0709 09:30:25.153791 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.154050 kubelet[2827]: E0709 09:30:25.153950 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.154050 kubelet[2827]: W0709 09:30:25.153965 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.154050 kubelet[2827]: E0709 09:30:25.153975 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.154663 kubelet[2827]: E0709 09:30:25.154497 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.154663 kubelet[2827]: W0709 09:30:25.154512 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.154663 kubelet[2827]: E0709 09:30:25.154522 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.155571 kubelet[2827]: E0709 09:30:25.155090 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.155571 kubelet[2827]: W0709 09:30:25.155101 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.155571 kubelet[2827]: E0709 09:30:25.155148 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.155747 kubelet[2827]: E0709 09:30:25.155708 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.155747 kubelet[2827]: W0709 09:30:25.155720 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.155747 kubelet[2827]: E0709 09:30:25.155731 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 09:30:25.156379 kubelet[2827]: E0709 09:30:25.155940 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.156379 kubelet[2827]: W0709 09:30:25.155956 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.156379 kubelet[2827]: E0709 09:30:25.155967 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.156379 kubelet[2827]: E0709 09:30:25.156267 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.156379 kubelet[2827]: W0709 09:30:25.156278 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.156379 kubelet[2827]: E0709 09:30:25.156289 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.156709 kubelet[2827]: E0709 09:30:25.156593 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.156709 kubelet[2827]: W0709 09:30:25.156605 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.156709 kubelet[2827]: E0709 09:30:25.156616 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.157455 kubelet[2827]: E0709 09:30:25.157112 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.157455 kubelet[2827]: W0709 09:30:25.157130 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.157455 kubelet[2827]: E0709 09:30:25.157140 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.157455 kubelet[2827]: E0709 09:30:25.157305 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.157455 kubelet[2827]: W0709 09:30:25.157315 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.157455 kubelet[2827]: E0709 09:30:25.157333 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 09:30:25.158173 kubelet[2827]: E0709 09:30:25.158132 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.158173 kubelet[2827]: W0709 09:30:25.158150 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.158173 kubelet[2827]: E0709 09:30:25.158161 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.166111 kubelet[2827]: E0709 09:30:25.166065 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.166111 kubelet[2827]: W0709 09:30:25.166087 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.166111 kubelet[2827]: E0709 09:30:25.166103 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.166388 kubelet[2827]: I0709 09:30:25.166132 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/afe07b1f-0a7b-4bcd-a38a-6e433e4d698a-varrun\") pod \"csi-node-driver-d74jx\" (UID: \"afe07b1f-0a7b-4bcd-a38a-6e433e4d698a\") " pod="calico-system/csi-node-driver-d74jx" Jul 9 09:30:25.166388 kubelet[2827]: E0709 09:30:25.166354 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.166388 kubelet[2827]: W0709 09:30:25.166366 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.166526 kubelet[2827]: E0709 09:30:25.166390 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.166526 kubelet[2827]: I0709 09:30:25.166410 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/afe07b1f-0a7b-4bcd-a38a-6e433e4d698a-kubelet-dir\") pod \"csi-node-driver-d74jx\" (UID: \"afe07b1f-0a7b-4bcd-a38a-6e433e4d698a\") " pod="calico-system/csi-node-driver-d74jx" Jul 9 09:30:25.166601 kubelet[2827]: E0709 09:30:25.166591 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.166662 kubelet[2827]: W0709 09:30:25.166603 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.166770 kubelet[2827]: E0709 09:30:25.166711 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 09:30:25.166827 kubelet[2827]: E0709 09:30:25.166793 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.166827 kubelet[2827]: W0709 09:30:25.166802 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.166827 kubelet[2827]: E0709 09:30:25.166812 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.167053 kubelet[2827]: I0709 09:30:25.166741 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/afe07b1f-0a7b-4bcd-a38a-6e433e4d698a-socket-dir\") pod \"csi-node-driver-d74jx\" (UID: \"afe07b1f-0a7b-4bcd-a38a-6e433e4d698a\") " pod="calico-system/csi-node-driver-d74jx" Jul 9 09:30:25.167053 kubelet[2827]: E0709 09:30:25.167017 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.167053 kubelet[2827]: W0709 09:30:25.167026 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.167053 kubelet[2827]: E0709 09:30:25.167049 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.167378 kubelet[2827]: E0709 09:30:25.167346 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.167378 kubelet[2827]: W0709 09:30:25.167365 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.167378 kubelet[2827]: E0709 09:30:25.167379 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.167741 kubelet[2827]: E0709 09:30:25.167710 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.167741 kubelet[2827]: W0709 09:30:25.167728 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.167844 kubelet[2827]: E0709 09:30:25.167819 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 09:30:25.168025 kubelet[2827]: E0709 09:30:25.168005 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.168025 kubelet[2827]: W0709 09:30:25.168021 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.168130 kubelet[2827]: E0709 09:30:25.168108 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.168293 kubelet[2827]: E0709 09:30:25.168273 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.168293 kubelet[2827]: W0709 09:30:25.168288 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.168412 kubelet[2827]: E0709 09:30:25.168312 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.168540 kubelet[2827]: E0709 09:30:25.168484 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.168540 kubelet[2827]: W0709 09:30:25.168500 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.168540 kubelet[2827]: E0709 09:30:25.168510 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.168745 kubelet[2827]: E0709 09:30:25.168724 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.168745 kubelet[2827]: W0709 09:30:25.168740 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.168854 kubelet[2827]: E0709 09:30:25.168750 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 09:30:25.168854 kubelet[2827]: I0709 09:30:25.168783 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/afe07b1f-0a7b-4bcd-a38a-6e433e4d698a-registration-dir\") pod \"csi-node-driver-d74jx\" (UID: \"afe07b1f-0a7b-4bcd-a38a-6e433e4d698a\") " pod="calico-system/csi-node-driver-d74jx" Jul 9 09:30:25.168980 kubelet[2827]: E0709 09:30:25.168959 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.168980 kubelet[2827]: W0709 09:30:25.168976 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.169085 kubelet[2827]: E0709 09:30:25.168990 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.169085 kubelet[2827]: I0709 09:30:25.169008 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv6cx\" (UniqueName: \"kubernetes.io/projected/afe07b1f-0a7b-4bcd-a38a-6e433e4d698a-kube-api-access-wv6cx\") pod \"csi-node-driver-d74jx\" (UID: \"afe07b1f-0a7b-4bcd-a38a-6e433e4d698a\") " pod="calico-system/csi-node-driver-d74jx" Jul 9 09:30:25.169406 kubelet[2827]: E0709 09:30:25.169381 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.169406 kubelet[2827]: W0709 09:30:25.169399 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.169512 kubelet[2827]: E0709 09:30:25.169424 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.169652 kubelet[2827]: E0709 09:30:25.169594 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.169652 kubelet[2827]: W0709 09:30:25.169610 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.169738 kubelet[2827]: E0709 09:30:25.169670 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.169904 kubelet[2827]: E0709 09:30:25.169877 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.169904 kubelet[2827]: W0709 09:30:25.169895 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.170027 kubelet[2827]: E0709 09:30:25.169906 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 09:30:25.170123 kubelet[2827]: E0709 09:30:25.170087 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.170123 kubelet[2827]: W0709 09:30:25.170098 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.170123 kubelet[2827]: E0709 09:30:25.170108 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.229168 containerd[1547]: time="2025-07-09T09:30:25.227134645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fldpr,Uid:0e31df50-115d-4a56-98ca-59725c0bf0ab,Namespace:calico-system,Attempt:0,}" Jul 9 09:30:25.272843 kubelet[2827]: E0709 09:30:25.272158 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.272843 kubelet[2827]: W0709 09:30:25.272727 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.272843 kubelet[2827]: E0709 09:30:25.272756 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.273754 kubelet[2827]: E0709 09:30:25.273658 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.273754 kubelet[2827]: W0709 09:30:25.273672 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.273754 kubelet[2827]: E0709 09:30:25.273699 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.274177 kubelet[2827]: E0709 09:30:25.274138 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.274177 kubelet[2827]: W0709 09:30:25.274155 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.274324 kubelet[2827]: E0709 09:30:25.274299 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 09:30:25.275215 kubelet[2827]: E0709 09:30:25.275186 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.275215 kubelet[2827]: W0709 09:30:25.275202 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.275489 kubelet[2827]: E0709 09:30:25.275459 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.276125 kubelet[2827]: E0709 09:30:25.276018 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.276125 kubelet[2827]: W0709 09:30:25.276034 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.276218 kubelet[2827]: E0709 09:30:25.276203 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.277202 kubelet[2827]: E0709 09:30:25.276527 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.277202 kubelet[2827]: W0709 09:30:25.276538 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.277202 kubelet[2827]: E0709 09:30:25.276922 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.277202 kubelet[2827]: E0709 09:30:25.277071 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.277202 kubelet[2827]: W0709 09:30:25.277082 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.277202 kubelet[2827]: E0709 09:30:25.277136 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.277651 kubelet[2827]: E0709 09:30:25.277458 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.277651 kubelet[2827]: W0709 09:30:25.277475 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.278648 kubelet[2827]: E0709 09:30:25.277879 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 09:30:25.278648 kubelet[2827]: E0709 09:30:25.278128 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.278648 kubelet[2827]: W0709 09:30:25.278139 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.278648 kubelet[2827]: E0709 09:30:25.278274 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.278824 kubelet[2827]: E0709 09:30:25.278799 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.278824 kubelet[2827]: W0709 09:30:25.278818 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.278952 kubelet[2827]: E0709 09:30:25.278926 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.279251 kubelet[2827]: E0709 09:30:25.279181 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.279251 kubelet[2827]: W0709 09:30:25.279197 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.280331 kubelet[2827]: E0709 09:30:25.279605 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.280331 kubelet[2827]: E0709 09:30:25.279843 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.280331 kubelet[2827]: W0709 09:30:25.279854 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.280495 kubelet[2827]: E0709 09:30:25.280363 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.282048 kubelet[2827]: E0709 09:30:25.280588 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.282048 kubelet[2827]: W0709 09:30:25.280606 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.282048 kubelet[2827]: E0709 09:30:25.280673 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 09:30:25.282048 kubelet[2827]: E0709 09:30:25.281761 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.282048 kubelet[2827]: W0709 09:30:25.281774 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.282048 kubelet[2827]: E0709 09:30:25.281840 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.282048 kubelet[2827]: E0709 09:30:25.282034 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.282048 kubelet[2827]: W0709 09:30:25.282045 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.282921 kubelet[2827]: E0709 09:30:25.282209 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.282921 kubelet[2827]: E0709 09:30:25.282297 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.282921 kubelet[2827]: W0709 09:30:25.282307 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.282921 kubelet[2827]: E0709 09:30:25.282429 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.282921 kubelet[2827]: E0709 09:30:25.282524 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.282921 kubelet[2827]: W0709 09:30:25.282537 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.283698 kubelet[2827]: E0709 09:30:25.283669 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.284935 kubelet[2827]: E0709 09:30:25.284909 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.284935 kubelet[2827]: W0709 09:30:25.284927 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.285160 kubelet[2827]: E0709 09:30:25.285134 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 09:30:25.285196 kubelet[2827]: E0709 09:30:25.285186 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.285196 kubelet[2827]: W0709 09:30:25.285195 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.285307 kubelet[2827]: E0709 09:30:25.285284 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.285481 kubelet[2827]: E0709 09:30:25.285444 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.285481 kubelet[2827]: W0709 09:30:25.285459 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.285597 kubelet[2827]: E0709 09:30:25.285545 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.286797 kubelet[2827]: E0709 09:30:25.286757 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.286797 kubelet[2827]: W0709 09:30:25.286777 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.286930 kubelet[2827]: E0709 09:30:25.286896 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.287200 kubelet[2827]: E0709 09:30:25.287160 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.287200 kubelet[2827]: W0709 09:30:25.287195 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.287455 kubelet[2827]: E0709 09:30:25.287247 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.287603 kubelet[2827]: E0709 09:30:25.287478 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.287603 kubelet[2827]: W0709 09:30:25.287489 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.287936 kubelet[2827]: E0709 09:30:25.287912 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 09:30:25.288969 containerd[1547]: time="2025-07-09T09:30:25.288918993Z" level=info msg="connecting to shim e449e5cb25b0d6ca4eb69425928aacdd824c881ae697ae41bdd5f8d74bdd5b6c" address="unix:///run/containerd/s/00975754f33d24a6fc46f482ca20b018cc4ae85b016ab9f27ebf9ddb903ac8f4" namespace=k8s.io protocol=ttrpc version=3 Jul 9 09:30:25.289180 kubelet[2827]: E0709 09:30:25.289041 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.289180 kubelet[2827]: W0709 09:30:25.289052 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.289180 kubelet[2827]: E0709 09:30:25.289069 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.289557 kubelet[2827]: E0709 09:30:25.289502 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.289557 kubelet[2827]: W0709 09:30:25.289519 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.289557 kubelet[2827]: E0709 09:30:25.289531 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.291423 kubelet[2827]: E0709 09:30:25.291176 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.291423 kubelet[2827]: W0709 09:30:25.291312 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.291423 kubelet[2827]: E0709 09:30:25.291346 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.319971 kubelet[2827]: E0709 09:30:25.319551 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.319971 kubelet[2827]: W0709 09:30:25.319579 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.320337 kubelet[2827]: E0709 09:30:25.320239 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.344197 systemd[1]: Started cri-containerd-e449e5cb25b0d6ca4eb69425928aacdd824c881ae697ae41bdd5f8d74bdd5b6c.scope - libcontainer container e449e5cb25b0d6ca4eb69425928aacdd824c881ae697ae41bdd5f8d74bdd5b6c. 
Jul 9 09:30:25.381952 kubelet[2827]: E0709 09:30:25.381916 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.381952 kubelet[2827]: W0709 09:30:25.381942 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.382173 kubelet[2827]: E0709 09:30:25.381973 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.389882 containerd[1547]: time="2025-07-09T09:30:25.389776357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fldpr,Uid:0e31df50-115d-4a56-98ca-59725c0bf0ab,Namespace:calico-system,Attempt:0,} returns sandbox id \"e449e5cb25b0d6ca4eb69425928aacdd824c881ae697ae41bdd5f8d74bdd5b6c\"" Jul 9 09:30:25.395304 containerd[1547]: time="2025-07-09T09:30:25.394929479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 9 09:30:25.483660 kubelet[2827]: E0709 09:30:25.483079 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.483660 kubelet[2827]: W0709 09:30:25.483102 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.483660 kubelet[2827]: E0709 09:30:25.483122 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.585779 kubelet[2827]: E0709 09:30:25.585603 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.585779 kubelet[2827]: W0709 09:30:25.585710 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.585779 kubelet[2827]: E0709 09:30:25.585747 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.687786 kubelet[2827]: E0709 09:30:25.687538 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.688903 kubelet[2827]: W0709 09:30:25.688780 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.688903 kubelet[2827]: E0709 09:30:25.688891 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 09:30:25.791486 kubelet[2827]: E0709 09:30:25.791110 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.791486 kubelet[2827]: W0709 09:30:25.791161 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.791486 kubelet[2827]: E0709 09:30:25.791211 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.864272 kubelet[2827]: E0709 09:30:25.863989 2827 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Jul 9 09:30:25.864272 kubelet[2827]: E0709 09:30:25.864295 2827 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f4e0283-e2c6-42e7-8771-6b643f89a262-typha-certs podName:9f4e0283-e2c6-42e7-8771-6b643f89a262 nodeName:}" failed. No retries permitted until 2025-07-09 09:30:26.36418953 +0000 UTC m=+22.035632620 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/9f4e0283-e2c6-42e7-8771-6b643f89a262-typha-certs") pod "calico-typha-566f7d98c4-68zls" (UID: "9f4e0283-e2c6-42e7-8771-6b643f89a262") : failed to sync secret cache: timed out waiting for the condition Jul 9 09:30:25.893699 kubelet[2827]: E0709 09:30:25.893489 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.893699 kubelet[2827]: W0709 09:30:25.893592 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.896952 kubelet[2827]: E0709 09:30:25.894147 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:25.998408 kubelet[2827]: E0709 09:30:25.998203 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:25.998408 kubelet[2827]: W0709 09:30:25.998259 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:25.998408 kubelet[2827]: E0709 09:30:25.998305 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 09:30:26.100053 kubelet[2827]: E0709 09:30:26.099803 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:26.100053 kubelet[2827]: W0709 09:30:26.099856 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:26.100053 kubelet[2827]: E0709 09:30:26.099898 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:26.204070 kubelet[2827]: E0709 09:30:26.203914 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:26.204070 kubelet[2827]: W0709 09:30:26.203981 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:26.205930 kubelet[2827]: E0709 09:30:26.204333 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:26.306584 kubelet[2827]: E0709 09:30:26.306383 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:26.306584 kubelet[2827]: W0709 09:30:26.306435 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:26.306584 kubelet[2827]: E0709 09:30:26.306482 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:26.409022 kubelet[2827]: E0709 09:30:26.408743 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:26.409022 kubelet[2827]: W0709 09:30:26.408840 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:26.409022 kubelet[2827]: E0709 09:30:26.408943 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:26.411123 kubelet[2827]: E0709 09:30:26.411046 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:26.411123 kubelet[2827]: W0709 09:30:26.411088 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:26.411123 kubelet[2827]: E0709 09:30:26.411117 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 09:30:26.411803 kubelet[2827]: E0709 09:30:26.411692 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:26.411803 kubelet[2827]: W0709 09:30:26.411733 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:26.411803 kubelet[2827]: E0709 09:30:26.411800 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:26.412594 kubelet[2827]: E0709 09:30:26.412287 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:26.412594 kubelet[2827]: W0709 09:30:26.412316 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:26.412594 kubelet[2827]: E0709 09:30:26.412388 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:26.413283 kubelet[2827]: E0709 09:30:26.413212 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:26.413283 kubelet[2827]: W0709 09:30:26.413256 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:26.413283 kubelet[2827]: E0709 09:30:26.413283 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 9 09:30:26.429720 kubelet[2827]: E0709 09:30:26.429608 2827 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 9 09:30:26.429720 kubelet[2827]: W0709 09:30:26.429704 2827 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 9 09:30:26.430042 kubelet[2827]: E0709 09:30:26.429749 2827 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 9 09:30:26.502222 containerd[1547]: time="2025-07-09T09:30:26.502103997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-566f7d98c4-68zls,Uid:9f4e0283-e2c6-42e7-8771-6b643f89a262,Namespace:calico-system,Attempt:0,}" Jul 9 09:30:26.549794 kubelet[2827]: E0709 09:30:26.549127 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d74jx" podUID="afe07b1f-0a7b-4bcd-a38a-6e433e4d698a" Jul 9 09:30:26.572354 containerd[1547]: time="2025-07-09T09:30:26.572287320Z" level=info msg="connecting to shim f66395a3fbd0e8db49ce65f0ef7e872cff281cb4f562ef3764992939b4487882" address="unix:///run/containerd/s/4444d0ae307fd528d8c5b4b0f64aca607fa933847cb2e3a793c6acf530cefecd" namespace=k8s.io protocol=ttrpc version=3 Jul 9 09:30:26.616813 systemd[1]: Started cri-containerd-f66395a3fbd0e8db49ce65f0ef7e872cff281cb4f562ef3764992939b4487882.scope - libcontainer container f66395a3fbd0e8db49ce65f0ef7e872cff281cb4f562ef3764992939b4487882. Jul 9 09:30:26.687305 containerd[1547]: time="2025-07-09T09:30:26.687157288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-566f7d98c4-68zls,Uid:9f4e0283-e2c6-42e7-8771-6b643f89a262,Namespace:calico-system,Attempt:0,} returns sandbox id \"f66395a3fbd0e8db49ce65f0ef7e872cff281cb4f562ef3764992939b4487882\"" Jul 9 09:30:27.460933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3084966326.mount: Deactivated successfully. Jul 9 09:30:27.605291 containerd[1547]: time="2025-07-09T09:30:27.605162899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:30:27.606331 containerd[1547]: time="2025-07-09T09:30:27.606059118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5939797" Jul 9 09:30:27.607649 containerd[1547]: time="2025-07-09T09:30:27.607577331Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:30:27.610721 containerd[1547]: time="2025-07-09T09:30:27.610684682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:30:27.611834 containerd[1547]: time="2025-07-09T09:30:27.611794962Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 2.216818425s" Jul 9 09:30:27.612008 containerd[1547]: time="2025-07-09T09:30:27.611945354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 9 09:30:27.613401 containerd[1547]: time="2025-07-09T09:30:27.613367989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 9 09:30:27.617720 
containerd[1547]: time="2025-07-09T09:30:27.617583555Z" level=info msg="CreateContainer within sandbox \"e449e5cb25b0d6ca4eb69425928aacdd824c881ae697ae41bdd5f8d74bdd5b6c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 9 09:30:27.640099 containerd[1547]: time="2025-07-09T09:30:27.640036414Z" level=info msg="Container 80966f80b1752b21818d4f4739a3585f31028ba57fa9c8d024e0a79b145f4c6b: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:30:27.646203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount254149182.mount: Deactivated successfully. Jul 9 09:30:27.658160 containerd[1547]: time="2025-07-09T09:30:27.658117814Z" level=info msg="CreateContainer within sandbox \"e449e5cb25b0d6ca4eb69425928aacdd824c881ae697ae41bdd5f8d74bdd5b6c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"80966f80b1752b21818d4f4739a3585f31028ba57fa9c8d024e0a79b145f4c6b\"" Jul 9 09:30:27.660735 containerd[1547]: time="2025-07-09T09:30:27.660676116Z" level=info msg="StartContainer for \"80966f80b1752b21818d4f4739a3585f31028ba57fa9c8d024e0a79b145f4c6b\"" Jul 9 09:30:27.662803 containerd[1547]: time="2025-07-09T09:30:27.662768175Z" level=info msg="connecting to shim 80966f80b1752b21818d4f4739a3585f31028ba57fa9c8d024e0a79b145f4c6b" address="unix:///run/containerd/s/00975754f33d24a6fc46f482ca20b018cc4ae85b016ab9f27ebf9ddb903ac8f4" protocol=ttrpc version=3 Jul 9 09:30:27.699844 systemd[1]: Started cri-containerd-80966f80b1752b21818d4f4739a3585f31028ba57fa9c8d024e0a79b145f4c6b.scope - libcontainer container 80966f80b1752b21818d4f4739a3585f31028ba57fa9c8d024e0a79b145f4c6b. Jul 9 09:30:27.765478 containerd[1547]: time="2025-07-09T09:30:27.765401609Z" level=info msg="StartContainer for \"80966f80b1752b21818d4f4739a3585f31028ba57fa9c8d024e0a79b145f4c6b\" returns successfully" Jul 9 09:30:27.771736 systemd[1]: cri-containerd-80966f80b1752b21818d4f4739a3585f31028ba57fa9c8d024e0a79b145f4c6b.scope: Deactivated successfully. Jul 9 09:30:27.779536 containerd[1547]: time="2025-07-09T09:30:27.779462097Z" level=info msg="TaskExit event in podsandbox handler container_id:\"80966f80b1752b21818d4f4739a3585f31028ba57fa9c8d024e0a79b145f4c6b\" id:\"80966f80b1752b21818d4f4739a3585f31028ba57fa9c8d024e0a79b145f4c6b\" pid:3452 exited_at:{seconds:1752053427 nanos:777678586}" Jul 9 09:30:27.780094 containerd[1547]: time="2025-07-09T09:30:27.780041892Z" level=info msg="received exit event container_id:\"80966f80b1752b21818d4f4739a3585f31028ba57fa9c8d024e0a79b145f4c6b\" id:\"80966f80b1752b21818d4f4739a3585f31028ba57fa9c8d024e0a79b145f4c6b\" pid:3452 exited_at:{seconds:1752053427 nanos:777678586}" Jul 9 09:30:28.406251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80966f80b1752b21818d4f4739a3585f31028ba57fa9c8d024e0a79b145f4c6b-rootfs.mount: Deactivated successfully. 
Jul 9 09:30:28.540248 kubelet[2827]: E0709 09:30:28.540178 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d74jx" podUID="afe07b1f-0a7b-4bcd-a38a-6e433e4d698a" Jul 9 09:30:30.542218 kubelet[2827]: E0709 09:30:30.542133 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d74jx" podUID="afe07b1f-0a7b-4bcd-a38a-6e433e4d698a" Jul 9 09:30:31.522660 containerd[1547]: time="2025-07-09T09:30:31.522513432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:30:31.525689 containerd[1547]: time="2025-07-09T09:30:31.525649829Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33740523" Jul 9 09:30:31.527475 containerd[1547]: time="2025-07-09T09:30:31.527404666Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:30:31.530510 containerd[1547]: time="2025-07-09T09:30:31.530458988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:30:31.531654 containerd[1547]: time="2025-07-09T09:30:31.531098966Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 3.917605021s" Jul 9 09:30:31.531654 containerd[1547]: time="2025-07-09T09:30:31.531156074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 9 09:30:31.533932 containerd[1547]: time="2025-07-09T09:30:31.533899773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 9 09:30:31.556658 containerd[1547]: time="2025-07-09T09:30:31.556228004Z" level=info msg="CreateContainer within sandbox \"f66395a3fbd0e8db49ce65f0ef7e872cff281cb4f562ef3764992939b4487882\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 9 09:30:31.570748 containerd[1547]: time="2025-07-09T09:30:31.570706328Z" level=info msg="Container e7e307289f97bd178cccfde8f5851ccbb985dc667efd62e257df1bf552da6982: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:30:31.586666 containerd[1547]: time="2025-07-09T09:30:31.586606376Z" level=info msg="CreateContainer within sandbox \"f66395a3fbd0e8db49ce65f0ef7e872cff281cb4f562ef3764992939b4487882\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e7e307289f97bd178cccfde8f5851ccbb985dc667efd62e257df1bf552da6982\"" Jul 9 09:30:31.588905 containerd[1547]: time="2025-07-09T09:30:31.588816115Z" level=info msg="StartContainer for \"e7e307289f97bd178cccfde8f5851ccbb985dc667efd62e257df1bf552da6982\"" Jul 9 09:30:31.591595 
containerd[1547]: time="2025-07-09T09:30:31.591551700Z" level=info msg="connecting to shim e7e307289f97bd178cccfde8f5851ccbb985dc667efd62e257df1bf552da6982" address="unix:///run/containerd/s/4444d0ae307fd528d8c5b4b0f64aca607fa933847cb2e3a793c6acf530cefecd" protocol=ttrpc version=3 Jul 9 09:30:31.620804 systemd[1]: Started cri-containerd-e7e307289f97bd178cccfde8f5851ccbb985dc667efd62e257df1bf552da6982.scope - libcontainer container e7e307289f97bd178cccfde8f5851ccbb985dc667efd62e257df1bf552da6982. Jul 9 09:30:31.688031 containerd[1547]: time="2025-07-09T09:30:31.687970113Z" level=info msg="StartContainer for \"e7e307289f97bd178cccfde8f5851ccbb985dc667efd62e257df1bf552da6982\" returns successfully" Jul 9 09:30:31.795765 kubelet[2827]: I0709 09:30:31.795532 2827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-566f7d98c4-68zls" podStartSLOduration=2.954137092 podStartE2EDuration="7.795493379s" podCreationTimestamp="2025-07-09 09:30:24 +0000 UTC" firstStartedPulling="2025-07-09 09:30:26.691709885 +0000 UTC m=+22.363152934" lastFinishedPulling="2025-07-09 09:30:31.533066182 +0000 UTC m=+27.204509221" observedRunningTime="2025-07-09 09:30:31.793292266 +0000 UTC m=+27.464735315" watchObservedRunningTime="2025-07-09 09:30:31.795493379 +0000 UTC m=+27.466936418" Jul 9 09:30:32.541167 kubelet[2827]: E0709 09:30:32.541011 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d74jx" podUID="afe07b1f-0a7b-4bcd-a38a-6e433e4d698a" Jul 9 09:30:34.546304 kubelet[2827]: E0709 09:30:34.544478 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d74jx" podUID="afe07b1f-0a7b-4bcd-a38a-6e433e4d698a" Jul 9 09:30:36.541255 kubelet[2827]: E0709 09:30:36.541175 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d74jx" podUID="afe07b1f-0a7b-4bcd-a38a-6e433e4d698a" Jul 9 09:30:36.745570 containerd[1547]: time="2025-07-09T09:30:36.745517756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:30:36.747817 containerd[1547]: time="2025-07-09T09:30:36.747749688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 9 09:30:36.749667 containerd[1547]: time="2025-07-09T09:30:36.749525285Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:30:36.752180 containerd[1547]: time="2025-07-09T09:30:36.752133381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:30:36.753341 containerd[1547]: time="2025-07-09T09:30:36.753110742Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" 
with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 5.219170113s" Jul 9 09:30:36.753341 containerd[1547]: time="2025-07-09T09:30:36.753152511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 9 09:30:36.757262 containerd[1547]: time="2025-07-09T09:30:36.757213640Z" level=info msg="CreateContainer within sandbox \"e449e5cb25b0d6ca4eb69425928aacdd824c881ae697ae41bdd5f8d74bdd5b6c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 9 09:30:36.773552 containerd[1547]: time="2025-07-09T09:30:36.773134882Z" level=info msg="Container 21d770b254a5e2c2b87467da5ee1ef38d4d2bd5ddaafbc402bfb260d6ff1f910: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:30:36.792896 containerd[1547]: time="2025-07-09T09:30:36.792706112Z" level=info msg="CreateContainer within sandbox \"e449e5cb25b0d6ca4eb69425928aacdd824c881ae697ae41bdd5f8d74bdd5b6c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"21d770b254a5e2c2b87467da5ee1ef38d4d2bd5ddaafbc402bfb260d6ff1f910\"" Jul 9 09:30:36.793938 containerd[1547]: time="2025-07-09T09:30:36.793799329Z" level=info msg="StartContainer for \"21d770b254a5e2c2b87467da5ee1ef38d4d2bd5ddaafbc402bfb260d6ff1f910\"" Jul 9 09:30:36.796610 containerd[1547]: time="2025-07-09T09:30:36.796549943Z" level=info msg="connecting to shim 21d770b254a5e2c2b87467da5ee1ef38d4d2bd5ddaafbc402bfb260d6ff1f910" address="unix:///run/containerd/s/00975754f33d24a6fc46f482ca20b018cc4ae85b016ab9f27ebf9ddb903ac8f4" protocol=ttrpc version=3 Jul 9 09:30:36.825968 systemd[1]: Started cri-containerd-21d770b254a5e2c2b87467da5ee1ef38d4d2bd5ddaafbc402bfb260d6ff1f910.scope - libcontainer container 21d770b254a5e2c2b87467da5ee1ef38d4d2bd5ddaafbc402bfb260d6ff1f910. Jul 9 09:30:36.887090 containerd[1547]: time="2025-07-09T09:30:36.886937980Z" level=info msg="StartContainer for \"21d770b254a5e2c2b87467da5ee1ef38d4d2bd5ddaafbc402bfb260d6ff1f910\" returns successfully" Jul 9 09:30:38.541273 kubelet[2827]: E0709 09:30:38.540540 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d74jx" podUID="afe07b1f-0a7b-4bcd-a38a-6e433e4d698a" Jul 9 09:30:39.052342 systemd[1]: cri-containerd-21d770b254a5e2c2b87467da5ee1ef38d4d2bd5ddaafbc402bfb260d6ff1f910.scope: Deactivated successfully. Jul 9 09:30:39.053568 systemd[1]: cri-containerd-21d770b254a5e2c2b87467da5ee1ef38d4d2bd5ddaafbc402bfb260d6ff1f910.scope: Consumed 1.043s CPU time, 192.2M memory peak, 171.2M written to disk. 
Jul 9 09:30:39.058967 containerd[1547]: time="2025-07-09T09:30:39.058541786Z" level=info msg="received exit event container_id:\"21d770b254a5e2c2b87467da5ee1ef38d4d2bd5ddaafbc402bfb260d6ff1f910\" id:\"21d770b254a5e2c2b87467da5ee1ef38d4d2bd5ddaafbc402bfb260d6ff1f910\" pid:3555 exited_at:{seconds:1752053439 nanos:57951731}" Jul 9 09:30:39.059858 containerd[1547]: time="2025-07-09T09:30:39.059581214Z" level=info msg="TaskExit event in podsandbox handler container_id:\"21d770b254a5e2c2b87467da5ee1ef38d4d2bd5ddaafbc402bfb260d6ff1f910\" id:\"21d770b254a5e2c2b87467da5ee1ef38d4d2bd5ddaafbc402bfb260d6ff1f910\" pid:3555 exited_at:{seconds:1752053439 nanos:57951731}" Jul 9 09:30:39.097660 kubelet[2827]: I0709 09:30:39.096233 2827 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 9 09:30:39.101875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21d770b254a5e2c2b87467da5ee1ef38d4d2bd5ddaafbc402bfb260d6ff1f910-rootfs.mount: Deactivated successfully. Jul 9 09:30:39.709268 systemd[1]: Created slice kubepods-burstable-pod82ad1ad0_e4e6_41c2_8105_2c99716c1c36.slice - libcontainer container kubepods-burstable-pod82ad1ad0_e4e6_41c2_8105_2c99716c1c36.slice. Jul 9 09:30:39.821848 kubelet[2827]: I0709 09:30:39.821757 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82ad1ad0-e4e6-41c2-8105-2c99716c1c36-config-volume\") pod \"coredns-7c65d6cfc9-d256k\" (UID: \"82ad1ad0-e4e6-41c2-8105-2c99716c1c36\") " pod="kube-system/coredns-7c65d6cfc9-d256k" Jul 9 09:30:39.822977 kubelet[2827]: I0709 09:30:39.822447 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s42w7\" (UniqueName: \"kubernetes.io/projected/82ad1ad0-e4e6-41c2-8105-2c99716c1c36-kube-api-access-s42w7\") pod \"coredns-7c65d6cfc9-d256k\" (UID: \"82ad1ad0-e4e6-41c2-8105-2c99716c1c36\") " pod="kube-system/coredns-7c65d6cfc9-d256k" Jul 9 09:30:39.976280 systemd[1]: Created slice kubepods-burstable-pod946fed98_1138_4414_9da3_e7f45e864152.slice - libcontainer container kubepods-burstable-pod946fed98_1138_4414_9da3_e7f45e864152.slice. Jul 9 09:30:40.003793 systemd[1]: Created slice kubepods-besteffort-podc9dcd67d_8d2e_47ce_a04c_7e258ad570df.slice - libcontainer container kubepods-besteffort-podc9dcd67d_8d2e_47ce_a04c_7e258ad570df.slice. 
Jul 9 09:30:40.024071 kubelet[2827]: I0709 09:30:40.023977 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44qqc\" (UniqueName: \"kubernetes.io/projected/21398084-829e-40b7-b70d-f8032cdbe616-kube-api-access-44qqc\") pod \"calico-kube-controllers-78df7cdc87-6h4ss\" (UID: \"21398084-829e-40b7-b70d-f8032cdbe616\") " pod="calico-system/calico-kube-controllers-78df7cdc87-6h4ss" Jul 9 09:30:40.028508 kubelet[2827]: I0709 09:30:40.024090 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c9dcd67d-8d2e-47ce-a04c-7e258ad570df-calico-apiserver-certs\") pod \"calico-apiserver-ff887dcfb-r29nf\" (UID: \"c9dcd67d-8d2e-47ce-a04c-7e258ad570df\") " pod="calico-apiserver/calico-apiserver-ff887dcfb-r29nf" Jul 9 09:30:40.028508 kubelet[2827]: I0709 09:30:40.024153 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqb44\" (UniqueName: \"kubernetes.io/projected/c9dcd67d-8d2e-47ce-a04c-7e258ad570df-kube-api-access-vqb44\") pod \"calico-apiserver-ff887dcfb-r29nf\" (UID: \"c9dcd67d-8d2e-47ce-a04c-7e258ad570df\") " pod="calico-apiserver/calico-apiserver-ff887dcfb-r29nf" Jul 9 09:30:40.028508 kubelet[2827]: I0709 09:30:40.024329 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/946fed98-1138-4414-9da3-e7f45e864152-config-volume\") pod \"coredns-7c65d6cfc9-l5wr7\" (UID: \"946fed98-1138-4414-9da3-e7f45e864152\") " pod="kube-system/coredns-7c65d6cfc9-l5wr7" Jul 9 09:30:40.028508 kubelet[2827]: I0709 09:30:40.024520 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21398084-829e-40b7-b70d-f8032cdbe616-tigera-ca-bundle\") pod \"calico-kube-controllers-78df7cdc87-6h4ss\" (UID: \"21398084-829e-40b7-b70d-f8032cdbe616\") " pod="calico-system/calico-kube-controllers-78df7cdc87-6h4ss" Jul 9 09:30:40.028508 kubelet[2827]: I0709 09:30:40.024564 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52nkn\" (UniqueName: \"kubernetes.io/projected/946fed98-1138-4414-9da3-e7f45e864152-kube-api-access-52nkn\") pod \"coredns-7c65d6cfc9-l5wr7\" (UID: \"946fed98-1138-4414-9da3-e7f45e864152\") " pod="kube-system/coredns-7c65d6cfc9-l5wr7" Jul 9 09:30:40.038044 systemd[1]: Created slice kubepods-besteffort-pod21398084_829e_40b7_b70d_f8032cdbe616.slice - libcontainer container kubepods-besteffort-pod21398084_829e_40b7_b70d_f8032cdbe616.slice. Jul 9 09:30:40.050783 systemd[1]: Created slice kubepods-besteffort-pod9a052ab5_c109_40e4_93a9_186db270b9e9.slice - libcontainer container kubepods-besteffort-pod9a052ab5_c109_40e4_93a9_186db270b9e9.slice. Jul 9 09:30:40.060029 systemd[1]: Created slice kubepods-besteffort-pod2597fcec_b8aa_4454_b726_837937b88838.slice - libcontainer container kubepods-besteffort-pod2597fcec_b8aa_4454_b726_837937b88838.slice. Jul 9 09:30:40.074043 systemd[1]: Created slice kubepods-besteffort-podfccb8e22_6aea_43af_a47d_ce57c3bb175e.slice - libcontainer container kubepods-besteffort-podfccb8e22_6aea_43af_a47d_ce57c3bb175e.slice. 
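The kubepods-*.slice units created above follow a simple naming pattern: the pod's QoS class plus its UID, with the UID's dashes escaped to underscores because systemd reserves "-" as the hierarchy separator in slice names. A small Go sketch of that mapping, offered as an illustration of the pattern visible in this log (for the burstable and besteffort pods shown here) rather than as kubelet source:

```go
package main

import (
	"fmt"
	"strings"
)

// sliceName reconstructs the pattern seen in the systemd messages above:
// kubepods-<qos>-pod<uid with dashes escaped>.slice.
func sliceName(qosClass, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	// Prints kubepods-burstable-pod82ad1ad0_e4e6_41c2_8105_2c99716c1c36.slice,
	// matching the unit systemd created for coredns-7c65d6cfc9-d256k above.
	fmt.Println(sliceName("burstable", "82ad1ad0-e4e6-41c2-8105-2c99716c1c36"))
}
```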
Jul 9 09:30:40.127667 kubelet[2827]: I0709 09:30:40.125726 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9a052ab5-c109-40e4-93a9-186db270b9e9-goldmane-key-pair\") pod \"goldmane-58fd7646b9-st25r\" (UID: \"9a052ab5-c109-40e4-93a9-186db270b9e9\") " pod="calico-system/goldmane-58fd7646b9-st25r" Jul 9 09:30:40.127667 kubelet[2827]: I0709 09:30:40.125786 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2597fcec-b8aa-4454-b726-837937b88838-calico-apiserver-certs\") pod \"calico-apiserver-ff887dcfb-677fz\" (UID: \"2597fcec-b8aa-4454-b726-837937b88838\") " pod="calico-apiserver/calico-apiserver-ff887dcfb-677fz" Jul 9 09:30:40.127667 kubelet[2827]: I0709 09:30:40.125845 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a052ab5-c109-40e4-93a9-186db270b9e9-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-st25r\" (UID: \"9a052ab5-c109-40e4-93a9-186db270b9e9\") " pod="calico-system/goldmane-58fd7646b9-st25r" Jul 9 09:30:40.127667 kubelet[2827]: I0709 09:30:40.125880 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fccb8e22-6aea-43af-a47d-ce57c3bb175e-whisker-ca-bundle\") pod \"whisker-5589b96bf7-qx8nq\" (UID: \"fccb8e22-6aea-43af-a47d-ce57c3bb175e\") " pod="calico-system/whisker-5589b96bf7-qx8nq" Jul 9 09:30:40.127667 kubelet[2827]: I0709 09:30:40.125902 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fccb8e22-6aea-43af-a47d-ce57c3bb175e-whisker-backend-key-pair\") pod \"whisker-5589b96bf7-qx8nq\" (UID: \"fccb8e22-6aea-43af-a47d-ce57c3bb175e\") " pod="calico-system/whisker-5589b96bf7-qx8nq" Jul 9 09:30:40.128086 kubelet[2827]: I0709 09:30:40.125935 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5mtf\" (UniqueName: \"kubernetes.io/projected/fccb8e22-6aea-43af-a47d-ce57c3bb175e-kube-api-access-h5mtf\") pod \"whisker-5589b96bf7-qx8nq\" (UID: \"fccb8e22-6aea-43af-a47d-ce57c3bb175e\") " pod="calico-system/whisker-5589b96bf7-qx8nq" Jul 9 09:30:40.128086 kubelet[2827]: I0709 09:30:40.125960 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5lkl\" (UniqueName: \"kubernetes.io/projected/2597fcec-b8aa-4454-b726-837937b88838-kube-api-access-v5lkl\") pod \"calico-apiserver-ff887dcfb-677fz\" (UID: \"2597fcec-b8aa-4454-b726-837937b88838\") " pod="calico-apiserver/calico-apiserver-ff887dcfb-677fz" Jul 9 09:30:40.128086 kubelet[2827]: I0709 09:30:40.126066 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a052ab5-c109-40e4-93a9-186db270b9e9-config\") pod \"goldmane-58fd7646b9-st25r\" (UID: \"9a052ab5-c109-40e4-93a9-186db270b9e9\") " pod="calico-system/goldmane-58fd7646b9-st25r" Jul 9 09:30:40.128086 kubelet[2827]: I0709 09:30:40.126098 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpjxf\" (UniqueName: 
\"kubernetes.io/projected/9a052ab5-c109-40e4-93a9-186db270b9e9-kube-api-access-fpjxf\") pod \"goldmane-58fd7646b9-st25r\" (UID: \"9a052ab5-c109-40e4-93a9-186db270b9e9\") " pod="calico-system/goldmane-58fd7646b9-st25r" Jul 9 09:30:40.297971 containerd[1547]: time="2025-07-09T09:30:40.297905411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-l5wr7,Uid:946fed98-1138-4414-9da3-e7f45e864152,Namespace:kube-system,Attempt:0,}" Jul 9 09:30:40.319760 containerd[1547]: time="2025-07-09T09:30:40.319152976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d256k,Uid:82ad1ad0-e4e6-41c2-8105-2c99716c1c36,Namespace:kube-system,Attempt:0,}" Jul 9 09:30:40.320377 containerd[1547]: time="2025-07-09T09:30:40.320339650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff887dcfb-r29nf,Uid:c9dcd67d-8d2e-47ce-a04c-7e258ad570df,Namespace:calico-apiserver,Attempt:0,}" Jul 9 09:30:40.361350 containerd[1547]: time="2025-07-09T09:30:40.361279113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-st25r,Uid:9a052ab5-c109-40e4-93a9-186db270b9e9,Namespace:calico-system,Attempt:0,}" Jul 9 09:30:40.373502 containerd[1547]: time="2025-07-09T09:30:40.373100003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78df7cdc87-6h4ss,Uid:21398084-829e-40b7-b70d-f8032cdbe616,Namespace:calico-system,Attempt:0,}" Jul 9 09:30:40.376367 containerd[1547]: time="2025-07-09T09:30:40.376315469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff887dcfb-677fz,Uid:2597fcec-b8aa-4454-b726-837937b88838,Namespace:calico-apiserver,Attempt:0,}" Jul 9 09:30:40.379292 containerd[1547]: time="2025-07-09T09:30:40.379230511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5589b96bf7-qx8nq,Uid:fccb8e22-6aea-43af-a47d-ce57c3bb175e,Namespace:calico-system,Attempt:0,}" Jul 9 09:30:40.430692 containerd[1547]: time="2025-07-09T09:30:40.430585441Z" level=error msg="Failed to destroy network for sandbox \"396bcd32af2f5d9f8fc250430759f958c4901bed0c9e39352ea59e250c7c3d25\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:40.440917 containerd[1547]: time="2025-07-09T09:30:40.440326624Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-l5wr7,Uid:946fed98-1138-4414-9da3-e7f45e864152,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"396bcd32af2f5d9f8fc250430759f958c4901bed0c9e39352ea59e250c7c3d25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:40.441142 kubelet[2827]: E0709 09:30:40.440779 2827 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"396bcd32af2f5d9f8fc250430759f958c4901bed0c9e39352ea59e250c7c3d25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:40.441142 kubelet[2827]: E0709 09:30:40.440942 2827 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"396bcd32af2f5d9f8fc250430759f958c4901bed0c9e39352ea59e250c7c3d25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-l5wr7" Jul 9 09:30:40.441142 kubelet[2827]: E0709 09:30:40.441005 2827 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"396bcd32af2f5d9f8fc250430759f958c4901bed0c9e39352ea59e250c7c3d25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-l5wr7" Jul 9 09:30:40.441466 kubelet[2827]: E0709 09:30:40.441368 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-l5wr7_kube-system(946fed98-1138-4414-9da3-e7f45e864152)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-l5wr7_kube-system(946fed98-1138-4414-9da3-e7f45e864152)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"396bcd32af2f5d9f8fc250430759f958c4901bed0c9e39352ea59e250c7c3d25\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-l5wr7" podUID="946fed98-1138-4414-9da3-e7f45e864152" Jul 9 09:30:40.550473 systemd[1]: Created slice kubepods-besteffort-podafe07b1f_0a7b_4bcd_a38a_6e433e4d698a.slice - libcontainer container kubepods-besteffort-podafe07b1f_0a7b_4bcd_a38a_6e433e4d698a.slice. 
Jul 9 09:30:40.555669 containerd[1547]: time="2025-07-09T09:30:40.555599335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d74jx,Uid:afe07b1f-0a7b-4bcd-a38a-6e433e4d698a,Namespace:calico-system,Attempt:0,}" Jul 9 09:30:40.559869 containerd[1547]: time="2025-07-09T09:30:40.559805176Z" level=error msg="Failed to destroy network for sandbox \"9160548051a63ac466710a20dd68d3d5c81ed9983140b69e4752c73d0de1fd80\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:40.564781 containerd[1547]: time="2025-07-09T09:30:40.564356666Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d256k,Uid:82ad1ad0-e4e6-41c2-8105-2c99716c1c36,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9160548051a63ac466710a20dd68d3d5c81ed9983140b69e4752c73d0de1fd80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:40.566017 kubelet[2827]: E0709 09:30:40.565182 2827 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9160548051a63ac466710a20dd68d3d5c81ed9983140b69e4752c73d0de1fd80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:40.566017 kubelet[2827]: E0709 09:30:40.565259 2827 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9160548051a63ac466710a20dd68d3d5c81ed9983140b69e4752c73d0de1fd80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-d256k" Jul 9 09:30:40.566017 kubelet[2827]: E0709 09:30:40.565286 2827 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9160548051a63ac466710a20dd68d3d5c81ed9983140b69e4752c73d0de1fd80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-d256k" Jul 9 09:30:40.566182 kubelet[2827]: E0709 09:30:40.565340 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-d256k_kube-system(82ad1ad0-e4e6-41c2-8105-2c99716c1c36)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-d256k_kube-system(82ad1ad0-e4e6-41c2-8105-2c99716c1c36)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9160548051a63ac466710a20dd68d3d5c81ed9983140b69e4752c73d0de1fd80\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-d256k" podUID="82ad1ad0-e4e6-41c2-8105-2c99716c1c36" Jul 9 09:30:40.567845 containerd[1547]: time="2025-07-09T09:30:40.567762116Z" level=error msg="Failed to destroy network for sandbox 
\"38d8eb8f9bc94e07395c061e0fba41ab92b0b51786ef54a24d8198a63939aca7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:40.570480 containerd[1547]: time="2025-07-09T09:30:40.570378669Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff887dcfb-677fz,Uid:2597fcec-b8aa-4454-b726-837937b88838,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"38d8eb8f9bc94e07395c061e0fba41ab92b0b51786ef54a24d8198a63939aca7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:40.571410 kubelet[2827]: E0709 09:30:40.571196 2827 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38d8eb8f9bc94e07395c061e0fba41ab92b0b51786ef54a24d8198a63939aca7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:40.572239 kubelet[2827]: E0709 09:30:40.571534 2827 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38d8eb8f9bc94e07395c061e0fba41ab92b0b51786ef54a24d8198a63939aca7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff887dcfb-677fz" Jul 9 09:30:40.572239 kubelet[2827]: E0709 09:30:40.571805 2827 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38d8eb8f9bc94e07395c061e0fba41ab92b0b51786ef54a24d8198a63939aca7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff887dcfb-677fz" Jul 9 09:30:40.572636 kubelet[2827]: E0709 09:30:40.572337 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ff887dcfb-677fz_calico-apiserver(2597fcec-b8aa-4454-b726-837937b88838)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ff887dcfb-677fz_calico-apiserver(2597fcec-b8aa-4454-b726-837937b88838)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38d8eb8f9bc94e07395c061e0fba41ab92b0b51786ef54a24d8198a63939aca7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ff887dcfb-677fz" podUID="2597fcec-b8aa-4454-b726-837937b88838" Jul 9 09:30:40.572981 containerd[1547]: time="2025-07-09T09:30:40.572831305Z" level=error msg="Failed to destroy network for sandbox \"adfede6a0e98f7c458e2a8cbdd497c624a31fc2fd6d6f78404b76f5e5054165e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:40.576003 
containerd[1547]: time="2025-07-09T09:30:40.575933578Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff887dcfb-r29nf,Uid:c9dcd67d-8d2e-47ce-a04c-7e258ad570df,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"adfede6a0e98f7c458e2a8cbdd497c624a31fc2fd6d6f78404b76f5e5054165e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:40.576755 kubelet[2827]: E0709 09:30:40.576603 2827 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adfede6a0e98f7c458e2a8cbdd497c624a31fc2fd6d6f78404b76f5e5054165e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:40.577441 kubelet[2827]: E0709 09:30:40.577086 2827 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adfede6a0e98f7c458e2a8cbdd497c624a31fc2fd6d6f78404b76f5e5054165e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff887dcfb-r29nf" Jul 9 09:30:40.577441 kubelet[2827]: E0709 09:30:40.577126 2827 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adfede6a0e98f7c458e2a8cbdd497c624a31fc2fd6d6f78404b76f5e5054165e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff887dcfb-r29nf" Jul 9 09:30:40.577839 kubelet[2827]: E0709 09:30:40.577438 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ff887dcfb-r29nf_calico-apiserver(c9dcd67d-8d2e-47ce-a04c-7e258ad570df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ff887dcfb-r29nf_calico-apiserver(c9dcd67d-8d2e-47ce-a04c-7e258ad570df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"adfede6a0e98f7c458e2a8cbdd497c624a31fc2fd6d6f78404b76f5e5054165e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ff887dcfb-r29nf" podUID="c9dcd67d-8d2e-47ce-a04c-7e258ad570df" Jul 9 09:30:40.619135 containerd[1547]: time="2025-07-09T09:30:40.618966084Z" level=error msg="Failed to destroy network for sandbox \"5a306b6d78ba7d39b7b2cbd66a3e9a0219ad77391277b0ba60d9ed4d2bc5ee97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:40.619959 containerd[1547]: time="2025-07-09T09:30:40.619836926Z" level=error msg="Failed to destroy network for sandbox \"e960b6e4c324397be0d68d325d54449ba25c07afb200bf45efeea0f67e2ae252\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:40.622890 containerd[1547]: time="2025-07-09T09:30:40.622827450Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5589b96bf7-qx8nq,Uid:fccb8e22-6aea-43af-a47d-ce57c3bb175e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a306b6d78ba7d39b7b2cbd66a3e9a0219ad77391277b0ba60d9ed4d2bc5ee97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:40.624238 kubelet[2827]: E0709 09:30:40.624166 2827 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a306b6d78ba7d39b7b2cbd66a3e9a0219ad77391277b0ba60d9ed4d2bc5ee97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:40.624399 kubelet[2827]: E0709 09:30:40.624272 2827 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a306b6d78ba7d39b7b2cbd66a3e9a0219ad77391277b0ba60d9ed4d2bc5ee97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5589b96bf7-qx8nq" Jul 9 09:30:40.624399 kubelet[2827]: E0709 09:30:40.624296 2827 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a306b6d78ba7d39b7b2cbd66a3e9a0219ad77391277b0ba60d9ed4d2bc5ee97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5589b96bf7-qx8nq" Jul 9 09:30:40.624399 kubelet[2827]: E0709 09:30:40.624354 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5589b96bf7-qx8nq_calico-system(fccb8e22-6aea-43af-a47d-ce57c3bb175e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5589b96bf7-qx8nq_calico-system(fccb8e22-6aea-43af-a47d-ce57c3bb175e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a306b6d78ba7d39b7b2cbd66a3e9a0219ad77391277b0ba60d9ed4d2bc5ee97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5589b96bf7-qx8nq" podUID="fccb8e22-6aea-43af-a47d-ce57c3bb175e" Jul 9 09:30:40.627475 containerd[1547]: time="2025-07-09T09:30:40.627386492Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-st25r,Uid:9a052ab5-c109-40e4-93a9-186db270b9e9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e960b6e4c324397be0d68d325d54449ba25c07afb200bf45efeea0f67e2ae252\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:40.628201 kubelet[2827]: E0709 09:30:40.627717 2827 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e960b6e4c324397be0d68d325d54449ba25c07afb200bf45efeea0f67e2ae252\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:40.628201 kubelet[2827]: E0709 09:30:40.627789 2827 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e960b6e4c324397be0d68d325d54449ba25c07afb200bf45efeea0f67e2ae252\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-st25r" Jul 9 09:30:40.628201 kubelet[2827]: E0709 09:30:40.627812 2827 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e960b6e4c324397be0d68d325d54449ba25c07afb200bf45efeea0f67e2ae252\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-st25r" Jul 9 09:30:40.628390 kubelet[2827]: E0709 09:30:40.628005 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-st25r_calico-system(9a052ab5-c109-40e4-93a9-186db270b9e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-st25r_calico-system(9a052ab5-c109-40e4-93a9-186db270b9e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e960b6e4c324397be0d68d325d54449ba25c07afb200bf45efeea0f67e2ae252\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-st25r" podUID="9a052ab5-c109-40e4-93a9-186db270b9e9" Jul 9 09:30:40.640269 containerd[1547]: time="2025-07-09T09:30:40.640166811Z" level=error msg="Failed to destroy network for sandbox \"5e0292c0d12436d758baec77cd204fe0c15f023c0ef7ddfdb4696f4e20325098\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:40.642957 containerd[1547]: time="2025-07-09T09:30:40.642870807Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78df7cdc87-6h4ss,Uid:21398084-829e-40b7-b70d-f8032cdbe616,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e0292c0d12436d758baec77cd204fe0c15f023c0ef7ddfdb4696f4e20325098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:40.643417 kubelet[2827]: E0709 09:30:40.643379 2827 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e0292c0d12436d758baec77cd204fe0c15f023c0ef7ddfdb4696f4e20325098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 9 09:30:40.643698 kubelet[2827]: E0709 09:30:40.643583 2827 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e0292c0d12436d758baec77cd204fe0c15f023c0ef7ddfdb4696f4e20325098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78df7cdc87-6h4ss" Jul 9 09:30:40.643698 kubelet[2827]: E0709 09:30:40.643664 2827 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e0292c0d12436d758baec77cd204fe0c15f023c0ef7ddfdb4696f4e20325098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78df7cdc87-6h4ss" Jul 9 09:30:40.643982 kubelet[2827]: E0709 09:30:40.643863 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78df7cdc87-6h4ss_calico-system(21398084-829e-40b7-b70d-f8032cdbe616)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78df7cdc87-6h4ss_calico-system(21398084-829e-40b7-b70d-f8032cdbe616)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e0292c0d12436d758baec77cd204fe0c15f023c0ef7ddfdb4696f4e20325098\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78df7cdc87-6h4ss" podUID="21398084-829e-40b7-b70d-f8032cdbe616" Jul 9 09:30:40.662590 containerd[1547]: time="2025-07-09T09:30:40.662515698Z" level=error msg="Failed to destroy network for sandbox \"b1aceb1e2bee1fa4ea31cd79e101c7817e4713030a6471a71d8ce678cd614a33\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:40.665573 containerd[1547]: time="2025-07-09T09:30:40.665528524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d74jx,Uid:afe07b1f-0a7b-4bcd-a38a-6e433e4d698a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1aceb1e2bee1fa4ea31cd79e101c7817e4713030a6471a71d8ce678cd614a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:40.666055 kubelet[2827]: E0709 09:30:40.666002 2827 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1aceb1e2bee1fa4ea31cd79e101c7817e4713030a6471a71d8ce678cd614a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:40.666157 kubelet[2827]: E0709 09:30:40.666081 2827 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b1aceb1e2bee1fa4ea31cd79e101c7817e4713030a6471a71d8ce678cd614a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d74jx" Jul 9 09:30:40.666157 kubelet[2827]: E0709 09:30:40.666105 2827 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1aceb1e2bee1fa4ea31cd79e101c7817e4713030a6471a71d8ce678cd614a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d74jx" Jul 9 09:30:40.666219 kubelet[2827]: E0709 09:30:40.666170 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d74jx_calico-system(afe07b1f-0a7b-4bcd-a38a-6e433e4d698a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d74jx_calico-system(afe07b1f-0a7b-4bcd-a38a-6e433e4d698a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b1aceb1e2bee1fa4ea31cd79e101c7817e4713030a6471a71d8ce678cd614a33\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d74jx" podUID="afe07b1f-0a7b-4bcd-a38a-6e433e4d698a" Jul 9 09:30:40.832270 containerd[1547]: time="2025-07-09T09:30:40.831873796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 9 09:30:51.542930 containerd[1547]: time="2025-07-09T09:30:51.542767458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-l5wr7,Uid:946fed98-1138-4414-9da3-e7f45e864152,Namespace:kube-system,Attempt:0,}" Jul 9 09:30:51.544924 containerd[1547]: time="2025-07-09T09:30:51.542769912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5589b96bf7-qx8nq,Uid:fccb8e22-6aea-43af-a47d-ce57c3bb175e,Namespace:calico-system,Attempt:0,}" Jul 9 09:30:51.545134 containerd[1547]: time="2025-07-09T09:30:51.543092577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-st25r,Uid:9a052ab5-c109-40e4-93a9-186db270b9e9,Namespace:calico-system,Attempt:0,}" Jul 9 09:30:51.848731 containerd[1547]: time="2025-07-09T09:30:51.847919039Z" level=error msg="Failed to destroy network for sandbox \"d96996c1a6a3e44e208d58c1a88815241ca33dde07b7c1671bfa504321158974\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:51.853390 systemd[1]: run-netns-cni\x2d59a6e1b4\x2db5ff\x2d6af8\x2d7f7c\x2d5d44d6143a84.mount: Deactivated successfully. 
Jul 9 09:30:51.858867 containerd[1547]: time="2025-07-09T09:30:51.858768113Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5589b96bf7-qx8nq,Uid:fccb8e22-6aea-43af-a47d-ce57c3bb175e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d96996c1a6a3e44e208d58c1a88815241ca33dde07b7c1671bfa504321158974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:51.859732 kubelet[2827]: E0709 09:30:51.859153 2827 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d96996c1a6a3e44e208d58c1a88815241ca33dde07b7c1671bfa504321158974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:51.859732 kubelet[2827]: E0709 09:30:51.859293 2827 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d96996c1a6a3e44e208d58c1a88815241ca33dde07b7c1671bfa504321158974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5589b96bf7-qx8nq" Jul 9 09:30:51.859732 kubelet[2827]: E0709 09:30:51.859358 2827 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d96996c1a6a3e44e208d58c1a88815241ca33dde07b7c1671bfa504321158974\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5589b96bf7-qx8nq" Jul 9 09:30:51.861520 kubelet[2827]: E0709 09:30:51.861311 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5589b96bf7-qx8nq_calico-system(fccb8e22-6aea-43af-a47d-ce57c3bb175e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5589b96bf7-qx8nq_calico-system(fccb8e22-6aea-43af-a47d-ce57c3bb175e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d96996c1a6a3e44e208d58c1a88815241ca33dde07b7c1671bfa504321158974\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5589b96bf7-qx8nq" podUID="fccb8e22-6aea-43af-a47d-ce57c3bb175e" Jul 9 09:30:51.888003 containerd[1547]: time="2025-07-09T09:30:51.887923786Z" level=error msg="Failed to destroy network for sandbox \"c5ccd4051c9608c093a562d6c08b795ab8a8f18be0979ea49743cd929fd62732\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:51.892175 systemd[1]: run-netns-cni\x2d5c502587\x2dc9e8\x2d470c\x2d4ccf\x2d58bdb0895207.mount: Deactivated successfully. 
Jul 9 09:30:51.895717 containerd[1547]: time="2025-07-09T09:30:51.895602146Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-st25r,Uid:9a052ab5-c109-40e4-93a9-186db270b9e9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5ccd4051c9608c093a562d6c08b795ab8a8f18be0979ea49743cd929fd62732\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:51.896172 kubelet[2827]: E0709 09:30:51.896091 2827 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5ccd4051c9608c093a562d6c08b795ab8a8f18be0979ea49743cd929fd62732\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:51.896256 kubelet[2827]: E0709 09:30:51.896167 2827 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5ccd4051c9608c093a562d6c08b795ab8a8f18be0979ea49743cd929fd62732\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-st25r" Jul 9 09:30:51.896256 kubelet[2827]: E0709 09:30:51.896199 2827 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5ccd4051c9608c093a562d6c08b795ab8a8f18be0979ea49743cd929fd62732\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-st25r" Jul 9 09:30:51.896386 kubelet[2827]: E0709 09:30:51.896250 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-st25r_calico-system(9a052ab5-c109-40e4-93a9-186db270b9e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-st25r_calico-system(9a052ab5-c109-40e4-93a9-186db270b9e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5ccd4051c9608c093a562d6c08b795ab8a8f18be0979ea49743cd929fd62732\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-st25r" podUID="9a052ab5-c109-40e4-93a9-186db270b9e9" Jul 9 09:30:51.906658 containerd[1547]: time="2025-07-09T09:30:51.906565644Z" level=error msg="Failed to destroy network for sandbox \"3818d1fe1d729849fc7ecb0fd3882c8e219e5456f2f9393aa257a4af28b96099\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:51.909374 containerd[1547]: time="2025-07-09T09:30:51.909298185Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-l5wr7,Uid:946fed98-1138-4414-9da3-e7f45e864152,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3818d1fe1d729849fc7ecb0fd3882c8e219e5456f2f9393aa257a4af28b96099\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:51.909678 kubelet[2827]: E0709 09:30:51.909591 2827 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3818d1fe1d729849fc7ecb0fd3882c8e219e5456f2f9393aa257a4af28b96099\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:51.909802 kubelet[2827]: E0709 09:30:51.909708 2827 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3818d1fe1d729849fc7ecb0fd3882c8e219e5456f2f9393aa257a4af28b96099\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-l5wr7" Jul 9 09:30:51.909802 kubelet[2827]: E0709 09:30:51.909734 2827 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3818d1fe1d729849fc7ecb0fd3882c8e219e5456f2f9393aa257a4af28b96099\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-l5wr7" Jul 9 09:30:51.909947 kubelet[2827]: E0709 09:30:51.909812 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-l5wr7_kube-system(946fed98-1138-4414-9da3-e7f45e864152)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-l5wr7_kube-system(946fed98-1138-4414-9da3-e7f45e864152)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3818d1fe1d729849fc7ecb0fd3882c8e219e5456f2f9393aa257a4af28b96099\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-l5wr7" podUID="946fed98-1138-4414-9da3-e7f45e864152" Jul 9 09:30:52.544249 containerd[1547]: time="2025-07-09T09:30:52.543164454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d256k,Uid:82ad1ad0-e4e6-41c2-8105-2c99716c1c36,Namespace:kube-system,Attempt:0,}" Jul 9 09:30:52.627356 systemd[1]: run-netns-cni\x2d5015a5be\x2dbcef\x2d8c4a\x2dcb3b\x2da7047e5eed59.mount: Deactivated successfully. Jul 9 09:30:52.699655 containerd[1547]: time="2025-07-09T09:30:52.697760942Z" level=error msg="Failed to destroy network for sandbox \"6fdc9379010529c79e25a9d26096253771f4b2a2949290c0a86097e297d5043d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:52.701435 systemd[1]: run-netns-cni\x2dd5671090\x2d14cf\x2db3c1\x2d92c1\x2d7146e69d44aa.mount: Deactivated successfully. 
Jul 9 09:30:52.704445 containerd[1547]: time="2025-07-09T09:30:52.703558607Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d256k,Uid:82ad1ad0-e4e6-41c2-8105-2c99716c1c36,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fdc9379010529c79e25a9d26096253771f4b2a2949290c0a86097e297d5043d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:52.705981 kubelet[2827]: E0709 09:30:52.705253 2827 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fdc9379010529c79e25a9d26096253771f4b2a2949290c0a86097e297d5043d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:52.705981 kubelet[2827]: E0709 09:30:52.705391 2827 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fdc9379010529c79e25a9d26096253771f4b2a2949290c0a86097e297d5043d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-d256k" Jul 9 09:30:52.705981 kubelet[2827]: E0709 09:30:52.705419 2827 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fdc9379010529c79e25a9d26096253771f4b2a2949290c0a86097e297d5043d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-d256k" Jul 9 09:30:52.706176 kubelet[2827]: E0709 09:30:52.705500 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-d256k_kube-system(82ad1ad0-e4e6-41c2-8105-2c99716c1c36)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-d256k_kube-system(82ad1ad0-e4e6-41c2-8105-2c99716c1c36)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6fdc9379010529c79e25a9d26096253771f4b2a2949290c0a86097e297d5043d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-d256k" podUID="82ad1ad0-e4e6-41c2-8105-2c99716c1c36" Jul 9 09:30:53.285367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1738816627.mount: Deactivated successfully. 
Jul 9 09:30:53.333649 containerd[1547]: time="2025-07-09T09:30:53.333536709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:30:53.336465 containerd[1547]: time="2025-07-09T09:30:53.336164665Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:30:53.336465 containerd[1547]: time="2025-07-09T09:30:53.336229326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 9 09:30:53.340683 containerd[1547]: time="2025-07-09T09:30:53.340527793Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:30:53.341334 containerd[1547]: time="2025-07-09T09:30:53.341136574Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 12.509172209s" Jul 9 09:30:53.341334 containerd[1547]: time="2025-07-09T09:30:53.341178232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 9 09:30:53.361586 containerd[1547]: time="2025-07-09T09:30:53.358502251Z" level=info msg="CreateContainer within sandbox \"e449e5cb25b0d6ca4eb69425928aacdd824c881ae697ae41bdd5f8d74bdd5b6c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 9 09:30:53.383064 containerd[1547]: time="2025-07-09T09:30:53.382987142Z" level=info msg="Container bf575ddfb5b9c9904d4d1aa0816793b98c5f06055dcb078af420b7a7af1e6ba0: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:30:53.401006 containerd[1547]: time="2025-07-09T09:30:53.400906036Z" level=info msg="CreateContainer within sandbox \"e449e5cb25b0d6ca4eb69425928aacdd824c881ae697ae41bdd5f8d74bdd5b6c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bf575ddfb5b9c9904d4d1aa0816793b98c5f06055dcb078af420b7a7af1e6ba0\"" Jul 9 09:30:53.404334 containerd[1547]: time="2025-07-09T09:30:53.403094338Z" level=info msg="StartContainer for \"bf575ddfb5b9c9904d4d1aa0816793b98c5f06055dcb078af420b7a7af1e6ba0\"" Jul 9 09:30:53.406227 containerd[1547]: time="2025-07-09T09:30:53.406200720Z" level=info msg="connecting to shim bf575ddfb5b9c9904d4d1aa0816793b98c5f06055dcb078af420b7a7af1e6ba0" address="unix:///run/containerd/s/00975754f33d24a6fc46f482ca20b018cc4ae85b016ab9f27ebf9ddb903ac8f4" protocol=ttrpc version=3 Jul 9 09:30:53.488813 systemd[1]: Started cri-containerd-bf575ddfb5b9c9904d4d1aa0816793b98c5f06055dcb078af420b7a7af1e6ba0.scope - libcontainer container bf575ddfb5b9c9904d4d1aa0816793b98c5f06055dcb078af420b7a7af1e6ba0. 
Jul 9 09:30:53.542008 containerd[1547]: time="2025-07-09T09:30:53.541864066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff887dcfb-r29nf,Uid:c9dcd67d-8d2e-47ce-a04c-7e258ad570df,Namespace:calico-apiserver,Attempt:0,}" Jul 9 09:30:53.560544 containerd[1547]: time="2025-07-09T09:30:53.560496417Z" level=info msg="StartContainer for \"bf575ddfb5b9c9904d4d1aa0816793b98c5f06055dcb078af420b7a7af1e6ba0\" returns successfully" Jul 9 09:30:53.650186 containerd[1547]: time="2025-07-09T09:30:53.650110275Z" level=error msg="Failed to destroy network for sandbox \"a2948bb868122f582bf5a486ccebaf30002db826de91ee5f7b1dc77ff56fefbe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:53.653183 systemd[1]: run-netns-cni\x2dbe4dab7b\x2d10f8\x2dc305\x2d967b\x2dc0641a3d19ef.mount: Deactivated successfully. Jul 9 09:30:53.655470 containerd[1547]: time="2025-07-09T09:30:53.655338012Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff887dcfb-r29nf,Uid:c9dcd67d-8d2e-47ce-a04c-7e258ad570df,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2948bb868122f582bf5a486ccebaf30002db826de91ee5f7b1dc77ff56fefbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:53.655810 kubelet[2827]: E0709 09:30:53.655761 2827 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2948bb868122f582bf5a486ccebaf30002db826de91ee5f7b1dc77ff56fefbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 9 09:30:53.656327 kubelet[2827]: E0709 09:30:53.656175 2827 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2948bb868122f582bf5a486ccebaf30002db826de91ee5f7b1dc77ff56fefbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff887dcfb-r29nf" Jul 9 09:30:53.656327 kubelet[2827]: E0709 09:30:53.656216 2827 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2948bb868122f582bf5a486ccebaf30002db826de91ee5f7b1dc77ff56fefbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ff887dcfb-r29nf" Jul 9 09:30:53.656823 kubelet[2827]: E0709 09:30:53.656778 2827 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ff887dcfb-r29nf_calico-apiserver(c9dcd67d-8d2e-47ce-a04c-7e258ad570df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ff887dcfb-r29nf_calico-apiserver(c9dcd67d-8d2e-47ce-a04c-7e258ad570df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"a2948bb868122f582bf5a486ccebaf30002db826de91ee5f7b1dc77ff56fefbe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ff887dcfb-r29nf" podUID="c9dcd67d-8d2e-47ce-a04c-7e258ad570df" Jul 9 09:30:53.715425 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 9 09:30:53.715784 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 9 09:30:53.955888 kubelet[2827]: I0709 09:30:53.954944 2827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fldpr" podStartSLOduration=2.005641021 podStartE2EDuration="29.954693808s" podCreationTimestamp="2025-07-09 09:30:24 +0000 UTC" firstStartedPulling="2025-07-09 09:30:25.394165779 +0000 UTC m=+21.065608818" lastFinishedPulling="2025-07-09 09:30:53.343218566 +0000 UTC m=+49.014661605" observedRunningTime="2025-07-09 09:30:53.944871398 +0000 UTC m=+49.616314437" watchObservedRunningTime="2025-07-09 09:30:53.954693808 +0000 UTC m=+49.626136847" Jul 9 09:30:53.962015 kubelet[2827]: I0709 09:30:53.961928 2827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fccb8e22-6aea-43af-a47d-ce57c3bb175e-whisker-ca-bundle\") pod \"fccb8e22-6aea-43af-a47d-ce57c3bb175e\" (UID: \"fccb8e22-6aea-43af-a47d-ce57c3bb175e\") " Jul 9 09:30:53.962300 kubelet[2827]: I0709 09:30:53.962282 2827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fccb8e22-6aea-43af-a47d-ce57c3bb175e-whisker-backend-key-pair\") pod \"fccb8e22-6aea-43af-a47d-ce57c3bb175e\" (UID: \"fccb8e22-6aea-43af-a47d-ce57c3bb175e\") " Jul 9 09:30:53.962992 kubelet[2827]: I0709 09:30:53.962776 2827 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5mtf\" (UniqueName: \"kubernetes.io/projected/fccb8e22-6aea-43af-a47d-ce57c3bb175e-kube-api-access-h5mtf\") pod \"fccb8e22-6aea-43af-a47d-ce57c3bb175e\" (UID: \"fccb8e22-6aea-43af-a47d-ce57c3bb175e\") " Jul 9 09:30:53.979275 kubelet[2827]: I0709 09:30:53.979201 2827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fccb8e22-6aea-43af-a47d-ce57c3bb175e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "fccb8e22-6aea-43af-a47d-ce57c3bb175e" (UID: "fccb8e22-6aea-43af-a47d-ce57c3bb175e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 9 09:30:53.984703 systemd[1]: var-lib-kubelet-pods-fccb8e22\x2d6aea\x2d43af\x2da47d\x2dce57c3bb175e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 9 09:30:53.989835 kubelet[2827]: I0709 09:30:53.989770 2827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fccb8e22-6aea-43af-a47d-ce57c3bb175e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "fccb8e22-6aea-43af-a47d-ce57c3bb175e" (UID: "fccb8e22-6aea-43af-a47d-ce57c3bb175e"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 9 09:30:53.990556 kubelet[2827]: I0709 09:30:53.990525 2827 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fccb8e22-6aea-43af-a47d-ce57c3bb175e-kube-api-access-h5mtf" (OuterVolumeSpecName: "kube-api-access-h5mtf") pod "fccb8e22-6aea-43af-a47d-ce57c3bb175e" (UID: "fccb8e22-6aea-43af-a47d-ce57c3bb175e"). InnerVolumeSpecName "kube-api-access-h5mtf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 9 09:30:53.990556 systemd[1]: var-lib-kubelet-pods-fccb8e22\x2d6aea\x2d43af\x2da47d\x2dce57c3bb175e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh5mtf.mount: Deactivated successfully. Jul 9 09:30:54.064540 kubelet[2827]: I0709 09:30:54.064393 2827 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5mtf\" (UniqueName: \"kubernetes.io/projected/fccb8e22-6aea-43af-a47d-ce57c3bb175e-kube-api-access-h5mtf\") on node \"ci-4386-0-0-w-15e87cee3a.novalocal\" DevicePath \"\"" Jul 9 09:30:54.064911 kubelet[2827]: I0709 09:30:54.064895 2827 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fccb8e22-6aea-43af-a47d-ce57c3bb175e-whisker-ca-bundle\") on node \"ci-4386-0-0-w-15e87cee3a.novalocal\" DevicePath \"\"" Jul 9 09:30:54.065101 kubelet[2827]: I0709 09:30:54.065086 2827 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fccb8e22-6aea-43af-a47d-ce57c3bb175e-whisker-backend-key-pair\") on node \"ci-4386-0-0-w-15e87cee3a.novalocal\" DevicePath \"\"" Jul 9 09:30:54.180220 containerd[1547]: time="2025-07-09T09:30:54.180133520Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf575ddfb5b9c9904d4d1aa0816793b98c5f06055dcb078af420b7a7af1e6ba0\" id:\"fd53da3d16ac48fe3d9d9cc4c76925c5aba5a49bb4adc9f33cc144b39e8a60ae\" pid:4003 exit_status:1 exited_at:{seconds:1752053454 nanos:179096637}" Jul 9 09:30:54.542947 containerd[1547]: time="2025-07-09T09:30:54.542399293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff887dcfb-677fz,Uid:2597fcec-b8aa-4454-b726-837937b88838,Namespace:calico-apiserver,Attempt:0,}" Jul 9 09:30:54.563739 systemd[1]: Removed slice kubepods-besteffort-podfccb8e22_6aea_43af_a47d_ce57c3bb175e.slice - libcontainer container kubepods-besteffort-podfccb8e22_6aea_43af_a47d_ce57c3bb175e.slice. 
Jul 9 09:30:54.934799 systemd-networkd[1435]: cali9fc178eb161: Link UP Jul 9 09:30:54.935539 systemd-networkd[1435]: cali9fc178eb161: Gained carrier Jul 9 09:30:54.972279 containerd[1547]: 2025-07-09 09:30:54.600 [INFO][4031] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 9 09:30:54.972279 containerd[1547]: 2025-07-09 09:30:54.776 [INFO][4031] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--apiserver--ff887dcfb--677fz-eth0 calico-apiserver-ff887dcfb- calico-apiserver 2597fcec-b8aa-4454-b726-837937b88838 812 0 2025-07-09 09:30:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:ff887dcfb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4386-0-0-w-15e87cee3a.novalocal calico-apiserver-ff887dcfb-677fz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9fc178eb161 [] [] }} ContainerID="136bc22f4175214be4e534cee87ea9ecc260efdccbe5ec33b2446c27eeefedfc" Namespace="calico-apiserver" Pod="calico-apiserver-ff887dcfb-677fz" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--apiserver--ff887dcfb--677fz-" Jul 9 09:30:54.972279 containerd[1547]: 2025-07-09 09:30:54.777 [INFO][4031] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="136bc22f4175214be4e534cee87ea9ecc260efdccbe5ec33b2446c27eeefedfc" Namespace="calico-apiserver" Pod="calico-apiserver-ff887dcfb-677fz" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--apiserver--ff887dcfb--677fz-eth0" Jul 9 09:30:54.972279 containerd[1547]: 2025-07-09 09:30:54.842 [INFO][4042] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="136bc22f4175214be4e534cee87ea9ecc260efdccbe5ec33b2446c27eeefedfc" HandleID="k8s-pod-network.136bc22f4175214be4e534cee87ea9ecc260efdccbe5ec33b2446c27eeefedfc" Workload="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--apiserver--ff887dcfb--677fz-eth0" Jul 9 09:30:54.972983 containerd[1547]: 2025-07-09 09:30:54.842 [INFO][4042] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="136bc22f4175214be4e534cee87ea9ecc260efdccbe5ec33b2446c27eeefedfc" HandleID="k8s-pod-network.136bc22f4175214be4e534cee87ea9ecc260efdccbe5ec33b2446c27eeefedfc" Workload="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--apiserver--ff887dcfb--677fz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000380b00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4386-0-0-w-15e87cee3a.novalocal", "pod":"calico-apiserver-ff887dcfb-677fz", "timestamp":"2025-07-09 09:30:54.842583207 +0000 UTC"}, Hostname:"ci-4386-0-0-w-15e87cee3a.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 09:30:54.972983 containerd[1547]: 2025-07-09 09:30:54.842 [INFO][4042] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 09:30:54.972983 containerd[1547]: 2025-07-09 09:30:54.842 [INFO][4042] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 09:30:54.972983 containerd[1547]: 2025-07-09 09:30:54.843 [INFO][4042] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4386-0-0-w-15e87cee3a.novalocal' Jul 9 09:30:54.972983 containerd[1547]: 2025-07-09 09:30:54.852 [INFO][4042] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.136bc22f4175214be4e534cee87ea9ecc260efdccbe5ec33b2446c27eeefedfc" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:54.972983 containerd[1547]: 2025-07-09 09:30:54.860 [INFO][4042] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:54.972983 containerd[1547]: 2025-07-09 09:30:54.866 [INFO][4042] ipam/ipam.go 511: Trying affinity for 192.168.102.64/26 host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:54.972983 containerd[1547]: 2025-07-09 09:30:54.869 [INFO][4042] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.64/26 host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:54.972983 containerd[1547]: 2025-07-09 09:30:54.873 [INFO][4042] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.64/26 host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:54.973327 containerd[1547]: 2025-07-09 09:30:54.874 [INFO][4042] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.102.64/26 handle="k8s-pod-network.136bc22f4175214be4e534cee87ea9ecc260efdccbe5ec33b2446c27eeefedfc" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:54.973327 containerd[1547]: 2025-07-09 09:30:54.876 [INFO][4042] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.136bc22f4175214be4e534cee87ea9ecc260efdccbe5ec33b2446c27eeefedfc Jul 9 09:30:54.973327 containerd[1547]: 2025-07-09 09:30:54.884 [INFO][4042] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.102.64/26 handle="k8s-pod-network.136bc22f4175214be4e534cee87ea9ecc260efdccbe5ec33b2446c27eeefedfc" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:54.973327 containerd[1547]: 2025-07-09 09:30:54.893 [INFO][4042] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.102.65/26] block=192.168.102.64/26 handle="k8s-pod-network.136bc22f4175214be4e534cee87ea9ecc260efdccbe5ec33b2446c27eeefedfc" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:54.973327 containerd[1547]: 2025-07-09 09:30:54.894 [INFO][4042] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.65/26] handle="k8s-pod-network.136bc22f4175214be4e534cee87ea9ecc260efdccbe5ec33b2446c27eeefedfc" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:54.973327 containerd[1547]: 2025-07-09 09:30:54.894 [INFO][4042] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 9 09:30:54.973327 containerd[1547]: 2025-07-09 09:30:54.894 [INFO][4042] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.65/26] IPv6=[] ContainerID="136bc22f4175214be4e534cee87ea9ecc260efdccbe5ec33b2446c27eeefedfc" HandleID="k8s-pod-network.136bc22f4175214be4e534cee87ea9ecc260efdccbe5ec33b2446c27eeefedfc" Workload="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--apiserver--ff887dcfb--677fz-eth0" Jul 9 09:30:54.974926 containerd[1547]: 2025-07-09 09:30:54.904 [INFO][4031] cni-plugin/k8s.go 418: Populated endpoint ContainerID="136bc22f4175214be4e534cee87ea9ecc260efdccbe5ec33b2446c27eeefedfc" Namespace="calico-apiserver" Pod="calico-apiserver-ff887dcfb-677fz" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--apiserver--ff887dcfb--677fz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--apiserver--ff887dcfb--677fz-eth0", GenerateName:"calico-apiserver-ff887dcfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"2597fcec-b8aa-4454-b726-837937b88838", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 30, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ff887dcfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4386-0-0-w-15e87cee3a.novalocal", ContainerID:"", Pod:"calico-apiserver-ff887dcfb-677fz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9fc178eb161", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:30:54.975032 containerd[1547]: 2025-07-09 09:30:54.907 [INFO][4031] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.65/32] ContainerID="136bc22f4175214be4e534cee87ea9ecc260efdccbe5ec33b2446c27eeefedfc" Namespace="calico-apiserver" Pod="calico-apiserver-ff887dcfb-677fz" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--apiserver--ff887dcfb--677fz-eth0" Jul 9 09:30:54.975032 containerd[1547]: 2025-07-09 09:30:54.907 [INFO][4031] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9fc178eb161 ContainerID="136bc22f4175214be4e534cee87ea9ecc260efdccbe5ec33b2446c27eeefedfc" Namespace="calico-apiserver" Pod="calico-apiserver-ff887dcfb-677fz" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--apiserver--ff887dcfb--677fz-eth0" Jul 9 09:30:54.975032 containerd[1547]: 2025-07-09 09:30:54.938 [INFO][4031] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="136bc22f4175214be4e534cee87ea9ecc260efdccbe5ec33b2446c27eeefedfc" Namespace="calico-apiserver" Pod="calico-apiserver-ff887dcfb-677fz" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--apiserver--ff887dcfb--677fz-eth0" Jul 9 
09:30:54.975144 containerd[1547]: 2025-07-09 09:30:54.939 [INFO][4031] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="136bc22f4175214be4e534cee87ea9ecc260efdccbe5ec33b2446c27eeefedfc" Namespace="calico-apiserver" Pod="calico-apiserver-ff887dcfb-677fz" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--apiserver--ff887dcfb--677fz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--apiserver--ff887dcfb--677fz-eth0", GenerateName:"calico-apiserver-ff887dcfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"2597fcec-b8aa-4454-b726-837937b88838", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 30, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ff887dcfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4386-0-0-w-15e87cee3a.novalocal", ContainerID:"136bc22f4175214be4e534cee87ea9ecc260efdccbe5ec33b2446c27eeefedfc", Pod:"calico-apiserver-ff887dcfb-677fz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9fc178eb161", MAC:"0a:1b:d5:c8:76:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:30:54.975217 containerd[1547]: 2025-07-09 09:30:54.967 [INFO][4031] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="136bc22f4175214be4e534cee87ea9ecc260efdccbe5ec33b2446c27eeefedfc" Namespace="calico-apiserver" Pod="calico-apiserver-ff887dcfb-677fz" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--apiserver--ff887dcfb--677fz-eth0" Jul 9 09:30:55.070783 systemd[1]: Created slice kubepods-besteffort-poddc861493_17da_428e_8383_c7a34b5ec928.slice - libcontainer container kubepods-besteffort-poddc861493_17da_428e_8383_c7a34b5ec928.slice. Jul 9 09:30:55.086083 containerd[1547]: time="2025-07-09T09:30:55.085882046Z" level=info msg="connecting to shim 136bc22f4175214be4e534cee87ea9ecc260efdccbe5ec33b2446c27eeefedfc" address="unix:///run/containerd/s/05439b1255f5ad35a5deb98479ff4b77ca7eda7bd58193188ace0738453ee455" namespace=k8s.io protocol=ttrpc version=3 Jul 9 09:30:55.151565 systemd[1]: Started cri-containerd-136bc22f4175214be4e534cee87ea9ecc260efdccbe5ec33b2446c27eeefedfc.scope - libcontainer container 136bc22f4175214be4e534cee87ea9ecc260efdccbe5ec33b2446c27eeefedfc. 
Jul 9 09:30:55.174949 containerd[1547]: time="2025-07-09T09:30:55.174883019Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf575ddfb5b9c9904d4d1aa0816793b98c5f06055dcb078af420b7a7af1e6ba0\" id:\"6fdfedf1bb56ad10703983fb3bcf93ea0532d2387d7272ca6b043048f61730ab\" pid:4064 exit_status:1 exited_at:{seconds:1752053455 nanos:174323721}" Jul 9 09:30:55.179415 kubelet[2827]: I0709 09:30:55.179272 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc861493-17da-428e-8383-c7a34b5ec928-whisker-ca-bundle\") pod \"whisker-785b4d6cbc-7mjrn\" (UID: \"dc861493-17da-428e-8383-c7a34b5ec928\") " pod="calico-system/whisker-785b4d6cbc-7mjrn" Jul 9 09:30:55.179415 kubelet[2827]: I0709 09:30:55.179378 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dc861493-17da-428e-8383-c7a34b5ec928-whisker-backend-key-pair\") pod \"whisker-785b4d6cbc-7mjrn\" (UID: \"dc861493-17da-428e-8383-c7a34b5ec928\") " pod="calico-system/whisker-785b4d6cbc-7mjrn" Jul 9 09:30:55.181585 kubelet[2827]: I0709 09:30:55.179532 2827 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmnjb\" (UniqueName: \"kubernetes.io/projected/dc861493-17da-428e-8383-c7a34b5ec928-kube-api-access-cmnjb\") pod \"whisker-785b4d6cbc-7mjrn\" (UID: \"dc861493-17da-428e-8383-c7a34b5ec928\") " pod="calico-system/whisker-785b4d6cbc-7mjrn" Jul 9 09:30:55.255276 containerd[1547]: time="2025-07-09T09:30:55.255115779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff887dcfb-677fz,Uid:2597fcec-b8aa-4454-b726-837937b88838,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"136bc22f4175214be4e534cee87ea9ecc260efdccbe5ec33b2446c27eeefedfc\"" Jul 9 09:30:55.260948 containerd[1547]: time="2025-07-09T09:30:55.260792499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 9 09:30:55.383163 containerd[1547]: time="2025-07-09T09:30:55.383101161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-785b4d6cbc-7mjrn,Uid:dc861493-17da-428e-8383-c7a34b5ec928,Namespace:calico-system,Attempt:0,}" Jul 9 09:30:55.543191 containerd[1547]: time="2025-07-09T09:30:55.542358866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78df7cdc87-6h4ss,Uid:21398084-829e-40b7-b70d-f8032cdbe616,Namespace:calico-system,Attempt:0,}" Jul 9 09:30:55.547395 containerd[1547]: time="2025-07-09T09:30:55.547323150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d74jx,Uid:afe07b1f-0a7b-4bcd-a38a-6e433e4d698a,Namespace:calico-system,Attempt:0,}" Jul 9 09:30:55.916983 systemd-networkd[1435]: cali312ba379a80: Link UP Jul 9 09:30:55.918253 systemd-networkd[1435]: cali312ba379a80: Gained carrier Jul 9 09:30:56.189963 containerd[1547]: 2025-07-09 09:30:55.484 [INFO][4178] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 9 09:30:56.189963 containerd[1547]: 2025-07-09 09:30:55.505 [INFO][4178] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4386--0--0--w--15e87cee3a.novalocal-k8s-whisker--785b4d6cbc--7mjrn-eth0 whisker-785b4d6cbc- calico-system dc861493-17da-428e-8383-c7a34b5ec928 904 0 2025-07-09 09:30:55 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:785b4d6cbc 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4386-0-0-w-15e87cee3a.novalocal whisker-785b4d6cbc-7mjrn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali312ba379a80 [] [] }} ContainerID="6ee1d3a8f2631809e644ef529c876b16598cb563940c960609a87212085b291a" Namespace="calico-system" Pod="whisker-785b4d6cbc-7mjrn" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-whisker--785b4d6cbc--7mjrn-" Jul 9 09:30:56.189963 containerd[1547]: 2025-07-09 09:30:55.505 [INFO][4178] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ee1d3a8f2631809e644ef529c876b16598cb563940c960609a87212085b291a" Namespace="calico-system" Pod="whisker-785b4d6cbc-7mjrn" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-whisker--785b4d6cbc--7mjrn-eth0" Jul 9 09:30:56.189963 containerd[1547]: 2025-07-09 09:30:55.621 [INFO][4217] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ee1d3a8f2631809e644ef529c876b16598cb563940c960609a87212085b291a" HandleID="k8s-pod-network.6ee1d3a8f2631809e644ef529c876b16598cb563940c960609a87212085b291a" Workload="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-whisker--785b4d6cbc--7mjrn-eth0" Jul 9 09:30:56.268955 containerd[1547]: 2025-07-09 09:30:55.621 [INFO][4217] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6ee1d3a8f2631809e644ef529c876b16598cb563940c960609a87212085b291a" HandleID="k8s-pod-network.6ee1d3a8f2631809e644ef529c876b16598cb563940c960609a87212085b291a" Workload="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-whisker--785b4d6cbc--7mjrn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f7b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4386-0-0-w-15e87cee3a.novalocal", "pod":"whisker-785b4d6cbc-7mjrn", "timestamp":"2025-07-09 09:30:55.621283784 +0000 UTC"}, Hostname:"ci-4386-0-0-w-15e87cee3a.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 09:30:56.268955 containerd[1547]: 2025-07-09 09:30:55.621 [INFO][4217] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 09:30:56.268955 containerd[1547]: 2025-07-09 09:30:55.621 [INFO][4217] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
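The Workload and WorkloadEndpoint names printed above (for example ci--4386--0--0--w--15e87cee3a.novalocal-k8s-whisker--785b4d6cbc--7mjrn-eth0) follow a visible pattern: dashes inside the node and pod names are doubled, and the parts are joined as <node>-k8s-<pod>-<iface>. A small sketch that reproduces the pattern from the names in the log; it mirrors the printed output only and is not Calico's actual implementation:

```go
// Sketch: rebuild the endpoint name seen in the log from the node name, pod
// name, and interface, doubling the dashes inside each part.
package main

import (
	"fmt"
	"strings"
)

func endpointName(node, pod, iface string) string {
	esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
	return esc(node) + "-k8s-" + esc(pod) + "-" + esc(iface)
}

func main() {
	fmt.Println(endpointName("ci-4386-0-0-w-15e87cee3a.novalocal", "whisker-785b4d6cbc-7mjrn", "eth0"))
	// ci--4386--0--0--w--15e87cee3a.novalocal-k8s-whisker--785b4d6cbc--7mjrn-eth0
}
```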
Jul 9 09:30:56.268955 containerd[1547]: 2025-07-09 09:30:55.621 [INFO][4217] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4386-0-0-w-15e87cee3a.novalocal' Jul 9 09:30:56.268955 containerd[1547]: 2025-07-09 09:30:55.765 [INFO][4217] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6ee1d3a8f2631809e644ef529c876b16598cb563940c960609a87212085b291a" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:56.268955 containerd[1547]: 2025-07-09 09:30:55.778 [INFO][4217] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:56.268955 containerd[1547]: 2025-07-09 09:30:55.793 [INFO][4217] ipam/ipam.go 511: Trying affinity for 192.168.102.64/26 host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:56.268955 containerd[1547]: 2025-07-09 09:30:55.823 [INFO][4217] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.64/26 host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:56.268955 containerd[1547]: 2025-07-09 09:30:55.834 [INFO][4217] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.64/26 host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:56.269751 containerd[1547]: 2025-07-09 09:30:55.834 [INFO][4217] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.102.64/26 handle="k8s-pod-network.6ee1d3a8f2631809e644ef529c876b16598cb563940c960609a87212085b291a" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:56.269751 containerd[1547]: 2025-07-09 09:30:55.839 [INFO][4217] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6ee1d3a8f2631809e644ef529c876b16598cb563940c960609a87212085b291a Jul 9 09:30:56.269751 containerd[1547]: 2025-07-09 09:30:55.862 [INFO][4217] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.102.64/26 handle="k8s-pod-network.6ee1d3a8f2631809e644ef529c876b16598cb563940c960609a87212085b291a" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:56.269751 containerd[1547]: 2025-07-09 09:30:55.895 [INFO][4217] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.102.66/26] block=192.168.102.64/26 handle="k8s-pod-network.6ee1d3a8f2631809e644ef529c876b16598cb563940c960609a87212085b291a" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:56.269751 containerd[1547]: 2025-07-09 09:30:55.895 [INFO][4217] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.66/26] handle="k8s-pod-network.6ee1d3a8f2631809e644ef529c876b16598cb563940c960609a87212085b291a" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:56.269751 containerd[1547]: 2025-07-09 09:30:55.895 [INFO][4217] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
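The claim sequence above ("Attempting to assign 1 addresses from block", "Creating new handle", "Writing block in order to claim IPs", "Successfully claimed IPs: [192.168.102.66/26]") walks the block for a free ordinal and then persists the block before the claim is final. A toy in-memory allocator, assuming first-free ordering and skipping ordinal 0 only because the assignments in the log start at .65, illustrates why successive pods receive .65, .66, .67, ... in sequence; the real allocator keeps block state in the datastore rather than in memory:

```go
// Sketch: first-free ordinal allocation over 192.168.102.64/26, matching the
// sequential addresses visible in the log. Illustrative only.
package main

import (
	"fmt"
	"net"
)

// nthAddr returns the n-th address after base within a small IPv4 block.
func nthAddr(base net.IP, n int) net.IP {
	ip := make(net.IP, 4)
	copy(ip, base.To4())
	ip[3] += byte(n)
	return ip
}

func main() {
	base := net.ParseIP("192.168.102.64")
	used := map[string]bool{}
	for i := 0; i < 3; i++ {
		for ord := 1; ord < 64; ord++ { // ordinal 0 skipped: assignments in the log start at .65
			ip := nthAddr(base, ord)
			if used[ip.String()] {
				continue
			}
			used[ip.String()] = true
			fmt.Println("claimed", ip) // .65, then .66, then .67
			break
		}
	}
}
```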
Jul 9 09:30:56.269751 containerd[1547]: 2025-07-09 09:30:55.895 [INFO][4217] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.66/26] IPv6=[] ContainerID="6ee1d3a8f2631809e644ef529c876b16598cb563940c960609a87212085b291a" HandleID="k8s-pod-network.6ee1d3a8f2631809e644ef529c876b16598cb563940c960609a87212085b291a" Workload="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-whisker--785b4d6cbc--7mjrn-eth0" Jul 9 09:30:56.274001 containerd[1547]: 2025-07-09 09:30:55.905 [INFO][4178] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ee1d3a8f2631809e644ef529c876b16598cb563940c960609a87212085b291a" Namespace="calico-system" Pod="whisker-785b4d6cbc-7mjrn" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-whisker--785b4d6cbc--7mjrn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4386--0--0--w--15e87cee3a.novalocal-k8s-whisker--785b4d6cbc--7mjrn-eth0", GenerateName:"whisker-785b4d6cbc-", Namespace:"calico-system", SelfLink:"", UID:"dc861493-17da-428e-8383-c7a34b5ec928", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 30, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"785b4d6cbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4386-0-0-w-15e87cee3a.novalocal", ContainerID:"", Pod:"whisker-785b4d6cbc-7mjrn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.102.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali312ba379a80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:30:56.274226 containerd[1547]: 2025-07-09 09:30:55.906 [INFO][4178] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.66/32] ContainerID="6ee1d3a8f2631809e644ef529c876b16598cb563940c960609a87212085b291a" Namespace="calico-system" Pod="whisker-785b4d6cbc-7mjrn" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-whisker--785b4d6cbc--7mjrn-eth0" Jul 9 09:30:56.274226 containerd[1547]: 2025-07-09 09:30:55.906 [INFO][4178] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali312ba379a80 ContainerID="6ee1d3a8f2631809e644ef529c876b16598cb563940c960609a87212085b291a" Namespace="calico-system" Pod="whisker-785b4d6cbc-7mjrn" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-whisker--785b4d6cbc--7mjrn-eth0" Jul 9 09:30:56.274226 containerd[1547]: 2025-07-09 09:30:55.919 [INFO][4178] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ee1d3a8f2631809e644ef529c876b16598cb563940c960609a87212085b291a" Namespace="calico-system" Pod="whisker-785b4d6cbc-7mjrn" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-whisker--785b4d6cbc--7mjrn-eth0" Jul 9 09:30:56.275237 containerd[1547]: 2025-07-09 09:30:56.149 [INFO][4178] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6ee1d3a8f2631809e644ef529c876b16598cb563940c960609a87212085b291a" Namespace="calico-system" Pod="whisker-785b4d6cbc-7mjrn" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-whisker--785b4d6cbc--7mjrn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4386--0--0--w--15e87cee3a.novalocal-k8s-whisker--785b4d6cbc--7mjrn-eth0", GenerateName:"whisker-785b4d6cbc-", Namespace:"calico-system", SelfLink:"", UID:"dc861493-17da-428e-8383-c7a34b5ec928", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 30, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"785b4d6cbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4386-0-0-w-15e87cee3a.novalocal", ContainerID:"6ee1d3a8f2631809e644ef529c876b16598cb563940c960609a87212085b291a", Pod:"whisker-785b4d6cbc-7mjrn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.102.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali312ba379a80", MAC:"aa:da:86:f4:87:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:30:56.275413 containerd[1547]: 2025-07-09 09:30:56.177 [INFO][4178] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ee1d3a8f2631809e644ef529c876b16598cb563940c960609a87212085b291a" Namespace="calico-system" Pod="whisker-785b4d6cbc-7mjrn" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-whisker--785b4d6cbc--7mjrn-eth0" Jul 9 09:30:56.433112 systemd-networkd[1435]: cali88bfab9f82c: Link UP Jul 9 09:30:56.436301 systemd-networkd[1435]: cali88bfab9f82c: Gained carrier Jul 9 09:30:56.544587 kubelet[2827]: I0709 09:30:56.544525 2827 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fccb8e22-6aea-43af-a47d-ce57c3bb175e" path="/var/lib/kubelet/pods/fccb8e22-6aea-43af-a47d-ce57c3bb175e/volumes" Jul 9 09:30:56.578007 containerd[1547]: 2025-07-09 09:30:55.786 [INFO][4232] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 9 09:30:56.578007 containerd[1547]: 2025-07-09 09:30:55.836 [INFO][4232] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--kube--controllers--78df7cdc87--6h4ss-eth0 calico-kube-controllers-78df7cdc87- calico-system 21398084-829e-40b7-b70d-f8032cdbe616 815 0 2025-07-09 09:30:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78df7cdc87 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4386-0-0-w-15e87cee3a.novalocal calico-kube-controllers-78df7cdc87-6h4ss eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali88bfab9f82c [] [] }} 
ContainerID="68aa885ea9d06af29b5e90d91152d305fee53c4402f952a573ef48267aae0d2d" Namespace="calico-system" Pod="calico-kube-controllers-78df7cdc87-6h4ss" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--kube--controllers--78df7cdc87--6h4ss-" Jul 9 09:30:56.578007 containerd[1547]: 2025-07-09 09:30:55.839 [INFO][4232] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="68aa885ea9d06af29b5e90d91152d305fee53c4402f952a573ef48267aae0d2d" Namespace="calico-system" Pod="calico-kube-controllers-78df7cdc87-6h4ss" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--kube--controllers--78df7cdc87--6h4ss-eth0" Jul 9 09:30:56.578007 containerd[1547]: 2025-07-09 09:30:55.961 [INFO][4266] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="68aa885ea9d06af29b5e90d91152d305fee53c4402f952a573ef48267aae0d2d" HandleID="k8s-pod-network.68aa885ea9d06af29b5e90d91152d305fee53c4402f952a573ef48267aae0d2d" Workload="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--kube--controllers--78df7cdc87--6h4ss-eth0" Jul 9 09:30:56.579421 containerd[1547]: 2025-07-09 09:30:56.148 [INFO][4266] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="68aa885ea9d06af29b5e90d91152d305fee53c4402f952a573ef48267aae0d2d" HandleID="k8s-pod-network.68aa885ea9d06af29b5e90d91152d305fee53c4402f952a573ef48267aae0d2d" Workload="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--kube--controllers--78df7cdc87--6h4ss-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000310080), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4386-0-0-w-15e87cee3a.novalocal", "pod":"calico-kube-controllers-78df7cdc87-6h4ss", "timestamp":"2025-07-09 09:30:55.961224351 +0000 UTC"}, Hostname:"ci-4386-0-0-w-15e87cee3a.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 09:30:56.579421 containerd[1547]: 2025-07-09 09:30:56.149 [INFO][4266] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 09:30:56.579421 containerd[1547]: 2025-07-09 09:30:56.150 [INFO][4266] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 09:30:56.579421 containerd[1547]: 2025-07-09 09:30:56.150 [INFO][4266] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4386-0-0-w-15e87cee3a.novalocal' Jul 9 09:30:56.579421 containerd[1547]: 2025-07-09 09:30:56.190 [INFO][4266] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.68aa885ea9d06af29b5e90d91152d305fee53c4402f952a573ef48267aae0d2d" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:56.579421 containerd[1547]: 2025-07-09 09:30:56.286 [INFO][4266] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:56.579421 containerd[1547]: 2025-07-09 09:30:56.308 [INFO][4266] ipam/ipam.go 511: Trying affinity for 192.168.102.64/26 host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:56.579421 containerd[1547]: 2025-07-09 09:30:56.319 [INFO][4266] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.64/26 host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:56.579421 containerd[1547]: 2025-07-09 09:30:56.335 [INFO][4266] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.64/26 host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:56.580036 containerd[1547]: 2025-07-09 09:30:56.335 [INFO][4266] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.102.64/26 handle="k8s-pod-network.68aa885ea9d06af29b5e90d91152d305fee53c4402f952a573ef48267aae0d2d" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:56.580036 containerd[1547]: 2025-07-09 09:30:56.339 [INFO][4266] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.68aa885ea9d06af29b5e90d91152d305fee53c4402f952a573ef48267aae0d2d Jul 9 09:30:56.580036 containerd[1547]: 2025-07-09 09:30:56.358 [INFO][4266] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.102.64/26 handle="k8s-pod-network.68aa885ea9d06af29b5e90d91152d305fee53c4402f952a573ef48267aae0d2d" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:56.580036 containerd[1547]: 2025-07-09 09:30:56.419 [INFO][4266] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.102.67/26] block=192.168.102.64/26 handle="k8s-pod-network.68aa885ea9d06af29b5e90d91152d305fee53c4402f952a573ef48267aae0d2d" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:56.580036 containerd[1547]: 2025-07-09 09:30:56.419 [INFO][4266] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.67/26] handle="k8s-pod-network.68aa885ea9d06af29b5e90d91152d305fee53c4402f952a573ef48267aae0d2d" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:56.580036 containerd[1547]: 2025-07-09 09:30:56.419 [INFO][4266] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 9 09:30:56.580036 containerd[1547]: 2025-07-09 09:30:56.420 [INFO][4266] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.67/26] IPv6=[] ContainerID="68aa885ea9d06af29b5e90d91152d305fee53c4402f952a573ef48267aae0d2d" HandleID="k8s-pod-network.68aa885ea9d06af29b5e90d91152d305fee53c4402f952a573ef48267aae0d2d" Workload="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--kube--controllers--78df7cdc87--6h4ss-eth0" Jul 9 09:30:56.580284 containerd[1547]: 2025-07-09 09:30:56.423 [INFO][4232] cni-plugin/k8s.go 418: Populated endpoint ContainerID="68aa885ea9d06af29b5e90d91152d305fee53c4402f952a573ef48267aae0d2d" Namespace="calico-system" Pod="calico-kube-controllers-78df7cdc87-6h4ss" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--kube--controllers--78df7cdc87--6h4ss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--kube--controllers--78df7cdc87--6h4ss-eth0", GenerateName:"calico-kube-controllers-78df7cdc87-", Namespace:"calico-system", SelfLink:"", UID:"21398084-829e-40b7-b70d-f8032cdbe616", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 30, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78df7cdc87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4386-0-0-w-15e87cee3a.novalocal", ContainerID:"", Pod:"calico-kube-controllers-78df7cdc87-6h4ss", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.102.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali88bfab9f82c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:30:56.580398 containerd[1547]: 2025-07-09 09:30:56.424 [INFO][4232] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.67/32] ContainerID="68aa885ea9d06af29b5e90d91152d305fee53c4402f952a573ef48267aae0d2d" Namespace="calico-system" Pod="calico-kube-controllers-78df7cdc87-6h4ss" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--kube--controllers--78df7cdc87--6h4ss-eth0" Jul 9 09:30:56.580398 containerd[1547]: 2025-07-09 09:30:56.424 [INFO][4232] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali88bfab9f82c ContainerID="68aa885ea9d06af29b5e90d91152d305fee53c4402f952a573ef48267aae0d2d" Namespace="calico-system" Pod="calico-kube-controllers-78df7cdc87-6h4ss" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--kube--controllers--78df7cdc87--6h4ss-eth0" Jul 9 09:30:56.580398 containerd[1547]: 2025-07-09 09:30:56.436 [INFO][4232] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68aa885ea9d06af29b5e90d91152d305fee53c4402f952a573ef48267aae0d2d" Namespace="calico-system" Pod="calico-kube-controllers-78df7cdc87-6h4ss" 
WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--kube--controllers--78df7cdc87--6h4ss-eth0" Jul 9 09:30:56.580522 containerd[1547]: 2025-07-09 09:30:56.438 [INFO][4232] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="68aa885ea9d06af29b5e90d91152d305fee53c4402f952a573ef48267aae0d2d" Namespace="calico-system" Pod="calico-kube-controllers-78df7cdc87-6h4ss" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--kube--controllers--78df7cdc87--6h4ss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--kube--controllers--78df7cdc87--6h4ss-eth0", GenerateName:"calico-kube-controllers-78df7cdc87-", Namespace:"calico-system", SelfLink:"", UID:"21398084-829e-40b7-b70d-f8032cdbe616", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 30, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78df7cdc87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4386-0-0-w-15e87cee3a.novalocal", ContainerID:"68aa885ea9d06af29b5e90d91152d305fee53c4402f952a573ef48267aae0d2d", Pod:"calico-kube-controllers-78df7cdc87-6h4ss", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.102.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali88bfab9f82c", MAC:"ba:5a:3f:79:06:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:30:56.580605 containerd[1547]: 2025-07-09 09:30:56.572 [INFO][4232] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="68aa885ea9d06af29b5e90d91152d305fee53c4402f952a573ef48267aae0d2d" Namespace="calico-system" Pod="calico-kube-controllers-78df7cdc87-6h4ss" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--kube--controllers--78df7cdc87--6h4ss-eth0" Jul 9 09:30:56.597919 systemd-networkd[1435]: cali9fc178eb161: Gained IPv6LL Jul 9 09:30:56.648346 containerd[1547]: time="2025-07-09T09:30:56.647248696Z" level=info msg="connecting to shim 6ee1d3a8f2631809e644ef529c876b16598cb563940c960609a87212085b291a" address="unix:///run/containerd/s/1d54b06b42ba661e5f2ee19a891c89c73983407e7b34631dcf95aa9c731b0ab7" namespace=k8s.io protocol=ttrpc version=3 Jul 9 09:30:56.696082 containerd[1547]: time="2025-07-09T09:30:56.695886007Z" level=info msg="connecting to shim 68aa885ea9d06af29b5e90d91152d305fee53c4402f952a573ef48267aae0d2d" address="unix:///run/containerd/s/963451b63c94f6eeaace71f4027495236ecfbde0883343b4a98fdd96efeb1715" namespace=k8s.io protocol=ttrpc version=3 Jul 9 09:30:56.715812 systemd-networkd[1435]: calia7d0fc0b323: Link UP Jul 9 09:30:56.719899 systemd-networkd[1435]: calia7d0fc0b323: Gained carrier Jul 9 09:30:56.746096 systemd[1]: Started 
cri-containerd-6ee1d3a8f2631809e644ef529c876b16598cb563940c960609a87212085b291a.scope - libcontainer container 6ee1d3a8f2631809e644ef529c876b16598cb563940c960609a87212085b291a. Jul 9 09:30:56.758582 containerd[1547]: 2025-07-09 09:30:55.662 [INFO][4242] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 9 09:30:56.758582 containerd[1547]: 2025-07-09 09:30:55.772 [INFO][4242] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4386--0--0--w--15e87cee3a.novalocal-k8s-csi--node--driver--d74jx-eth0 csi-node-driver- calico-system afe07b1f-0a7b-4bcd-a38a-6e433e4d698a 687 0 2025-07-09 09:30:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4386-0-0-w-15e87cee3a.novalocal csi-node-driver-d74jx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia7d0fc0b323 [] [] }} ContainerID="98d93197c1ace38f3c5b027bc19d992cf1fb2ff273a2216379234756ccb6ca83" Namespace="calico-system" Pod="csi-node-driver-d74jx" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-csi--node--driver--d74jx-" Jul 9 09:30:56.758582 containerd[1547]: 2025-07-09 09:30:55.772 [INFO][4242] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="98d93197c1ace38f3c5b027bc19d992cf1fb2ff273a2216379234756ccb6ca83" Namespace="calico-system" Pod="csi-node-driver-d74jx" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-csi--node--driver--d74jx-eth0" Jul 9 09:30:56.758582 containerd[1547]: 2025-07-09 09:30:55.931 [INFO][4260] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="98d93197c1ace38f3c5b027bc19d992cf1fb2ff273a2216379234756ccb6ca83" HandleID="k8s-pod-network.98d93197c1ace38f3c5b027bc19d992cf1fb2ff273a2216379234756ccb6ca83" Workload="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-csi--node--driver--d74jx-eth0" Jul 9 09:30:56.759147 containerd[1547]: 2025-07-09 09:30:56.149 [INFO][4260] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="98d93197c1ace38f3c5b027bc19d992cf1fb2ff273a2216379234756ccb6ca83" HandleID="k8s-pod-network.98d93197c1ace38f3c5b027bc19d992cf1fb2ff273a2216379234756ccb6ca83" Workload="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-csi--node--driver--d74jx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030b2b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4386-0-0-w-15e87cee3a.novalocal", "pod":"csi-node-driver-d74jx", "timestamp":"2025-07-09 09:30:55.931353495 +0000 UTC"}, Hostname:"ci-4386-0-0-w-15e87cee3a.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 09:30:56.759147 containerd[1547]: 2025-07-09 09:30:56.150 [INFO][4260] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 09:30:56.759147 containerd[1547]: 2025-07-09 09:30:56.420 [INFO][4260] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
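Before each endpoint is set up, the plugin logs "File /var/lib/calico/mtu does not exist" (cni-plugin/utils.go 100), i.e. it probes an MTU override file and falls back when the file is absent. A minimal reader with that shape; the 1500 default below is an assumed fallback for illustration, not necessarily the value Calico uses:

```go
// Sketch: read an MTU override file, falling back to a default when the file
// is missing (as the log reports) or malformed. Assumed default: 1500.
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

func mtuFromFile(path string, def int) int {
	b, err := os.ReadFile(path)
	if err != nil {
		return def // file absent, as in the log: use the fallback
	}
	v, err := strconv.Atoi(strings.TrimSpace(string(b)))
	if err != nil {
		return def
	}
	return v
}

func main() {
	fmt.Println(mtuFromFile("/var/lib/calico/mtu", 1500))
}
```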
Jul 9 09:30:56.759147 containerd[1547]: 2025-07-09 09:30:56.421 [INFO][4260] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4386-0-0-w-15e87cee3a.novalocal' Jul 9 09:30:56.759147 containerd[1547]: 2025-07-09 09:30:56.448 [INFO][4260] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.98d93197c1ace38f3c5b027bc19d992cf1fb2ff273a2216379234756ccb6ca83" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:56.759147 containerd[1547]: 2025-07-09 09:30:56.457 [INFO][4260] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:56.759147 containerd[1547]: 2025-07-09 09:30:56.599 [INFO][4260] ipam/ipam.go 511: Trying affinity for 192.168.102.64/26 host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:56.759147 containerd[1547]: 2025-07-09 09:30:56.607 [INFO][4260] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.64/26 host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:56.759147 containerd[1547]: 2025-07-09 09:30:56.614 [INFO][4260] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.64/26 host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:56.762057 containerd[1547]: 2025-07-09 09:30:56.617 [INFO][4260] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.102.64/26 handle="k8s-pod-network.98d93197c1ace38f3c5b027bc19d992cf1fb2ff273a2216379234756ccb6ca83" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:56.762057 containerd[1547]: 2025-07-09 09:30:56.646 [INFO][4260] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.98d93197c1ace38f3c5b027bc19d992cf1fb2ff273a2216379234756ccb6ca83 Jul 9 09:30:56.762057 containerd[1547]: 2025-07-09 09:30:56.666 [INFO][4260] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.102.64/26 handle="k8s-pod-network.98d93197c1ace38f3c5b027bc19d992cf1fb2ff273a2216379234756ccb6ca83" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:56.762057 containerd[1547]: 2025-07-09 09:30:56.686 [INFO][4260] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.102.68/26] block=192.168.102.64/26 handle="k8s-pod-network.98d93197c1ace38f3c5b027bc19d992cf1fb2ff273a2216379234756ccb6ca83" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:56.762057 containerd[1547]: 2025-07-09 09:30:56.686 [INFO][4260] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.68/26] handle="k8s-pod-network.98d93197c1ace38f3c5b027bc19d992cf1fb2ff273a2216379234756ccb6ca83" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:30:56.762057 containerd[1547]: 2025-07-09 09:30:56.686 [INFO][4260] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
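The interleaved timestamps above show the lock behaving as a per-host mutex: request [4260] logs "About to acquire host-wide IPAM lock" at 09:30:56.150 but only acquires it at 09:30:56.420, immediately after [4266] releases it at 09:30:56.419. A minimal in-process analogue with sync.Mutex; the real lock serializes address assignment across CNI invocations on the host, not just goroutines:

```go
// Sketch: two concurrent assignment requests serialized by one host-wide lock,
// mirroring the acquire/release ordering visible in the log.
package main

import (
	"fmt"
	"sync"
)

var hostIPAMLock sync.Mutex

func assignAddress(pod string, wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Println(pod, "about to acquire host-wide IPAM lock")
	hostIPAMLock.Lock()
	fmt.Println(pod, "acquired host-wide IPAM lock")
	// ... look up affinity, load the block, claim an ordinal ...
	hostIPAMLock.Unlock()
	fmt.Println(pod, "released host-wide IPAM lock")
}

func main() {
	var wg sync.WaitGroup
	for _, pod := range []string{"calico-kube-controllers-78df7cdc87-6h4ss", "csi-node-driver-d74jx"} {
		wg.Add(1)
		go assignAddress(pod, &wg)
	}
	wg.Wait()
}
```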
Jul 9 09:30:56.762057 containerd[1547]: 2025-07-09 09:30:56.686 [INFO][4260] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.68/26] IPv6=[] ContainerID="98d93197c1ace38f3c5b027bc19d992cf1fb2ff273a2216379234756ccb6ca83" HandleID="k8s-pod-network.98d93197c1ace38f3c5b027bc19d992cf1fb2ff273a2216379234756ccb6ca83" Workload="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-csi--node--driver--d74jx-eth0" Jul 9 09:30:56.762244 containerd[1547]: 2025-07-09 09:30:56.701 [INFO][4242] cni-plugin/k8s.go 418: Populated endpoint ContainerID="98d93197c1ace38f3c5b027bc19d992cf1fb2ff273a2216379234756ccb6ca83" Namespace="calico-system" Pod="csi-node-driver-d74jx" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-csi--node--driver--d74jx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4386--0--0--w--15e87cee3a.novalocal-k8s-csi--node--driver--d74jx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"afe07b1f-0a7b-4bcd-a38a-6e433e4d698a", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 30, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4386-0-0-w-15e87cee3a.novalocal", ContainerID:"", Pod:"csi-node-driver-d74jx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.102.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia7d0fc0b323", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:30:56.762318 containerd[1547]: 2025-07-09 09:30:56.703 [INFO][4242] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.68/32] ContainerID="98d93197c1ace38f3c5b027bc19d992cf1fb2ff273a2216379234756ccb6ca83" Namespace="calico-system" Pod="csi-node-driver-d74jx" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-csi--node--driver--d74jx-eth0" Jul 9 09:30:56.762318 containerd[1547]: 2025-07-09 09:30:56.704 [INFO][4242] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia7d0fc0b323 ContainerID="98d93197c1ace38f3c5b027bc19d992cf1fb2ff273a2216379234756ccb6ca83" Namespace="calico-system" Pod="csi-node-driver-d74jx" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-csi--node--driver--d74jx-eth0" Jul 9 09:30:56.762318 containerd[1547]: 2025-07-09 09:30:56.720 [INFO][4242] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="98d93197c1ace38f3c5b027bc19d992cf1fb2ff273a2216379234756ccb6ca83" Namespace="calico-system" Pod="csi-node-driver-d74jx" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-csi--node--driver--d74jx-eth0" Jul 9 09:30:56.762411 containerd[1547]: 2025-07-09 09:30:56.723 [INFO][4242] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="98d93197c1ace38f3c5b027bc19d992cf1fb2ff273a2216379234756ccb6ca83" Namespace="calico-system" Pod="csi-node-driver-d74jx" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-csi--node--driver--d74jx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4386--0--0--w--15e87cee3a.novalocal-k8s-csi--node--driver--d74jx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"afe07b1f-0a7b-4bcd-a38a-6e433e4d698a", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 30, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4386-0-0-w-15e87cee3a.novalocal", ContainerID:"98d93197c1ace38f3c5b027bc19d992cf1fb2ff273a2216379234756ccb6ca83", Pod:"csi-node-driver-d74jx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.102.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia7d0fc0b323", MAC:"3a:db:49:4f:80:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:30:56.762477 containerd[1547]: 2025-07-09 09:30:56.750 [INFO][4242] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="98d93197c1ace38f3c5b027bc19d992cf1fb2ff273a2216379234756ccb6ca83" Namespace="calico-system" Pod="csi-node-driver-d74jx" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-csi--node--driver--d74jx-eth0" Jul 9 09:30:56.774861 systemd[1]: Started cri-containerd-68aa885ea9d06af29b5e90d91152d305fee53c4402f952a573ef48267aae0d2d.scope - libcontainer container 68aa885ea9d06af29b5e90d91152d305fee53c4402f952a573ef48267aae0d2d. Jul 9 09:30:56.827262 containerd[1547]: time="2025-07-09T09:30:56.827198488Z" level=info msg="connecting to shim 98d93197c1ace38f3c5b027bc19d992cf1fb2ff273a2216379234756ccb6ca83" address="unix:///run/containerd/s/9f64c8a9a922c007a7fe15cf0f55d516e164e50bba84012e6e7544bbf5cd65f5" namespace=k8s.io protocol=ttrpc version=3 Jul 9 09:30:56.878843 systemd[1]: Started cri-containerd-98d93197c1ace38f3c5b027bc19d992cf1fb2ff273a2216379234756ccb6ca83.scope - libcontainer container 98d93197c1ace38f3c5b027bc19d992cf1fb2ff273a2216379234756ccb6ca83. 
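The "connecting to shim" entries above name a unix socket under /run/containerd/s/ and protocol=ttrpc version=3. A bare connectivity sketch against such an address, stripping the unix:// scheme; real containerd speaks ttrpc over the connection rather than leaving it idle, and the socket path below (copied from the log) only exists on that node:

```go
// Sketch: dial the shim socket address logged by containerd and close it again.
package main

import (
	"fmt"
	"net"
	"strings"
	"time"
)

func dialShim(address string) error {
	path := strings.TrimPrefix(address, "unix://")
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		return err
	}
	defer conn.Close()
	fmt.Println("connected to", path)
	return nil
}

func main() {
	// Socket path taken from the csi-node-driver entry above.
	if err := dialShim("unix:///run/containerd/s/9f64c8a9a922c007a7fe15cf0f55d516e164e50bba84012e6e7544bbf5cd65f5"); err != nil {
		fmt.Println("dial failed:", err)
	}
}
```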
Jul 9 09:30:56.934927 containerd[1547]: time="2025-07-09T09:30:56.934697044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-785b4d6cbc-7mjrn,Uid:dc861493-17da-428e-8383-c7a34b5ec928,Namespace:calico-system,Attempt:0,} returns sandbox id \"6ee1d3a8f2631809e644ef529c876b16598cb563940c960609a87212085b291a\"" Jul 9 09:30:57.016018 containerd[1547]: time="2025-07-09T09:30:57.015880257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78df7cdc87-6h4ss,Uid:21398084-829e-40b7-b70d-f8032cdbe616,Namespace:calico-system,Attempt:0,} returns sandbox id \"68aa885ea9d06af29b5e90d91152d305fee53c4402f952a573ef48267aae0d2d\"" Jul 9 09:30:57.021436 containerd[1547]: time="2025-07-09T09:30:57.019595752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d74jx,Uid:afe07b1f-0a7b-4bcd-a38a-6e433e4d698a,Namespace:calico-system,Attempt:0,} returns sandbox id \"98d93197c1ace38f3c5b027bc19d992cf1fb2ff273a2216379234756ccb6ca83\"" Jul 9 09:30:57.219869 systemd-networkd[1435]: vxlan.calico: Link UP Jul 9 09:30:57.219880 systemd-networkd[1435]: vxlan.calico: Gained carrier Jul 9 09:30:57.876966 systemd-networkd[1435]: cali312ba379a80: Gained IPv6LL Jul 9 09:30:57.878365 systemd-networkd[1435]: calia7d0fc0b323: Gained IPv6LL Jul 9 09:30:57.940987 systemd-networkd[1435]: cali88bfab9f82c: Gained IPv6LL Jul 9 09:30:58.581888 systemd-networkd[1435]: vxlan.calico: Gained IPv6LL Jul 9 09:31:01.975661 containerd[1547]: time="2025-07-09T09:31:01.975438103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:31:01.977569 containerd[1547]: time="2025-07-09T09:31:01.977528952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 9 09:31:01.979687 containerd[1547]: time="2025-07-09T09:31:01.979586750Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:31:01.982555 containerd[1547]: time="2025-07-09T09:31:01.982487267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:31:01.983361 containerd[1547]: time="2025-07-09T09:31:01.983176248Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 6.72230421s" Jul 9 09:31:01.983361 containerd[1547]: time="2025-07-09T09:31:01.983228115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 9 09:31:01.985536 containerd[1547]: time="2025-07-09T09:31:01.985478433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 9 09:31:01.988283 containerd[1547]: time="2025-07-09T09:31:01.988223067Z" level=info msg="CreateContainer within sandbox \"136bc22f4175214be4e534cee87ea9ecc260efdccbe5ec33b2446c27eeefedfc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 9 09:31:02.013896 containerd[1547]: 
time="2025-07-09T09:31:02.013692388Z" level=info msg="Container dae965f0c3ffa9b2629859ed8175b52083f6de6f2300377735900b635c9106f8: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:31:02.032028 containerd[1547]: time="2025-07-09T09:31:02.031961863Z" level=info msg="CreateContainer within sandbox \"136bc22f4175214be4e534cee87ea9ecc260efdccbe5ec33b2446c27eeefedfc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dae965f0c3ffa9b2629859ed8175b52083f6de6f2300377735900b635c9106f8\"" Jul 9 09:31:02.033608 containerd[1547]: time="2025-07-09T09:31:02.032555766Z" level=info msg="StartContainer for \"dae965f0c3ffa9b2629859ed8175b52083f6de6f2300377735900b635c9106f8\"" Jul 9 09:31:02.034253 containerd[1547]: time="2025-07-09T09:31:02.034226058Z" level=info msg="connecting to shim dae965f0c3ffa9b2629859ed8175b52083f6de6f2300377735900b635c9106f8" address="unix:///run/containerd/s/05439b1255f5ad35a5deb98479ff4b77ca7eda7bd58193188ace0738453ee455" protocol=ttrpc version=3 Jul 9 09:31:02.074849 systemd[1]: Started cri-containerd-dae965f0c3ffa9b2629859ed8175b52083f6de6f2300377735900b635c9106f8.scope - libcontainer container dae965f0c3ffa9b2629859ed8175b52083f6de6f2300377735900b635c9106f8. Jul 9 09:31:02.276510 containerd[1547]: time="2025-07-09T09:31:02.276466012Z" level=info msg="StartContainer for \"dae965f0c3ffa9b2629859ed8175b52083f6de6f2300377735900b635c9106f8\" returns successfully" Jul 9 09:31:02.542355 containerd[1547]: time="2025-07-09T09:31:02.542213861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-l5wr7,Uid:946fed98-1138-4414-9da3-e7f45e864152,Namespace:kube-system,Attempt:0,}" Jul 9 09:31:02.833267 systemd-networkd[1435]: cali2d5b057cc11: Link UP Jul 9 09:31:02.834850 systemd-networkd[1435]: cali2d5b057cc11: Gained carrier Jul 9 09:31:02.865345 containerd[1547]: 2025-07-09 09:31:02.720 [INFO][4592] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4386--0--0--w--15e87cee3a.novalocal-k8s-coredns--7c65d6cfc9--l5wr7-eth0 coredns-7c65d6cfc9- kube-system 946fed98-1138-4414-9da3-e7f45e864152 814 0 2025-07-09 09:30:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4386-0-0-w-15e87cee3a.novalocal coredns-7c65d6cfc9-l5wr7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2d5b057cc11 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0604c5fdcf164c6ad600e7c2052cac67559d2e6ceb5cd60eb9ac05a599374f27" Namespace="kube-system" Pod="coredns-7c65d6cfc9-l5wr7" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-coredns--7c65d6cfc9--l5wr7-" Jul 9 09:31:02.865345 containerd[1547]: 2025-07-09 09:31:02.720 [INFO][4592] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0604c5fdcf164c6ad600e7c2052cac67559d2e6ceb5cd60eb9ac05a599374f27" Namespace="kube-system" Pod="coredns-7c65d6cfc9-l5wr7" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-coredns--7c65d6cfc9--l5wr7-eth0" Jul 9 09:31:02.865345 containerd[1547]: 2025-07-09 09:31:02.764 [INFO][4604] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0604c5fdcf164c6ad600e7c2052cac67559d2e6ceb5cd60eb9ac05a599374f27" HandleID="k8s-pod-network.0604c5fdcf164c6ad600e7c2052cac67559d2e6ceb5cd60eb9ac05a599374f27" 
Workload="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-coredns--7c65d6cfc9--l5wr7-eth0" Jul 9 09:31:02.867123 containerd[1547]: 2025-07-09 09:31:02.765 [INFO][4604] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0604c5fdcf164c6ad600e7c2052cac67559d2e6ceb5cd60eb9ac05a599374f27" HandleID="k8s-pod-network.0604c5fdcf164c6ad600e7c2052cac67559d2e6ceb5cd60eb9ac05a599374f27" Workload="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-coredns--7c65d6cfc9--l5wr7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5270), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4386-0-0-w-15e87cee3a.novalocal", "pod":"coredns-7c65d6cfc9-l5wr7", "timestamp":"2025-07-09 09:31:02.764908108 +0000 UTC"}, Hostname:"ci-4386-0-0-w-15e87cee3a.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 09:31:02.867123 containerd[1547]: 2025-07-09 09:31:02.765 [INFO][4604] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 09:31:02.867123 containerd[1547]: 2025-07-09 09:31:02.765 [INFO][4604] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 09:31:02.867123 containerd[1547]: 2025-07-09 09:31:02.765 [INFO][4604] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4386-0-0-w-15e87cee3a.novalocal' Jul 9 09:31:02.867123 containerd[1547]: 2025-07-09 09:31:02.775 [INFO][4604] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0604c5fdcf164c6ad600e7c2052cac67559d2e6ceb5cd60eb9ac05a599374f27" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:02.867123 containerd[1547]: 2025-07-09 09:31:02.781 [INFO][4604] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:02.867123 containerd[1547]: 2025-07-09 09:31:02.789 [INFO][4604] ipam/ipam.go 511: Trying affinity for 192.168.102.64/26 host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:02.867123 containerd[1547]: 2025-07-09 09:31:02.793 [INFO][4604] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.64/26 host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:02.867123 containerd[1547]: 2025-07-09 09:31:02.797 [INFO][4604] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.64/26 host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:02.868120 containerd[1547]: 2025-07-09 09:31:02.797 [INFO][4604] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.102.64/26 handle="k8s-pod-network.0604c5fdcf164c6ad600e7c2052cac67559d2e6ceb5cd60eb9ac05a599374f27" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:02.868120 containerd[1547]: 2025-07-09 09:31:02.800 [INFO][4604] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0604c5fdcf164c6ad600e7c2052cac67559d2e6ceb5cd60eb9ac05a599374f27 Jul 9 09:31:02.868120 containerd[1547]: 2025-07-09 09:31:02.807 [INFO][4604] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.102.64/26 handle="k8s-pod-network.0604c5fdcf164c6ad600e7c2052cac67559d2e6ceb5cd60eb9ac05a599374f27" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:02.868120 containerd[1547]: 2025-07-09 09:31:02.824 [INFO][4604] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.102.69/26] block=192.168.102.64/26 handle="k8s-pod-network.0604c5fdcf164c6ad600e7c2052cac67559d2e6ceb5cd60eb9ac05a599374f27" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:02.868120 
containerd[1547]: 2025-07-09 09:31:02.824 [INFO][4604] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.69/26] handle="k8s-pod-network.0604c5fdcf164c6ad600e7c2052cac67559d2e6ceb5cd60eb9ac05a599374f27" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:02.868120 containerd[1547]: 2025-07-09 09:31:02.824 [INFO][4604] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 09:31:02.868120 containerd[1547]: 2025-07-09 09:31:02.824 [INFO][4604] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.69/26] IPv6=[] ContainerID="0604c5fdcf164c6ad600e7c2052cac67559d2e6ceb5cd60eb9ac05a599374f27" HandleID="k8s-pod-network.0604c5fdcf164c6ad600e7c2052cac67559d2e6ceb5cd60eb9ac05a599374f27" Workload="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-coredns--7c65d6cfc9--l5wr7-eth0" Jul 9 09:31:02.868399 containerd[1547]: 2025-07-09 09:31:02.828 [INFO][4592] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0604c5fdcf164c6ad600e7c2052cac67559d2e6ceb5cd60eb9ac05a599374f27" Namespace="kube-system" Pod="coredns-7c65d6cfc9-l5wr7" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-coredns--7c65d6cfc9--l5wr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4386--0--0--w--15e87cee3a.novalocal-k8s-coredns--7c65d6cfc9--l5wr7-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"946fed98-1138-4414-9da3-e7f45e864152", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 30, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4386-0-0-w-15e87cee3a.novalocal", ContainerID:"", Pod:"coredns-7c65d6cfc9-l5wr7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.102.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d5b057cc11", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:31:02.868399 containerd[1547]: 2025-07-09 09:31:02.828 [INFO][4592] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.69/32] ContainerID="0604c5fdcf164c6ad600e7c2052cac67559d2e6ceb5cd60eb9ac05a599374f27" Namespace="kube-system" Pod="coredns-7c65d6cfc9-l5wr7" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-coredns--7c65d6cfc9--l5wr7-eth0" Jul 9 09:31:02.868399 containerd[1547]: 2025-07-09 09:31:02.829 [INFO][4592] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d5b057cc11 
ContainerID="0604c5fdcf164c6ad600e7c2052cac67559d2e6ceb5cd60eb9ac05a599374f27" Namespace="kube-system" Pod="coredns-7c65d6cfc9-l5wr7" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-coredns--7c65d6cfc9--l5wr7-eth0" Jul 9 09:31:02.868399 containerd[1547]: 2025-07-09 09:31:02.835 [INFO][4592] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0604c5fdcf164c6ad600e7c2052cac67559d2e6ceb5cd60eb9ac05a599374f27" Namespace="kube-system" Pod="coredns-7c65d6cfc9-l5wr7" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-coredns--7c65d6cfc9--l5wr7-eth0" Jul 9 09:31:02.868399 containerd[1547]: 2025-07-09 09:31:02.836 [INFO][4592] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0604c5fdcf164c6ad600e7c2052cac67559d2e6ceb5cd60eb9ac05a599374f27" Namespace="kube-system" Pod="coredns-7c65d6cfc9-l5wr7" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-coredns--7c65d6cfc9--l5wr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4386--0--0--w--15e87cee3a.novalocal-k8s-coredns--7c65d6cfc9--l5wr7-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"946fed98-1138-4414-9da3-e7f45e864152", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 30, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4386-0-0-w-15e87cee3a.novalocal", ContainerID:"0604c5fdcf164c6ad600e7c2052cac67559d2e6ceb5cd60eb9ac05a599374f27", Pod:"coredns-7c65d6cfc9-l5wr7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.102.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d5b057cc11", MAC:"0a:d5:0b:fc:fc:55", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:31:02.868399 containerd[1547]: 2025-07-09 09:31:02.853 [INFO][4592] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0604c5fdcf164c6ad600e7c2052cac67559d2e6ceb5cd60eb9ac05a599374f27" Namespace="kube-system" Pod="coredns-7c65d6cfc9-l5wr7" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-coredns--7c65d6cfc9--l5wr7-eth0" Jul 9 09:31:02.916074 containerd[1547]: time="2025-07-09T09:31:02.914745520Z" level=info msg="connecting to shim 0604c5fdcf164c6ad600e7c2052cac67559d2e6ceb5cd60eb9ac05a599374f27" address="unix:///run/containerd/s/4217724685756b33ddd8ff146c462fd14f7d5afb7f03f33d4ed90569d656a7a1" namespace=k8s.io 
protocol=ttrpc version=3 Jul 9 09:31:02.964882 systemd[1]: Started cri-containerd-0604c5fdcf164c6ad600e7c2052cac67559d2e6ceb5cd60eb9ac05a599374f27.scope - libcontainer container 0604c5fdcf164c6ad600e7c2052cac67559d2e6ceb5cd60eb9ac05a599374f27. Jul 9 09:31:03.062807 containerd[1547]: time="2025-07-09T09:31:03.062764584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-l5wr7,Uid:946fed98-1138-4414-9da3-e7f45e864152,Namespace:kube-system,Attempt:0,} returns sandbox id \"0604c5fdcf164c6ad600e7c2052cac67559d2e6ceb5cd60eb9ac05a599374f27\"" Jul 9 09:31:03.068414 containerd[1547]: time="2025-07-09T09:31:03.068370172Z" level=info msg="CreateContainer within sandbox \"0604c5fdcf164c6ad600e7c2052cac67559d2e6ceb5cd60eb9ac05a599374f27\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 9 09:31:03.102478 containerd[1547]: time="2025-07-09T09:31:03.102383809Z" level=info msg="Container 99a41ff560b785ede44f808ebda164f6eac67df89bf0c2340e64afbec70efe21: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:31:03.110931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1581095455.mount: Deactivated successfully. Jul 9 09:31:03.121504 containerd[1547]: time="2025-07-09T09:31:03.121431853Z" level=info msg="CreateContainer within sandbox \"0604c5fdcf164c6ad600e7c2052cac67559d2e6ceb5cd60eb9ac05a599374f27\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"99a41ff560b785ede44f808ebda164f6eac67df89bf0c2340e64afbec70efe21\"" Jul 9 09:31:03.123658 containerd[1547]: time="2025-07-09T09:31:03.122555269Z" level=info msg="StartContainer for \"99a41ff560b785ede44f808ebda164f6eac67df89bf0c2340e64afbec70efe21\"" Jul 9 09:31:03.123658 containerd[1547]: time="2025-07-09T09:31:03.123543381Z" level=info msg="connecting to shim 99a41ff560b785ede44f808ebda164f6eac67df89bf0c2340e64afbec70efe21" address="unix:///run/containerd/s/4217724685756b33ddd8ff146c462fd14f7d5afb7f03f33d4ed90569d656a7a1" protocol=ttrpc version=3 Jul 9 09:31:03.155838 systemd[1]: Started cri-containerd-99a41ff560b785ede44f808ebda164f6eac67df89bf0c2340e64afbec70efe21.scope - libcontainer container 99a41ff560b785ede44f808ebda164f6eac67df89bf0c2340e64afbec70efe21. 
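The systemd entry above, "var-lib-containerd-tmpmounts-containerd\x2dmount1581095455.mount: Deactivated successfully", shows systemd's unit-name escaping for the tmpmount path: path separators become "-" and the literal dash inside "containerd-mount…" is hex-escaped to \x2d. The sketch below is only an approximation of that escaping (systemd's real implementation works on bytes and has more rules); the helper name is illustrative.

```python
# Minimal sketch of systemd-style path escaping, matching the
# "var-lib-containerd-tmpmounts-containerd\x2dmount1581095455.mount" unit above.
# Roughly what `systemd-escape --path --suffix=mount <path>` does; an
# approximation for illustration, not systemd's actual implementation.

def systemd_escape_path(path: str, suffix: str = "mount") -> str:
    allowed = set("abcdefghijklmnopqrstuvwxyz"
                  "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789:_.")
    parts = [p for p in path.split("/") if p]            # drop empty components
    escaped_parts = []
    for part in parts:
        out = []
        for i, ch in enumerate(part):
            if ch in allowed and not (i == 0 and ch == "."):
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))          # e.g. "-" -> \x2d
        escaped_parts.append("".join(out))
    return "-".join(escaped_parts) + "." + suffix        # "/" becomes "-"

print(systemd_escape_path("/var/lib/containerd/tmpmounts/containerd-mount1581095455"))
# -> var-lib-containerd-tmpmounts-containerd\x2dmount1581095455.mount
```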
Jul 9 09:31:03.204500 containerd[1547]: time="2025-07-09T09:31:03.204370048Z" level=info msg="StartContainer for \"99a41ff560b785ede44f808ebda164f6eac67df89bf0c2340e64afbec70efe21\" returns successfully" Jul 9 09:31:03.962216 kubelet[2827]: I0709 09:31:03.962162 2827 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 09:31:03.995337 kubelet[2827]: I0709 09:31:03.994318 2827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-ff887dcfb-677fz" podStartSLOduration=36.268727585 podStartE2EDuration="42.994297336s" podCreationTimestamp="2025-07-09 09:30:21 +0000 UTC" firstStartedPulling="2025-07-09 09:30:55.25900594 +0000 UTC m=+50.930448979" lastFinishedPulling="2025-07-09 09:31:01.984575681 +0000 UTC m=+57.656018730" observedRunningTime="2025-07-09 09:31:02.996556109 +0000 UTC m=+58.667999148" watchObservedRunningTime="2025-07-09 09:31:03.994297336 +0000 UTC m=+59.665740375" Jul 9 09:31:04.017464 kubelet[2827]: I0709 09:31:04.017357 2827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-l5wr7" podStartSLOduration=55.017319159 podStartE2EDuration="55.017319159s" podCreationTimestamp="2025-07-09 09:30:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 09:31:03.998814082 +0000 UTC m=+59.670257161" watchObservedRunningTime="2025-07-09 09:31:04.017319159 +0000 UTC m=+59.688762199" Jul 9 09:31:04.853140 systemd-networkd[1435]: cali2d5b057cc11: Gained IPv6LL Jul 9 09:31:06.543259 containerd[1547]: time="2025-07-09T09:31:06.542500766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-st25r,Uid:9a052ab5-c109-40e4-93a9-186db270b9e9,Namespace:calico-system,Attempt:0,}" Jul 9 09:31:06.828141 systemd-networkd[1435]: cali0f51cf382e5: Link UP Jul 9 09:31:06.831771 systemd-networkd[1435]: cali0f51cf382e5: Gained carrier Jul 9 09:31:06.887186 containerd[1547]: 2025-07-09 09:31:06.664 [INFO][4710] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4386--0--0--w--15e87cee3a.novalocal-k8s-goldmane--58fd7646b9--st25r-eth0 goldmane-58fd7646b9- calico-system 9a052ab5-c109-40e4-93a9-186db270b9e9 809 0 2025-07-09 09:30:24 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4386-0-0-w-15e87cee3a.novalocal goldmane-58fd7646b9-st25r eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0f51cf382e5 [] [] }} ContainerID="c1dfc5e4eb78d0c5507afc9f39788dc476a090c20843d834affaa2e2dff3906a" Namespace="calico-system" Pod="goldmane-58fd7646b9-st25r" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-goldmane--58fd7646b9--st25r-" Jul 9 09:31:06.887186 containerd[1547]: 2025-07-09 09:31:06.664 [INFO][4710] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c1dfc5e4eb78d0c5507afc9f39788dc476a090c20843d834affaa2e2dff3906a" Namespace="calico-system" Pod="goldmane-58fd7646b9-st25r" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-goldmane--58fd7646b9--st25r-eth0" Jul 9 09:31:06.887186 containerd[1547]: 2025-07-09 09:31:06.740 [INFO][4721] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="c1dfc5e4eb78d0c5507afc9f39788dc476a090c20843d834affaa2e2dff3906a" HandleID="k8s-pod-network.c1dfc5e4eb78d0c5507afc9f39788dc476a090c20843d834affaa2e2dff3906a" Workload="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-goldmane--58fd7646b9--st25r-eth0" Jul 9 09:31:06.887186 containerd[1547]: 2025-07-09 09:31:06.740 [INFO][4721] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c1dfc5e4eb78d0c5507afc9f39788dc476a090c20843d834affaa2e2dff3906a" HandleID="k8s-pod-network.c1dfc5e4eb78d0c5507afc9f39788dc476a090c20843d834affaa2e2dff3906a" Workload="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-goldmane--58fd7646b9--st25r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003146f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4386-0-0-w-15e87cee3a.novalocal", "pod":"goldmane-58fd7646b9-st25r", "timestamp":"2025-07-09 09:31:06.740571973 +0000 UTC"}, Hostname:"ci-4386-0-0-w-15e87cee3a.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 09:31:06.887186 containerd[1547]: 2025-07-09 09:31:06.740 [INFO][4721] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 09:31:06.887186 containerd[1547]: 2025-07-09 09:31:06.740 [INFO][4721] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 09:31:06.887186 containerd[1547]: 2025-07-09 09:31:06.740 [INFO][4721] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4386-0-0-w-15e87cee3a.novalocal' Jul 9 09:31:06.887186 containerd[1547]: 2025-07-09 09:31:06.752 [INFO][4721] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c1dfc5e4eb78d0c5507afc9f39788dc476a090c20843d834affaa2e2dff3906a" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:06.887186 containerd[1547]: 2025-07-09 09:31:06.766 [INFO][4721] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:06.887186 containerd[1547]: 2025-07-09 09:31:06.775 [INFO][4721] ipam/ipam.go 511: Trying affinity for 192.168.102.64/26 host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:06.887186 containerd[1547]: 2025-07-09 09:31:06.780 [INFO][4721] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.64/26 host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:06.887186 containerd[1547]: 2025-07-09 09:31:06.788 [INFO][4721] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.64/26 host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:06.887186 containerd[1547]: 2025-07-09 09:31:06.788 [INFO][4721] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.102.64/26 handle="k8s-pod-network.c1dfc5e4eb78d0c5507afc9f39788dc476a090c20843d834affaa2e2dff3906a" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:06.887186 containerd[1547]: 2025-07-09 09:31:06.791 [INFO][4721] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c1dfc5e4eb78d0c5507afc9f39788dc476a090c20843d834affaa2e2dff3906a Jul 9 09:31:06.887186 containerd[1547]: 2025-07-09 09:31:06.799 [INFO][4721] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.102.64/26 handle="k8s-pod-network.c1dfc5e4eb78d0c5507afc9f39788dc476a090c20843d834affaa2e2dff3906a" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:06.887186 containerd[1547]: 2025-07-09 09:31:06.816 [INFO][4721] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.102.70/26] 
block=192.168.102.64/26 handle="k8s-pod-network.c1dfc5e4eb78d0c5507afc9f39788dc476a090c20843d834affaa2e2dff3906a" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:06.887186 containerd[1547]: 2025-07-09 09:31:06.816 [INFO][4721] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.70/26] handle="k8s-pod-network.c1dfc5e4eb78d0c5507afc9f39788dc476a090c20843d834affaa2e2dff3906a" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:06.887186 containerd[1547]: 2025-07-09 09:31:06.816 [INFO][4721] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 9 09:31:06.887186 containerd[1547]: 2025-07-09 09:31:06.816 [INFO][4721] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.70/26] IPv6=[] ContainerID="c1dfc5e4eb78d0c5507afc9f39788dc476a090c20843d834affaa2e2dff3906a" HandleID="k8s-pod-network.c1dfc5e4eb78d0c5507afc9f39788dc476a090c20843d834affaa2e2dff3906a" Workload="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-goldmane--58fd7646b9--st25r-eth0" Jul 9 09:31:06.890906 containerd[1547]: 2025-07-09 09:31:06.820 [INFO][4710] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c1dfc5e4eb78d0c5507afc9f39788dc476a090c20843d834affaa2e2dff3906a" Namespace="calico-system" Pod="goldmane-58fd7646b9-st25r" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-goldmane--58fd7646b9--st25r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4386--0--0--w--15e87cee3a.novalocal-k8s-goldmane--58fd7646b9--st25r-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"9a052ab5-c109-40e4-93a9-186db270b9e9", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 30, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4386-0-0-w-15e87cee3a.novalocal", ContainerID:"", Pod:"goldmane-58fd7646b9-st25r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.102.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0f51cf382e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:31:06.890906 containerd[1547]: 2025-07-09 09:31:06.821 [INFO][4710] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.70/32] ContainerID="c1dfc5e4eb78d0c5507afc9f39788dc476a090c20843d834affaa2e2dff3906a" Namespace="calico-system" Pod="goldmane-58fd7646b9-st25r" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-goldmane--58fd7646b9--st25r-eth0" Jul 9 09:31:06.890906 containerd[1547]: 2025-07-09 09:31:06.821 [INFO][4710] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f51cf382e5 ContainerID="c1dfc5e4eb78d0c5507afc9f39788dc476a090c20843d834affaa2e2dff3906a" Namespace="calico-system" Pod="goldmane-58fd7646b9-st25r" 
WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-goldmane--58fd7646b9--st25r-eth0" Jul 9 09:31:06.890906 containerd[1547]: 2025-07-09 09:31:06.832 [INFO][4710] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c1dfc5e4eb78d0c5507afc9f39788dc476a090c20843d834affaa2e2dff3906a" Namespace="calico-system" Pod="goldmane-58fd7646b9-st25r" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-goldmane--58fd7646b9--st25r-eth0" Jul 9 09:31:06.890906 containerd[1547]: 2025-07-09 09:31:06.832 [INFO][4710] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c1dfc5e4eb78d0c5507afc9f39788dc476a090c20843d834affaa2e2dff3906a" Namespace="calico-system" Pod="goldmane-58fd7646b9-st25r" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-goldmane--58fd7646b9--st25r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4386--0--0--w--15e87cee3a.novalocal-k8s-goldmane--58fd7646b9--st25r-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"9a052ab5-c109-40e4-93a9-186db270b9e9", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 30, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4386-0-0-w-15e87cee3a.novalocal", ContainerID:"c1dfc5e4eb78d0c5507afc9f39788dc476a090c20843d834affaa2e2dff3906a", Pod:"goldmane-58fd7646b9-st25r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.102.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0f51cf382e5", MAC:"c2:0b:ee:2e:75:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:31:06.890906 containerd[1547]: 2025-07-09 09:31:06.879 [INFO][4710] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c1dfc5e4eb78d0c5507afc9f39788dc476a090c20843d834affaa2e2dff3906a" Namespace="calico-system" Pod="goldmane-58fd7646b9-st25r" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-goldmane--58fd7646b9--st25r-eth0" Jul 9 09:31:06.992785 containerd[1547]: time="2025-07-09T09:31:06.992286846Z" level=info msg="connecting to shim c1dfc5e4eb78d0c5507afc9f39788dc476a090c20843d834affaa2e2dff3906a" address="unix:///run/containerd/s/25ea0f507f7ed91e8501b3fc45a0fe89255e44dd7a2ef605fad57b793e4ae42c" namespace=k8s.io protocol=ttrpc version=3 Jul 9 09:31:07.053924 systemd[1]: Started cri-containerd-c1dfc5e4eb78d0c5507afc9f39788dc476a090c20843d834affaa2e2dff3906a.scope - libcontainer container c1dfc5e4eb78d0c5507afc9f39788dc476a090c20843d834affaa2e2dff3906a. 
Jul 9 09:31:07.191367 containerd[1547]: time="2025-07-09T09:31:07.191315330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-st25r,Uid:9a052ab5-c109-40e4-93a9-186db270b9e9,Namespace:calico-system,Attempt:0,} returns sandbox id \"c1dfc5e4eb78d0c5507afc9f39788dc476a090c20843d834affaa2e2dff3906a\"" Jul 9 09:31:07.194675 containerd[1547]: time="2025-07-09T09:31:07.194107375Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:31:07.196126 containerd[1547]: time="2025-07-09T09:31:07.195769741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 9 09:31:07.197345 containerd[1547]: time="2025-07-09T09:31:07.197306452Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:31:07.207088 containerd[1547]: time="2025-07-09T09:31:07.207027904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:31:07.208370 containerd[1547]: time="2025-07-09T09:31:07.208335035Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 5.222797351s" Jul 9 09:31:07.208452 containerd[1547]: time="2025-07-09T09:31:07.208375862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 9 09:31:07.210786 containerd[1547]: time="2025-07-09T09:31:07.210750783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 9 09:31:07.214293 containerd[1547]: time="2025-07-09T09:31:07.214248650Z" level=info msg="CreateContainer within sandbox \"6ee1d3a8f2631809e644ef529c876b16598cb563940c960609a87212085b291a\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 9 09:31:07.233506 containerd[1547]: time="2025-07-09T09:31:07.231614803Z" level=info msg="Container 09f2c4e4743a992ea3fb77fa8163c62fb6aea98726f8196423c39bf871c71e3b: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:31:07.247328 containerd[1547]: time="2025-07-09T09:31:07.247205919Z" level=info msg="CreateContainer within sandbox \"6ee1d3a8f2631809e644ef529c876b16598cb563940c960609a87212085b291a\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"09f2c4e4743a992ea3fb77fa8163c62fb6aea98726f8196423c39bf871c71e3b\"" Jul 9 09:31:07.248186 containerd[1547]: time="2025-07-09T09:31:07.248095216Z" level=info msg="StartContainer for \"09f2c4e4743a992ea3fb77fa8163c62fb6aea98726f8196423c39bf871c71e3b\"" Jul 9 09:31:07.250335 containerd[1547]: time="2025-07-09T09:31:07.250238805Z" level=info msg="connecting to shim 09f2c4e4743a992ea3fb77fa8163c62fb6aea98726f8196423c39bf871c71e3b" address="unix:///run/containerd/s/1d54b06b42ba661e5f2ee19a891c89c73983407e7b34631dcf95aa9c731b0ab7" protocol=ttrpc version=3 Jul 9 09:31:07.281829 systemd[1]: Started cri-containerd-09f2c4e4743a992ea3fb77fa8163c62fb6aea98726f8196423c39bf871c71e3b.scope - libcontainer 
container 09f2c4e4743a992ea3fb77fa8163c62fb6aea98726f8196423c39bf871c71e3b. Jul 9 09:31:07.363479 containerd[1547]: time="2025-07-09T09:31:07.363398876Z" level=info msg="StartContainer for \"09f2c4e4743a992ea3fb77fa8163c62fb6aea98726f8196423c39bf871c71e3b\" returns successfully" Jul 9 09:31:07.545679 containerd[1547]: time="2025-07-09T09:31:07.545373073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d256k,Uid:82ad1ad0-e4e6-41c2-8105-2c99716c1c36,Namespace:kube-system,Attempt:0,}" Jul 9 09:31:07.547326 containerd[1547]: time="2025-07-09T09:31:07.547244311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff887dcfb-r29nf,Uid:c9dcd67d-8d2e-47ce-a04c-7e258ad570df,Namespace:calico-apiserver,Attempt:0,}" Jul 9 09:31:07.806170 systemd-networkd[1435]: cali27116618fbc: Link UP Jul 9 09:31:07.808134 systemd-networkd[1435]: cali27116618fbc: Gained carrier Jul 9 09:31:07.831283 containerd[1547]: 2025-07-09 09:31:07.680 [INFO][4826] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4386--0--0--w--15e87cee3a.novalocal-k8s-coredns--7c65d6cfc9--d256k-eth0 coredns-7c65d6cfc9- kube-system 82ad1ad0-e4e6-41c2-8105-2c99716c1c36 805 0 2025-07-09 09:30:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4386-0-0-w-15e87cee3a.novalocal coredns-7c65d6cfc9-d256k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali27116618fbc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c451238d4a6355401053a61dd8052e9eb5798384a8a8427fe155c75a3cf77d86" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d256k" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-coredns--7c65d6cfc9--d256k-" Jul 9 09:31:07.831283 containerd[1547]: 2025-07-09 09:31:07.680 [INFO][4826] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c451238d4a6355401053a61dd8052e9eb5798384a8a8427fe155c75a3cf77d86" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d256k" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-coredns--7c65d6cfc9--d256k-eth0" Jul 9 09:31:07.831283 containerd[1547]: 2025-07-09 09:31:07.742 [INFO][4860] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c451238d4a6355401053a61dd8052e9eb5798384a8a8427fe155c75a3cf77d86" HandleID="k8s-pod-network.c451238d4a6355401053a61dd8052e9eb5798384a8a8427fe155c75a3cf77d86" Workload="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-coredns--7c65d6cfc9--d256k-eth0" Jul 9 09:31:07.831283 containerd[1547]: 2025-07-09 09:31:07.742 [INFO][4860] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c451238d4a6355401053a61dd8052e9eb5798384a8a8427fe155c75a3cf77d86" HandleID="k8s-pod-network.c451238d4a6355401053a61dd8052e9eb5798384a8a8427fe155c75a3cf77d86" Workload="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-coredns--7c65d6cfc9--d256k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4386-0-0-w-15e87cee3a.novalocal", "pod":"coredns-7c65d6cfc9-d256k", "timestamp":"2025-07-09 09:31:07.74251972 +0000 UTC"}, Hostname:"ci-4386-0-0-w-15e87cee3a.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Jul 9 09:31:07.831283 containerd[1547]: 2025-07-09 09:31:07.742 [INFO][4860] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 09:31:07.831283 containerd[1547]: 2025-07-09 09:31:07.742 [INFO][4860] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 9 09:31:07.831283 containerd[1547]: 2025-07-09 09:31:07.742 [INFO][4860] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4386-0-0-w-15e87cee3a.novalocal' Jul 9 09:31:07.831283 containerd[1547]: 2025-07-09 09:31:07.753 [INFO][4860] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c451238d4a6355401053a61dd8052e9eb5798384a8a8427fe155c75a3cf77d86" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:07.831283 containerd[1547]: 2025-07-09 09:31:07.764 [INFO][4860] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:07.831283 containerd[1547]: 2025-07-09 09:31:07.770 [INFO][4860] ipam/ipam.go 511: Trying affinity for 192.168.102.64/26 host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:07.831283 containerd[1547]: 2025-07-09 09:31:07.775 [INFO][4860] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.64/26 host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:07.831283 containerd[1547]: 2025-07-09 09:31:07.778 [INFO][4860] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.64/26 host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:07.831283 containerd[1547]: 2025-07-09 09:31:07.778 [INFO][4860] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.102.64/26 handle="k8s-pod-network.c451238d4a6355401053a61dd8052e9eb5798384a8a8427fe155c75a3cf77d86" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:07.831283 containerd[1547]: 2025-07-09 09:31:07.780 [INFO][4860] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c451238d4a6355401053a61dd8052e9eb5798384a8a8427fe155c75a3cf77d86 Jul 9 09:31:07.831283 containerd[1547]: 2025-07-09 09:31:07.786 [INFO][4860] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.102.64/26 handle="k8s-pod-network.c451238d4a6355401053a61dd8052e9eb5798384a8a8427fe155c75a3cf77d86" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:07.831283 containerd[1547]: 2025-07-09 09:31:07.796 [INFO][4860] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.102.71/26] block=192.168.102.64/26 handle="k8s-pod-network.c451238d4a6355401053a61dd8052e9eb5798384a8a8427fe155c75a3cf77d86" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:07.831283 containerd[1547]: 2025-07-09 09:31:07.796 [INFO][4860] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.71/26] handle="k8s-pod-network.c451238d4a6355401053a61dd8052e9eb5798384a8a8427fe155c75a3cf77d86" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:07.831283 containerd[1547]: 2025-07-09 09:31:07.796 [INFO][4860] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 9 09:31:07.831283 containerd[1547]: 2025-07-09 09:31:07.796 [INFO][4860] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.71/26] IPv6=[] ContainerID="c451238d4a6355401053a61dd8052e9eb5798384a8a8427fe155c75a3cf77d86" HandleID="k8s-pod-network.c451238d4a6355401053a61dd8052e9eb5798384a8a8427fe155c75a3cf77d86" Workload="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-coredns--7c65d6cfc9--d256k-eth0" Jul 9 09:31:07.832661 containerd[1547]: 2025-07-09 09:31:07.800 [INFO][4826] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c451238d4a6355401053a61dd8052e9eb5798384a8a8427fe155c75a3cf77d86" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d256k" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-coredns--7c65d6cfc9--d256k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4386--0--0--w--15e87cee3a.novalocal-k8s-coredns--7c65d6cfc9--d256k-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"82ad1ad0-e4e6-41c2-8105-2c99716c1c36", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 30, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4386-0-0-w-15e87cee3a.novalocal", ContainerID:"", Pod:"coredns-7c65d6cfc9-d256k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.102.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali27116618fbc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:31:07.832661 containerd[1547]: 2025-07-09 09:31:07.801 [INFO][4826] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.71/32] ContainerID="c451238d4a6355401053a61dd8052e9eb5798384a8a8427fe155c75a3cf77d86" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d256k" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-coredns--7c65d6cfc9--d256k-eth0" Jul 9 09:31:07.832661 containerd[1547]: 2025-07-09 09:31:07.801 [INFO][4826] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali27116618fbc ContainerID="c451238d4a6355401053a61dd8052e9eb5798384a8a8427fe155c75a3cf77d86" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d256k" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-coredns--7c65d6cfc9--d256k-eth0" Jul 9 09:31:07.832661 containerd[1547]: 2025-07-09 09:31:07.808 [INFO][4826] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="c451238d4a6355401053a61dd8052e9eb5798384a8a8427fe155c75a3cf77d86" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d256k" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-coredns--7c65d6cfc9--d256k-eth0" Jul 9 09:31:07.832661 containerd[1547]: 2025-07-09 09:31:07.808 [INFO][4826] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c451238d4a6355401053a61dd8052e9eb5798384a8a8427fe155c75a3cf77d86" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d256k" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-coredns--7c65d6cfc9--d256k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4386--0--0--w--15e87cee3a.novalocal-k8s-coredns--7c65d6cfc9--d256k-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"82ad1ad0-e4e6-41c2-8105-2c99716c1c36", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 30, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4386-0-0-w-15e87cee3a.novalocal", ContainerID:"c451238d4a6355401053a61dd8052e9eb5798384a8a8427fe155c75a3cf77d86", Pod:"coredns-7c65d6cfc9-d256k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.102.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali27116618fbc", MAC:"8e:9d:dd:47:ef:d0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:31:07.832661 containerd[1547]: 2025-07-09 09:31:07.827 [INFO][4826] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c451238d4a6355401053a61dd8052e9eb5798384a8a8427fe155c75a3cf77d86" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d256k" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-coredns--7c65d6cfc9--d256k-eth0" Jul 9 09:31:07.886020 containerd[1547]: time="2025-07-09T09:31:07.885937521Z" level=info msg="connecting to shim c451238d4a6355401053a61dd8052e9eb5798384a8a8427fe155c75a3cf77d86" address="unix:///run/containerd/s/fefa49728e326d968db74164ac36dd645e562da8bb85f602378ad37ffaeedebb" namespace=k8s.io protocol=ttrpc version=3 Jul 9 09:31:07.939929 systemd[1]: Started cri-containerd-c451238d4a6355401053a61dd8052e9eb5798384a8a8427fe155c75a3cf77d86.scope - libcontainer container c451238d4a6355401053a61dd8052e9eb5798384a8a8427fe155c75a3cf77d86. 
Jul 9 09:31:07.963169 systemd-networkd[1435]: calicefb9165c83: Link UP Jul 9 09:31:07.964481 systemd-networkd[1435]: calicefb9165c83: Gained carrier Jul 9 09:31:07.989867 systemd-networkd[1435]: cali0f51cf382e5: Gained IPv6LL Jul 9 09:31:07.995679 containerd[1547]: 2025-07-09 09:31:07.685 [INFO][4829] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--apiserver--ff887dcfb--r29nf-eth0 calico-apiserver-ff887dcfb- calico-apiserver c9dcd67d-8d2e-47ce-a04c-7e258ad570df 816 0 2025-07-09 09:30:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:ff887dcfb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4386-0-0-w-15e87cee3a.novalocal calico-apiserver-ff887dcfb-r29nf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicefb9165c83 [] [] }} ContainerID="87c2bc0378d64214832bdd68c4f85c4257220ac61220e5846d19e3bd60421ca7" Namespace="calico-apiserver" Pod="calico-apiserver-ff887dcfb-r29nf" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--apiserver--ff887dcfb--r29nf-" Jul 9 09:31:07.995679 containerd[1547]: 2025-07-09 09:31:07.688 [INFO][4829] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="87c2bc0378d64214832bdd68c4f85c4257220ac61220e5846d19e3bd60421ca7" Namespace="calico-apiserver" Pod="calico-apiserver-ff887dcfb-r29nf" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--apiserver--ff887dcfb--r29nf-eth0" Jul 9 09:31:07.995679 containerd[1547]: 2025-07-09 09:31:07.744 [INFO][4858] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="87c2bc0378d64214832bdd68c4f85c4257220ac61220e5846d19e3bd60421ca7" HandleID="k8s-pod-network.87c2bc0378d64214832bdd68c4f85c4257220ac61220e5846d19e3bd60421ca7" Workload="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--apiserver--ff887dcfb--r29nf-eth0" Jul 9 09:31:07.995679 containerd[1547]: 2025-07-09 09:31:07.744 [INFO][4858] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="87c2bc0378d64214832bdd68c4f85c4257220ac61220e5846d19e3bd60421ca7" HandleID="k8s-pod-network.87c2bc0378d64214832bdd68c4f85c4257220ac61220e5846d19e3bd60421ca7" Workload="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--apiserver--ff887dcfb--r29nf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039bcb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4386-0-0-w-15e87cee3a.novalocal", "pod":"calico-apiserver-ff887dcfb-r29nf", "timestamp":"2025-07-09 09:31:07.744254182 +0000 UTC"}, Hostname:"ci-4386-0-0-w-15e87cee3a.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 9 09:31:07.995679 containerd[1547]: 2025-07-09 09:31:07.744 [INFO][4858] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 9 09:31:07.995679 containerd[1547]: 2025-07-09 09:31:07.796 [INFO][4858] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 9 09:31:07.995679 containerd[1547]: 2025-07-09 09:31:07.797 [INFO][4858] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4386-0-0-w-15e87cee3a.novalocal' Jul 9 09:31:07.995679 containerd[1547]: 2025-07-09 09:31:07.856 [INFO][4858] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.87c2bc0378d64214832bdd68c4f85c4257220ac61220e5846d19e3bd60421ca7" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:07.995679 containerd[1547]: 2025-07-09 09:31:07.879 [INFO][4858] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:07.995679 containerd[1547]: 2025-07-09 09:31:07.893 [INFO][4858] ipam/ipam.go 511: Trying affinity for 192.168.102.64/26 host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:07.995679 containerd[1547]: 2025-07-09 09:31:07.899 [INFO][4858] ipam/ipam.go 158: Attempting to load block cidr=192.168.102.64/26 host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:07.995679 containerd[1547]: 2025-07-09 09:31:07.908 [INFO][4858] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.102.64/26 host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:07.995679 containerd[1547]: 2025-07-09 09:31:07.908 [INFO][4858] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.102.64/26 handle="k8s-pod-network.87c2bc0378d64214832bdd68c4f85c4257220ac61220e5846d19e3bd60421ca7" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:07.995679 containerd[1547]: 2025-07-09 09:31:07.911 [INFO][4858] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.87c2bc0378d64214832bdd68c4f85c4257220ac61220e5846d19e3bd60421ca7 Jul 9 09:31:07.995679 containerd[1547]: 2025-07-09 09:31:07.921 [INFO][4858] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.102.64/26 handle="k8s-pod-network.87c2bc0378d64214832bdd68c4f85c4257220ac61220e5846d19e3bd60421ca7" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:07.995679 containerd[1547]: 2025-07-09 09:31:07.952 [INFO][4858] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.102.72/26] block=192.168.102.64/26 handle="k8s-pod-network.87c2bc0378d64214832bdd68c4f85c4257220ac61220e5846d19e3bd60421ca7" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:07.995679 containerd[1547]: 2025-07-09 09:31:07.952 [INFO][4858] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.102.72/26] handle="k8s-pod-network.87c2bc0378d64214832bdd68c4f85c4257220ac61220e5846d19e3bd60421ca7" host="ci-4386-0-0-w-15e87cee3a.novalocal" Jul 9 09:31:07.995679 containerd[1547]: 2025-07-09 09:31:07.952 [INFO][4858] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 9 09:31:07.995679 containerd[1547]: 2025-07-09 09:31:07.952 [INFO][4858] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.102.72/26] IPv6=[] ContainerID="87c2bc0378d64214832bdd68c4f85c4257220ac61220e5846d19e3bd60421ca7" HandleID="k8s-pod-network.87c2bc0378d64214832bdd68c4f85c4257220ac61220e5846d19e3bd60421ca7" Workload="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--apiserver--ff887dcfb--r29nf-eth0" Jul 9 09:31:07.998146 containerd[1547]: 2025-07-09 09:31:07.958 [INFO][4829] cni-plugin/k8s.go 418: Populated endpoint ContainerID="87c2bc0378d64214832bdd68c4f85c4257220ac61220e5846d19e3bd60421ca7" Namespace="calico-apiserver" Pod="calico-apiserver-ff887dcfb-r29nf" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--apiserver--ff887dcfb--r29nf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--apiserver--ff887dcfb--r29nf-eth0", GenerateName:"calico-apiserver-ff887dcfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"c9dcd67d-8d2e-47ce-a04c-7e258ad570df", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 30, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ff887dcfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4386-0-0-w-15e87cee3a.novalocal", ContainerID:"", Pod:"calico-apiserver-ff887dcfb-r29nf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicefb9165c83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:31:07.998146 containerd[1547]: 2025-07-09 09:31:07.958 [INFO][4829] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.102.72/32] ContainerID="87c2bc0378d64214832bdd68c4f85c4257220ac61220e5846d19e3bd60421ca7" Namespace="calico-apiserver" Pod="calico-apiserver-ff887dcfb-r29nf" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--apiserver--ff887dcfb--r29nf-eth0" Jul 9 09:31:07.998146 containerd[1547]: 2025-07-09 09:31:07.958 [INFO][4829] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicefb9165c83 ContainerID="87c2bc0378d64214832bdd68c4f85c4257220ac61220e5846d19e3bd60421ca7" Namespace="calico-apiserver" Pod="calico-apiserver-ff887dcfb-r29nf" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--apiserver--ff887dcfb--r29nf-eth0" Jul 9 09:31:07.998146 containerd[1547]: 2025-07-09 09:31:07.965 [INFO][4829] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="87c2bc0378d64214832bdd68c4f85c4257220ac61220e5846d19e3bd60421ca7" Namespace="calico-apiserver" Pod="calico-apiserver-ff887dcfb-r29nf" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--apiserver--ff887dcfb--r29nf-eth0" Jul 9 
09:31:07.998146 containerd[1547]: 2025-07-09 09:31:07.965 [INFO][4829] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="87c2bc0378d64214832bdd68c4f85c4257220ac61220e5846d19e3bd60421ca7" Namespace="calico-apiserver" Pod="calico-apiserver-ff887dcfb-r29nf" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--apiserver--ff887dcfb--r29nf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--apiserver--ff887dcfb--r29nf-eth0", GenerateName:"calico-apiserver-ff887dcfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"c9dcd67d-8d2e-47ce-a04c-7e258ad570df", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.July, 9, 9, 30, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ff887dcfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4386-0-0-w-15e87cee3a.novalocal", ContainerID:"87c2bc0378d64214832bdd68c4f85c4257220ac61220e5846d19e3bd60421ca7", Pod:"calico-apiserver-ff887dcfb-r29nf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.102.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicefb9165c83", MAC:"ca:ce:e4:04:80:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 9 09:31:07.998146 containerd[1547]: 2025-07-09 09:31:07.991 [INFO][4829] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="87c2bc0378d64214832bdd68c4f85c4257220ac61220e5846d19e3bd60421ca7" Namespace="calico-apiserver" Pod="calico-apiserver-ff887dcfb-r29nf" WorkloadEndpoint="ci--4386--0--0--w--15e87cee3a.novalocal-k8s-calico--apiserver--ff887dcfb--r29nf-eth0" Jul 9 09:31:08.048332 containerd[1547]: time="2025-07-09T09:31:08.048216158Z" level=info msg="connecting to shim 87c2bc0378d64214832bdd68c4f85c4257220ac61220e5846d19e3bd60421ca7" address="unix:///run/containerd/s/5b9ec05d3eb2005df180f2c0e8fa5cb91225b36202bfe2fd2c78e971975bcdf1" namespace=k8s.io protocol=ttrpc version=3 Jul 9 09:31:08.062672 containerd[1547]: time="2025-07-09T09:31:08.062510745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d256k,Uid:82ad1ad0-e4e6-41c2-8105-2c99716c1c36,Namespace:kube-system,Attempt:0,} returns sandbox id \"c451238d4a6355401053a61dd8052e9eb5798384a8a8427fe155c75a3cf77d86\"" Jul 9 09:31:08.071877 containerd[1547]: time="2025-07-09T09:31:08.071697325Z" level=info msg="CreateContainer within sandbox \"c451238d4a6355401053a61dd8052e9eb5798384a8a8427fe155c75a3cf77d86\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 9 09:31:08.095704 containerd[1547]: time="2025-07-09T09:31:08.094955033Z" level=info msg="Container 852c2b696d5f5fff94b9ef734dc883029b61b96e7c42794d051ad994fa4bca7a: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:31:08.098139 
systemd[1]: Started cri-containerd-87c2bc0378d64214832bdd68c4f85c4257220ac61220e5846d19e3bd60421ca7.scope - libcontainer container 87c2bc0378d64214832bdd68c4f85c4257220ac61220e5846d19e3bd60421ca7. Jul 9 09:31:08.110571 containerd[1547]: time="2025-07-09T09:31:08.110514720Z" level=info msg="CreateContainer within sandbox \"c451238d4a6355401053a61dd8052e9eb5798384a8a8427fe155c75a3cf77d86\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"852c2b696d5f5fff94b9ef734dc883029b61b96e7c42794d051ad994fa4bca7a\"" Jul 9 09:31:08.112203 containerd[1547]: time="2025-07-09T09:31:08.112170173Z" level=info msg="StartContainer for \"852c2b696d5f5fff94b9ef734dc883029b61b96e7c42794d051ad994fa4bca7a\"" Jul 9 09:31:08.115071 containerd[1547]: time="2025-07-09T09:31:08.114970974Z" level=info msg="connecting to shim 852c2b696d5f5fff94b9ef734dc883029b61b96e7c42794d051ad994fa4bca7a" address="unix:///run/containerd/s/fefa49728e326d968db74164ac36dd645e562da8bb85f602378ad37ffaeedebb" protocol=ttrpc version=3 Jul 9 09:31:08.155004 systemd[1]: Started cri-containerd-852c2b696d5f5fff94b9ef734dc883029b61b96e7c42794d051ad994fa4bca7a.scope - libcontainer container 852c2b696d5f5fff94b9ef734dc883029b61b96e7c42794d051ad994fa4bca7a. Jul 9 09:31:08.172857 containerd[1547]: time="2025-07-09T09:31:08.172802191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ff887dcfb-r29nf,Uid:c9dcd67d-8d2e-47ce-a04c-7e258ad570df,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"87c2bc0378d64214832bdd68c4f85c4257220ac61220e5846d19e3bd60421ca7\"" Jul 9 09:31:08.179711 containerd[1547]: time="2025-07-09T09:31:08.179605244Z" level=info msg="CreateContainer within sandbox \"87c2bc0378d64214832bdd68c4f85c4257220ac61220e5846d19e3bd60421ca7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 9 09:31:08.209646 containerd[1547]: time="2025-07-09T09:31:08.208711625Z" level=info msg="Container 184f666117730963a8419ecfc59c7a6bd2fbb3d2af76fd334751efe8dcc3da3b: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:31:08.224038 containerd[1547]: time="2025-07-09T09:31:08.223977422Z" level=info msg="CreateContainer within sandbox \"87c2bc0378d64214832bdd68c4f85c4257220ac61220e5846d19e3bd60421ca7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"184f666117730963a8419ecfc59c7a6bd2fbb3d2af76fd334751efe8dcc3da3b\"" Jul 9 09:31:08.229876 containerd[1547]: time="2025-07-09T09:31:08.226953500Z" level=info msg="StartContainer for \"184f666117730963a8419ecfc59c7a6bd2fbb3d2af76fd334751efe8dcc3da3b\"" Jul 9 09:31:08.233463 containerd[1547]: time="2025-07-09T09:31:08.233322319Z" level=info msg="connecting to shim 184f666117730963a8419ecfc59c7a6bd2fbb3d2af76fd334751efe8dcc3da3b" address="unix:///run/containerd/s/5b9ec05d3eb2005df180f2c0e8fa5cb91225b36202bfe2fd2c78e971975bcdf1" protocol=ttrpc version=3 Jul 9 09:31:08.237588 containerd[1547]: time="2025-07-09T09:31:08.237531700Z" level=info msg="StartContainer for \"852c2b696d5f5fff94b9ef734dc883029b61b96e7c42794d051ad994fa4bca7a\" returns successfully" Jul 9 09:31:08.271036 systemd[1]: Started cri-containerd-184f666117730963a8419ecfc59c7a6bd2fbb3d2af76fd334751efe8dcc3da3b.scope - libcontainer container 184f666117730963a8419ecfc59c7a6bd2fbb3d2af76fd334751efe8dcc3da3b. 
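Every pod in this section goes through the same containerd sequence: "connecting to shim <id>" over a unix socket with ttrpc, systemd starting a transient cri-containerd-<id>.scope, "RunPodSandbox … returns sandbox id", then "CreateContainer within sandbox" and "StartContainer … returns successfully". The sketch below pulls those milestones out of journal text shaped like the entries above; the regexes are written only against the message strings quoted in this excerpt and may not cover other containerd log formats.

```python
# Sketch: extract the sandbox/container lifecycle milestones that repeat in the
# containerd entries above (connecting to shim -> RunPodSandbox returns ->
# CreateContainer returns -> StartContainer returns). Regexes follow the exact
# message strings visible in this log excerpt, including the \" escaping.
import re

PATTERNS = {
    "shim":      re.compile(r'msg="connecting to shim (?P<id>[0-9a-f]{64})"'),
    "sandbox":   re.compile(r'returns sandbox id \\"(?P<id>[0-9a-f]{64})\\"'),
    "container": re.compile(r'returns container id \\"(?P<id>[0-9a-f]{64})\\"'),
    "started":   re.compile(r'msg="StartContainer for \\"(?P<id>[0-9a-f]{64})\\" returns successfully"'),
}

def milestones(journal_text: str):
    """Return (event, 64-hex id) pairs sorted by where they occur in the text."""
    hits = []
    for event, pattern in PATTERNS.items():
        for match in pattern.finditer(journal_text):
            hits.append((match.start(), event, match.group("id")))
    return [(event, cid) for _, event, cid in sorted(hits)]

# Usage idea: feed this section's raw journal text and confirm that every
# sandbox id that appears in "connecting to shim" also reaches "started".
```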
Jul 9 09:31:08.360778 containerd[1547]: time="2025-07-09T09:31:08.360079401Z" level=info msg="StartContainer for \"184f666117730963a8419ecfc59c7a6bd2fbb3d2af76fd334751efe8dcc3da3b\" returns successfully" Jul 9 09:31:09.078013 systemd-networkd[1435]: cali27116618fbc: Gained IPv6LL Jul 9 09:31:09.089262 kubelet[2827]: I0709 09:31:09.089026 2827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-d256k" podStartSLOduration=60.088487758 podStartE2EDuration="1m0.088487758s" podCreationTimestamp="2025-07-09 09:30:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 09:31:09.086855177 +0000 UTC m=+64.758298216" watchObservedRunningTime="2025-07-09 09:31:09.088487758 +0000 UTC m=+64.759930797" Jul 9 09:31:09.090051 kubelet[2827]: I0709 09:31:09.089804 2827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-ff887dcfb-r29nf" podStartSLOduration=48.089794007 podStartE2EDuration="48.089794007s" podCreationTimestamp="2025-07-09 09:30:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 09:31:09.051195061 +0000 UTC m=+64.722638160" watchObservedRunningTime="2025-07-09 09:31:09.089794007 +0000 UTC m=+64.761237056" Jul 9 09:31:09.143125 systemd-networkd[1435]: calicefb9165c83: Gained IPv6LL Jul 9 09:31:10.017174 kubelet[2827]: I0709 09:31:10.016923 2827 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 09:31:15.551320 containerd[1547]: time="2025-07-09T09:31:15.551094183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:31:15.554033 containerd[1547]: time="2025-07-09T09:31:15.553954155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 9 09:31:15.555582 containerd[1547]: time="2025-07-09T09:31:15.555519989Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:31:15.560041 containerd[1547]: time="2025-07-09T09:31:15.559878942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:31:15.560405 containerd[1547]: time="2025-07-09T09:31:15.560350796Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 8.348653398s" Jul 9 09:31:15.560762 containerd[1547]: time="2025-07-09T09:31:15.560597959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 9 09:31:15.562935 containerd[1547]: time="2025-07-09T09:31:15.562766094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 9 09:31:15.568490 containerd[1547]: time="2025-07-09T09:31:15.568307642Z" level=info msg="CreateContainer within sandbox 
\"98d93197c1ace38f3c5b027bc19d992cf1fb2ff273a2216379234756ccb6ca83\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 9 09:31:15.588125 containerd[1547]: time="2025-07-09T09:31:15.587987315Z" level=info msg="Container 68b4c03d284f1a7ca3c32c3bd3ab2cd38d41e395f05786ba24c58d38ff406dce: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:31:15.599852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1166973065.mount: Deactivated successfully. Jul 9 09:31:15.617713 containerd[1547]: time="2025-07-09T09:31:15.617605067Z" level=info msg="CreateContainer within sandbox \"98d93197c1ace38f3c5b027bc19d992cf1fb2ff273a2216379234756ccb6ca83\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"68b4c03d284f1a7ca3c32c3bd3ab2cd38d41e395f05786ba24c58d38ff406dce\"" Jul 9 09:31:15.618782 containerd[1547]: time="2025-07-09T09:31:15.618612856Z" level=info msg="StartContainer for \"68b4c03d284f1a7ca3c32c3bd3ab2cd38d41e395f05786ba24c58d38ff406dce\"" Jul 9 09:31:15.625976 containerd[1547]: time="2025-07-09T09:31:15.625897843Z" level=info msg="connecting to shim 68b4c03d284f1a7ca3c32c3bd3ab2cd38d41e395f05786ba24c58d38ff406dce" address="unix:///run/containerd/s/9f64c8a9a922c007a7fe15cf0f55d516e164e50bba84012e6e7544bbf5cd65f5" protocol=ttrpc version=3 Jul 9 09:31:15.687850 systemd[1]: Started cri-containerd-68b4c03d284f1a7ca3c32c3bd3ab2cd38d41e395f05786ba24c58d38ff406dce.scope - libcontainer container 68b4c03d284f1a7ca3c32c3bd3ab2cd38d41e395f05786ba24c58d38ff406dce. Jul 9 09:31:15.756466 containerd[1547]: time="2025-07-09T09:31:15.756405239Z" level=info msg="StartContainer for \"68b4c03d284f1a7ca3c32c3bd3ab2cd38d41e395f05786ba24c58d38ff406dce\" returns successfully" Jul 9 09:31:22.421046 containerd[1547]: time="2025-07-09T09:31:22.420968610Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf575ddfb5b9c9904d4d1aa0816793b98c5f06055dcb078af420b7a7af1e6ba0\" id:\"f30d3f746e688d66e00329a813fc954bd85078dfffe0f747d3adbeb29128c685\" pid:5115 exited_at:{seconds:1752053482 nanos:420390667}" Jul 9 09:31:22.475268 kubelet[2827]: I0709 09:31:22.475139 2827 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 09:31:22.493187 kubelet[2827]: I0709 09:31:22.492470 2827 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 9 09:31:31.121487 containerd[1547]: time="2025-07-09T09:31:31.120251963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:31:31.121487 containerd[1547]: time="2025-07-09T09:31:31.121119379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 9 09:31:31.123220 containerd[1547]: time="2025-07-09T09:31:31.123186855Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:31:31.132226 containerd[1547]: time="2025-07-09T09:31:31.132145711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:31:31.133813 containerd[1547]: time="2025-07-09T09:31:31.133769827Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id 
\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 15.570939642s" Jul 9 09:31:31.133953 containerd[1547]: time="2025-07-09T09:31:31.133932911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 9 09:31:31.142086 containerd[1547]: time="2025-07-09T09:31:31.141731122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 9 09:31:31.165668 containerd[1547]: time="2025-07-09T09:31:31.165124771Z" level=info msg="CreateContainer within sandbox \"68aa885ea9d06af29b5e90d91152d305fee53c4402f952a573ef48267aae0d2d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 9 09:31:31.187880 containerd[1547]: time="2025-07-09T09:31:31.187828826Z" level=info msg="Container 15bdacf558c90eac5cf09345a52752a283ef62fc3e71b0f8acaee3cb3b5dcc2c: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:31:31.213311 containerd[1547]: time="2025-07-09T09:31:31.211530302Z" level=info msg="CreateContainer within sandbox \"68aa885ea9d06af29b5e90d91152d305fee53c4402f952a573ef48267aae0d2d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"15bdacf558c90eac5cf09345a52752a283ef62fc3e71b0f8acaee3cb3b5dcc2c\"" Jul 9 09:31:31.214957 containerd[1547]: time="2025-07-09T09:31:31.214912403Z" level=info msg="StartContainer for \"15bdacf558c90eac5cf09345a52752a283ef62fc3e71b0f8acaee3cb3b5dcc2c\"" Jul 9 09:31:31.218186 containerd[1547]: time="2025-07-09T09:31:31.218070463Z" level=info msg="connecting to shim 15bdacf558c90eac5cf09345a52752a283ef62fc3e71b0f8acaee3cb3b5dcc2c" address="unix:///run/containerd/s/963451b63c94f6eeaace71f4027495236ecfbde0883343b4a98fdd96efeb1715" protocol=ttrpc version=3 Jul 9 09:31:31.282933 systemd[1]: Started cri-containerd-15bdacf558c90eac5cf09345a52752a283ef62fc3e71b0f8acaee3cb3b5dcc2c.scope - libcontainer container 15bdacf558c90eac5cf09345a52752a283ef62fc3e71b0f8acaee3cb3b5dcc2c. 
Jul 9 09:31:31.469808 containerd[1547]: time="2025-07-09T09:31:31.469470811Z" level=info msg="StartContainer for \"15bdacf558c90eac5cf09345a52752a283ef62fc3e71b0f8acaee3cb3b5dcc2c\" returns successfully" Jul 9 09:31:32.201303 kubelet[2827]: I0709 09:31:32.199879 2827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-78df7cdc87-6h4ss" podStartSLOduration=33.090672932 podStartE2EDuration="1m7.199822271s" podCreationTimestamp="2025-07-09 09:30:25 +0000 UTC" firstStartedPulling="2025-07-09 09:30:57.028470385 +0000 UTC m=+52.699913424" lastFinishedPulling="2025-07-09 09:31:31.137619714 +0000 UTC m=+86.809062763" observedRunningTime="2025-07-09 09:31:32.189424798 +0000 UTC m=+87.860867867" watchObservedRunningTime="2025-07-09 09:31:32.199822271 +0000 UTC m=+87.871265310" Jul 9 09:31:32.254001 containerd[1547]: time="2025-07-09T09:31:32.253199964Z" level=info msg="TaskExit event in podsandbox handler container_id:\"15bdacf558c90eac5cf09345a52752a283ef62fc3e71b0f8acaee3cb3b5dcc2c\" id:\"b91abfcbc01eb4eb445c2d21a4d590445c55fef5c9b0133640565eb0f6f27d76\" pid:5198 exited_at:{seconds:1752053492 nanos:252811085}" Jul 9 09:31:40.554167 containerd[1547]: time="2025-07-09T09:31:40.553696654Z" level=info msg="TaskExit event in podsandbox handler container_id:\"15bdacf558c90eac5cf09345a52752a283ef62fc3e71b0f8acaee3cb3b5dcc2c\" id:\"4bd4e6cfa9fde2977a3332a4e84f775718279ae8ddc1c9ac115aed83550fd9a8\" pid:5235 exited_at:{seconds:1752053500 nanos:553242642}" Jul 9 09:31:41.072538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount962728728.mount: Deactivated successfully. Jul 9 09:31:42.367377 containerd[1547]: time="2025-07-09T09:31:42.367256409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:31:42.370063 containerd[1547]: time="2025-07-09T09:31:42.369945941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 9 09:31:42.371133 containerd[1547]: time="2025-07-09T09:31:42.370994196Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:31:42.375971 containerd[1547]: time="2025-07-09T09:31:42.375920612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:31:42.376901 containerd[1547]: time="2025-07-09T09:31:42.376867809Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 11.235073298s" Jul 9 09:31:42.377037 containerd[1547]: time="2025-07-09T09:31:42.376920077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 9 09:31:42.380581 containerd[1547]: time="2025-07-09T09:31:42.380551835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 9 09:31:42.382508 containerd[1547]: time="2025-07-09T09:31:42.382474069Z" level=info 
msg="CreateContainer within sandbox \"c1dfc5e4eb78d0c5507afc9f39788dc476a090c20843d834affaa2e2dff3906a\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 9 09:31:42.409385 containerd[1547]: time="2025-07-09T09:31:42.408647011Z" level=info msg="Container fa81f3cdc629d42ecb48b8f6757e60a46a90fe57dee6b4498a3cef993fb4d39a: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:31:42.438536 containerd[1547]: time="2025-07-09T09:31:42.438082559Z" level=info msg="CreateContainer within sandbox \"c1dfc5e4eb78d0c5507afc9f39788dc476a090c20843d834affaa2e2dff3906a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"fa81f3cdc629d42ecb48b8f6757e60a46a90fe57dee6b4498a3cef993fb4d39a\"" Jul 9 09:31:42.441419 containerd[1547]: time="2025-07-09T09:31:42.441376335Z" level=info msg="StartContainer for \"fa81f3cdc629d42ecb48b8f6757e60a46a90fe57dee6b4498a3cef993fb4d39a\"" Jul 9 09:31:42.444555 containerd[1547]: time="2025-07-09T09:31:42.444221919Z" level=info msg="connecting to shim fa81f3cdc629d42ecb48b8f6757e60a46a90fe57dee6b4498a3cef993fb4d39a" address="unix:///run/containerd/s/25ea0f507f7ed91e8501b3fc45a0fe89255e44dd7a2ef605fad57b793e4ae42c" protocol=ttrpc version=3 Jul 9 09:31:42.503037 systemd[1]: Started cri-containerd-fa81f3cdc629d42ecb48b8f6757e60a46a90fe57dee6b4498a3cef993fb4d39a.scope - libcontainer container fa81f3cdc629d42ecb48b8f6757e60a46a90fe57dee6b4498a3cef993fb4d39a. Jul 9 09:31:42.615905 containerd[1547]: time="2025-07-09T09:31:42.615828851Z" level=info msg="StartContainer for \"fa81f3cdc629d42ecb48b8f6757e60a46a90fe57dee6b4498a3cef993fb4d39a\" returns successfully" Jul 9 09:31:43.399455 containerd[1547]: time="2025-07-09T09:31:43.399385558Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa81f3cdc629d42ecb48b8f6757e60a46a90fe57dee6b4498a3cef993fb4d39a\" id:\"61c7acd1b9318209f0c2e90e94fad61831d4843a42b88752f09ef14814d30026\" pid:5299 exit_status:1 exited_at:{seconds:1752053503 nanos:398834624}" Jul 9 09:31:44.400510 containerd[1547]: time="2025-07-09T09:31:44.400347217Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa81f3cdc629d42ecb48b8f6757e60a46a90fe57dee6b4498a3cef993fb4d39a\" id:\"304a2c68e714c2db2bf5b2c69d6804a20722a68355feb9d60fbed5df96f960cf\" pid:5325 exit_status:1 exited_at:{seconds:1752053504 nanos:399921569}" Jul 9 09:31:49.747971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3186315105.mount: Deactivated successfully. 
Jul 9 09:31:49.784793 containerd[1547]: time="2025-07-09T09:31:49.784580046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:31:49.787027 containerd[1547]: time="2025-07-09T09:31:49.786974736Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 9 09:31:49.789449 containerd[1547]: time="2025-07-09T09:31:49.789377470Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:31:49.793351 containerd[1547]: time="2025-07-09T09:31:49.792966780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:31:49.794757 containerd[1547]: time="2025-07-09T09:31:49.794704428Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 7.414116785s" Jul 9 09:31:49.794940 containerd[1547]: time="2025-07-09T09:31:49.794904643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 9 09:31:49.799496 containerd[1547]: time="2025-07-09T09:31:49.799233831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 9 09:31:49.801242 containerd[1547]: time="2025-07-09T09:31:49.800490266Z" level=info msg="CreateContainer within sandbox \"6ee1d3a8f2631809e644ef529c876b16598cb563940c960609a87212085b291a\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 9 09:31:49.817893 containerd[1547]: time="2025-07-09T09:31:49.817827212Z" level=info msg="Container 5f0982b76fab09f85700b94cf14ac192a801eb4b00dc105b9c8c6288bdee0ee3: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:31:49.842996 containerd[1547]: time="2025-07-09T09:31:49.842911843Z" level=info msg="CreateContainer within sandbox \"6ee1d3a8f2631809e644ef529c876b16598cb563940c960609a87212085b291a\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"5f0982b76fab09f85700b94cf14ac192a801eb4b00dc105b9c8c6288bdee0ee3\"" Jul 9 09:31:49.844450 containerd[1547]: time="2025-07-09T09:31:49.844359347Z" level=info msg="StartContainer for \"5f0982b76fab09f85700b94cf14ac192a801eb4b00dc105b9c8c6288bdee0ee3\"" Jul 9 09:31:49.846398 containerd[1547]: time="2025-07-09T09:31:49.846343016Z" level=info msg="connecting to shim 5f0982b76fab09f85700b94cf14ac192a801eb4b00dc105b9c8c6288bdee0ee3" address="unix:///run/containerd/s/1d54b06b42ba661e5f2ee19a891c89c73983407e7b34631dcf95aa9c731b0ab7" protocol=ttrpc version=3 Jul 9 09:31:49.892946 systemd[1]: Started cri-containerd-5f0982b76fab09f85700b94cf14ac192a801eb4b00dc105b9c8c6288bdee0ee3.scope - libcontainer container 5f0982b76fab09f85700b94cf14ac192a801eb4b00dc105b9c8c6288bdee0ee3. 
Jul 9 09:31:49.999600 containerd[1547]: time="2025-07-09T09:31:49.999350542Z" level=info msg="StartContainer for \"5f0982b76fab09f85700b94cf14ac192a801eb4b00dc105b9c8c6288bdee0ee3\" returns successfully" Jul 9 09:31:50.325825 kubelet[2827]: I0709 09:31:50.325489 2827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-st25r" podStartSLOduration=51.141486281 podStartE2EDuration="1m26.325316331s" podCreationTimestamp="2025-07-09 09:30:24 +0000 UTC" firstStartedPulling="2025-07-09 09:31:07.195553024 +0000 UTC m=+62.866996063" lastFinishedPulling="2025-07-09 09:31:42.379383074 +0000 UTC m=+98.050826113" observedRunningTime="2025-07-09 09:31:43.312872345 +0000 UTC m=+98.984315384" watchObservedRunningTime="2025-07-09 09:31:50.325316331 +0000 UTC m=+105.996759380" Jul 9 09:31:50.328303 kubelet[2827]: I0709 09:31:50.326184 2827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-785b4d6cbc-7mjrn" podStartSLOduration=2.466168606 podStartE2EDuration="55.32617499s" podCreationTimestamp="2025-07-09 09:30:55 +0000 UTC" firstStartedPulling="2025-07-09 09:30:56.937726903 +0000 UTC m=+52.609169942" lastFinishedPulling="2025-07-09 09:31:49.797733276 +0000 UTC m=+105.469176326" observedRunningTime="2025-07-09 09:31:50.32397224 +0000 UTC m=+105.995415289" watchObservedRunningTime="2025-07-09 09:31:50.32617499 +0000 UTC m=+105.997618029" Jul 9 09:31:52.394343 containerd[1547]: time="2025-07-09T09:31:52.394264094Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf575ddfb5b9c9904d4d1aa0816793b98c5f06055dcb078af420b7a7af1e6ba0\" id:\"054c708dde4b34ae135eefd6b89b697ff2b1aa48ac8a19fbc0264a7dcf085bd0\" pid:5391 exited_at:{seconds:1752053512 nanos:391680440}" Jul 9 09:31:53.942464 containerd[1547]: time="2025-07-09T09:31:53.942375822Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:31:53.945730 containerd[1547]: time="2025-07-09T09:31:53.945676701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 9 09:31:53.946665 containerd[1547]: time="2025-07-09T09:31:53.946598799Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:31:53.953775 containerd[1547]: time="2025-07-09T09:31:53.953691558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:31:53.954292 containerd[1547]: time="2025-07-09T09:31:53.954233444Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 4.154951263s" Jul 9 09:31:53.954292 containerd[1547]: time="2025-07-09T09:31:53.954285702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 9 09:31:53.958951 
containerd[1547]: time="2025-07-09T09:31:53.958653801Z" level=info msg="CreateContainer within sandbox \"98d93197c1ace38f3c5b027bc19d992cf1fb2ff273a2216379234756ccb6ca83\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 9 09:31:53.976861 containerd[1547]: time="2025-07-09T09:31:53.976808220Z" level=info msg="Container eb7b47bd528dca961a69800a68441fd667db5643d5f23709fa8d997aaea41d71: CDI devices from CRI Config.CDIDevices: []" Jul 9 09:31:54.008124 containerd[1547]: time="2025-07-09T09:31:54.007973382Z" level=info msg="CreateContainer within sandbox \"98d93197c1ace38f3c5b027bc19d992cf1fb2ff273a2216379234756ccb6ca83\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"eb7b47bd528dca961a69800a68441fd667db5643d5f23709fa8d997aaea41d71\"" Jul 9 09:31:54.009663 containerd[1547]: time="2025-07-09T09:31:54.008973097Z" level=info msg="StartContainer for \"eb7b47bd528dca961a69800a68441fd667db5643d5f23709fa8d997aaea41d71\"" Jul 9 09:31:54.012828 containerd[1547]: time="2025-07-09T09:31:54.012795063Z" level=info msg="connecting to shim eb7b47bd528dca961a69800a68441fd667db5643d5f23709fa8d997aaea41d71" address="unix:///run/containerd/s/9f64c8a9a922c007a7fe15cf0f55d516e164e50bba84012e6e7544bbf5cd65f5" protocol=ttrpc version=3 Jul 9 09:31:54.067072 systemd[1]: Started cri-containerd-eb7b47bd528dca961a69800a68441fd667db5643d5f23709fa8d997aaea41d71.scope - libcontainer container eb7b47bd528dca961a69800a68441fd667db5643d5f23709fa8d997aaea41d71. Jul 9 09:31:54.207496 containerd[1547]: time="2025-07-09T09:31:54.207331065Z" level=info msg="StartContainer for \"eb7b47bd528dca961a69800a68441fd667db5643d5f23709fa8d997aaea41d71\" returns successfully" Jul 9 09:31:54.355027 kubelet[2827]: I0709 09:31:54.354834 2827 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-d74jx" podStartSLOduration=32.42576723 podStartE2EDuration="1m29.354760916s" podCreationTimestamp="2025-07-09 09:30:25 +0000 UTC" firstStartedPulling="2025-07-09 09:30:57.027813975 +0000 UTC m=+52.699257014" lastFinishedPulling="2025-07-09 09:31:53.956807661 +0000 UTC m=+109.628250700" observedRunningTime="2025-07-09 09:31:54.35209645 +0000 UTC m=+110.023539499" watchObservedRunningTime="2025-07-09 09:31:54.354760916 +0000 UTC m=+110.026203965" Jul 9 09:31:54.746065 kubelet[2827]: I0709 09:31:54.746004 2827 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 9 09:31:54.746065 kubelet[2827]: I0709 09:31:54.746086 2827 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 9 09:31:57.110658 containerd[1547]: time="2025-07-09T09:31:57.109585160Z" level=info msg="TaskExit event in podsandbox handler container_id:\"15bdacf558c90eac5cf09345a52752a283ef62fc3e71b0f8acaee3cb3b5dcc2c\" id:\"49fe8bda13e0558fb40a7523741675efde5627e2dfa0c92ec7be046a81af0ade\" pid:5456 exited_at:{seconds:1752053517 nanos:109338317}" Jul 9 09:32:04.018555 systemd[1]: Started sshd@9-172.24.4.7:22-172.24.4.1:50422.service - OpenSSH per-connection server daemon (172.24.4.1:50422). 
Jul 9 09:32:05.309775 sshd[5469]: Accepted publickey for core from 172.24.4.1 port 50422 ssh2: RSA SHA256:7Z60MgsH9FU4JbF/SQ9a6BolKSBdaEiHKkJAV9eqiyI Jul 9 09:32:05.314801 sshd-session[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:32:05.329017 systemd-logind[1521]: New session 12 of user core. Jul 9 09:32:05.332888 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 9 09:32:05.757785 containerd[1547]: time="2025-07-09T09:32:05.757717016Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa81f3cdc629d42ecb48b8f6757e60a46a90fe57dee6b4498a3cef993fb4d39a\" id:\"91e74062d3e541772dac172a6fcad4dc61a0164af6a064feef020249c67358af\" pid:5489 exited_at:{seconds:1752053525 nanos:756589632}" Jul 9 09:32:06.376060 sshd[5474]: Connection closed by 172.24.4.1 port 50422 Jul 9 09:32:06.377155 sshd-session[5469]: pam_unix(sshd:session): session closed for user core Jul 9 09:32:06.383235 systemd-logind[1521]: Session 12 logged out. Waiting for processes to exit. Jul 9 09:32:06.385133 systemd[1]: sshd@9-172.24.4.7:22-172.24.4.1:50422.service: Deactivated successfully. Jul 9 09:32:06.389522 systemd[1]: session-12.scope: Deactivated successfully. Jul 9 09:32:06.395218 systemd-logind[1521]: Removed session 12. Jul 9 09:32:10.459745 containerd[1547]: time="2025-07-09T09:32:10.459320980Z" level=info msg="TaskExit event in podsandbox handler container_id:\"15bdacf558c90eac5cf09345a52752a283ef62fc3e71b0f8acaee3cb3b5dcc2c\" id:\"eae42f3fc3a34965e31f880c611a35ca79d35609ba9775ba5a39c68b61254d18\" pid:5540 exited_at:{seconds:1752053530 nanos:458341985}" Jul 9 09:32:10.700669 containerd[1547]: time="2025-07-09T09:32:10.700441347Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa81f3cdc629d42ecb48b8f6757e60a46a90fe57dee6b4498a3cef993fb4d39a\" id:\"e9b0cf501f4dae5b78620053700503f4af7ae458f5d4394e8486855612cad059\" pid:5542 exited_at:{seconds:1752053530 nanos:699730994}" Jul 9 09:32:11.401103 systemd[1]: Started sshd@10-172.24.4.7:22-172.24.4.1:50432.service - OpenSSH per-connection server daemon (172.24.4.1:50432). Jul 9 09:32:12.533095 sshd[5563]: Accepted publickey for core from 172.24.4.1 port 50432 ssh2: RSA SHA256:7Z60MgsH9FU4JbF/SQ9a6BolKSBdaEiHKkJAV9eqiyI Jul 9 09:32:12.537658 sshd-session[5563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:32:12.552745 systemd-logind[1521]: New session 13 of user core. Jul 9 09:32:12.558968 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 9 09:32:13.168379 sshd[5566]: Connection closed by 172.24.4.1 port 50432 Jul 9 09:32:13.169841 sshd-session[5563]: pam_unix(sshd:session): session closed for user core Jul 9 09:32:13.176431 systemd[1]: sshd@10-172.24.4.7:22-172.24.4.1:50432.service: Deactivated successfully. Jul 9 09:32:13.181125 systemd[1]: session-13.scope: Deactivated successfully. Jul 9 09:32:13.182506 systemd-logind[1521]: Session 13 logged out. Waiting for processes to exit. Jul 9 09:32:13.185915 systemd-logind[1521]: Removed session 13. Jul 9 09:32:18.189264 systemd[1]: Started sshd@11-172.24.4.7:22-172.24.4.1:54062.service - OpenSSH per-connection server daemon (172.24.4.1:54062). 
Jul 9 09:32:19.442607 sshd[5585]: Accepted publickey for core from 172.24.4.1 port 54062 ssh2: RSA SHA256:7Z60MgsH9FU4JbF/SQ9a6BolKSBdaEiHKkJAV9eqiyI Jul 9 09:32:19.446804 sshd-session[5585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:32:19.465269 systemd-logind[1521]: New session 14 of user core. Jul 9 09:32:19.474952 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 9 09:32:20.278165 sshd[5588]: Connection closed by 172.24.4.1 port 54062 Jul 9 09:32:20.279832 sshd-session[5585]: pam_unix(sshd:session): session closed for user core Jul 9 09:32:20.290592 systemd[1]: sshd@11-172.24.4.7:22-172.24.4.1:54062.service: Deactivated successfully. Jul 9 09:32:20.293550 systemd[1]: session-14.scope: Deactivated successfully. Jul 9 09:32:20.295178 systemd-logind[1521]: Session 14 logged out. Waiting for processes to exit. Jul 9 09:32:20.303756 systemd[1]: Started sshd@12-172.24.4.7:22-172.24.4.1:54076.service - OpenSSH per-connection server daemon (172.24.4.1:54076). Jul 9 09:32:20.308007 systemd-logind[1521]: Removed session 14. Jul 9 09:32:21.687513 sshd[5601]: Accepted publickey for core from 172.24.4.1 port 54076 ssh2: RSA SHA256:7Z60MgsH9FU4JbF/SQ9a6BolKSBdaEiHKkJAV9eqiyI Jul 9 09:32:21.691092 sshd-session[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:32:21.699982 systemd-logind[1521]: New session 15 of user core. Jul 9 09:32:21.716234 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 9 09:32:22.452202 containerd[1547]: time="2025-07-09T09:32:22.452061308Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf575ddfb5b9c9904d4d1aa0816793b98c5f06055dcb078af420b7a7af1e6ba0\" id:\"6872a5ef8f9f5efd520c2128b100eb5088e4fdfa4896b46370e69c3191c0cc38\" pid:5624 exited_at:{seconds:1752053542 nanos:445669176}" Jul 9 09:32:22.522693 sshd[5604]: Connection closed by 172.24.4.1 port 54076 Jul 9 09:32:22.524031 sshd-session[5601]: pam_unix(sshd:session): session closed for user core Jul 9 09:32:22.533043 systemd[1]: sshd@12-172.24.4.7:22-172.24.4.1:54076.service: Deactivated successfully. Jul 9 09:32:22.536311 systemd[1]: session-15.scope: Deactivated successfully. Jul 9 09:32:22.538886 systemd-logind[1521]: Session 15 logged out. Waiting for processes to exit. Jul 9 09:32:22.543363 systemd[1]: Started sshd@13-172.24.4.7:22-172.24.4.1:54092.service - OpenSSH per-connection server daemon (172.24.4.1:54092). Jul 9 09:32:22.546759 systemd-logind[1521]: Removed session 15. Jul 9 09:32:24.065440 sshd[5640]: Accepted publickey for core from 172.24.4.1 port 54092 ssh2: RSA SHA256:7Z60MgsH9FU4JbF/SQ9a6BolKSBdaEiHKkJAV9eqiyI Jul 9 09:32:24.070172 sshd-session[5640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:32:24.084045 systemd-logind[1521]: New session 16 of user core. Jul 9 09:32:24.096031 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 9 09:32:24.845095 sshd[5643]: Connection closed by 172.24.4.1 port 54092 Jul 9 09:32:24.846152 sshd-session[5640]: pam_unix(sshd:session): session closed for user core Jul 9 09:32:24.850604 systemd[1]: sshd@13-172.24.4.7:22-172.24.4.1:54092.service: Deactivated successfully. Jul 9 09:32:24.856148 systemd[1]: session-16.scope: Deactivated successfully. Jul 9 09:32:24.859027 systemd-logind[1521]: Session 16 logged out. Waiting for processes to exit. Jul 9 09:32:24.862201 systemd-logind[1521]: Removed session 16. 
Jul 9 09:32:29.871866 systemd[1]: Started sshd@14-172.24.4.7:22-172.24.4.1:53836.service - OpenSSH per-connection server daemon (172.24.4.1:53836). Jul 9 09:32:31.158875 sshd[5664]: Accepted publickey for core from 172.24.4.1 port 53836 ssh2: RSA SHA256:7Z60MgsH9FU4JbF/SQ9a6BolKSBdaEiHKkJAV9eqiyI Jul 9 09:32:31.161854 sshd-session[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:32:31.175431 systemd-logind[1521]: New session 17 of user core. Jul 9 09:32:31.187037 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 9 09:32:31.894005 sshd[5667]: Connection closed by 172.24.4.1 port 53836 Jul 9 09:32:31.895829 sshd-session[5664]: pam_unix(sshd:session): session closed for user core Jul 9 09:32:31.907807 systemd[1]: sshd@14-172.24.4.7:22-172.24.4.1:53836.service: Deactivated successfully. Jul 9 09:32:31.922782 systemd[1]: session-17.scope: Deactivated successfully. Jul 9 09:32:31.928311 systemd-logind[1521]: Session 17 logged out. Waiting for processes to exit. Jul 9 09:32:31.933333 systemd-logind[1521]: Removed session 17. Jul 9 09:32:36.928234 systemd[1]: Started sshd@15-172.24.4.7:22-172.24.4.1:52168.service - OpenSSH per-connection server daemon (172.24.4.1:52168). Jul 9 09:32:38.244942 sshd[5695]: Accepted publickey for core from 172.24.4.1 port 52168 ssh2: RSA SHA256:7Z60MgsH9FU4JbF/SQ9a6BolKSBdaEiHKkJAV9eqiyI Jul 9 09:32:38.248531 sshd-session[5695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:32:38.265599 systemd-logind[1521]: New session 18 of user core. Jul 9 09:32:38.277000 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 9 09:32:39.165855 sshd[5699]: Connection closed by 172.24.4.1 port 52168 Jul 9 09:32:39.167273 sshd-session[5695]: pam_unix(sshd:session): session closed for user core Jul 9 09:32:39.174824 systemd[1]: sshd@15-172.24.4.7:22-172.24.4.1:52168.service: Deactivated successfully. Jul 9 09:32:39.180084 systemd[1]: session-18.scope: Deactivated successfully. Jul 9 09:32:39.181793 systemd-logind[1521]: Session 18 logged out. Waiting for processes to exit. Jul 9 09:32:39.185616 systemd-logind[1521]: Removed session 18. Jul 9 09:32:40.514354 containerd[1547]: time="2025-07-09T09:32:40.514228163Z" level=info msg="TaskExit event in podsandbox handler container_id:\"15bdacf558c90eac5cf09345a52752a283ef62fc3e71b0f8acaee3cb3b5dcc2c\" id:\"a0c9dc7fcefa344bf0bccdc759ba5cc56ecc356324167b4e8bcfc86dadebc1ed\" pid:5743 exited_at:{seconds:1752053560 nanos:513773838}" Jul 9 09:32:40.565898 containerd[1547]: time="2025-07-09T09:32:40.565836773Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa81f3cdc629d42ecb48b8f6757e60a46a90fe57dee6b4498a3cef993fb4d39a\" id:\"4faef5b14aba92d0757a8a476675cd359e2b250d3bea36f2236faebf9f58eef2\" pid:5735 exited_at:{seconds:1752053560 nanos:565349275}" Jul 9 09:32:44.181123 systemd[1]: Started sshd@16-172.24.4.7:22-172.24.4.1:34914.service - OpenSSH per-connection server daemon (172.24.4.1:34914). Jul 9 09:32:45.389809 sshd[5758]: Accepted publickey for core from 172.24.4.1 port 34914 ssh2: RSA SHA256:7Z60MgsH9FU4JbF/SQ9a6BolKSBdaEiHKkJAV9eqiyI Jul 9 09:32:45.392567 sshd-session[5758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:32:45.400226 systemd-logind[1521]: New session 19 of user core. Jul 9 09:32:45.405793 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jul 9 09:32:46.343653 sshd[5761]: Connection closed by 172.24.4.1 port 34914 Jul 9 09:32:46.345796 sshd-session[5758]: pam_unix(sshd:session): session closed for user core Jul 9 09:32:46.358180 systemd[1]: sshd@16-172.24.4.7:22-172.24.4.1:34914.service: Deactivated successfully. Jul 9 09:32:46.362619 systemd[1]: session-19.scope: Deactivated successfully. Jul 9 09:32:46.371861 systemd-logind[1521]: Session 19 logged out. Waiting for processes to exit. Jul 9 09:32:46.377039 systemd[1]: Started sshd@17-172.24.4.7:22-172.24.4.1:34922.service - OpenSSH per-connection server daemon (172.24.4.1:34922). Jul 9 09:32:46.381559 systemd-logind[1521]: Removed session 19. Jul 9 09:32:47.612141 sshd[5774]: Accepted publickey for core from 172.24.4.1 port 34922 ssh2: RSA SHA256:7Z60MgsH9FU4JbF/SQ9a6BolKSBdaEiHKkJAV9eqiyI Jul 9 09:32:47.615445 sshd-session[5774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:32:47.624879 systemd-logind[1521]: New session 20 of user core. Jul 9 09:32:47.630885 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 9 09:32:49.162754 sshd[5777]: Connection closed by 172.24.4.1 port 34922 Jul 9 09:32:49.164008 sshd-session[5774]: pam_unix(sshd:session): session closed for user core Jul 9 09:32:49.175234 systemd[1]: sshd@17-172.24.4.7:22-172.24.4.1:34922.service: Deactivated successfully. Jul 9 09:32:49.177547 systemd[1]: session-20.scope: Deactivated successfully. Jul 9 09:32:49.179957 systemd-logind[1521]: Session 20 logged out. Waiting for processes to exit. Jul 9 09:32:49.184923 systemd[1]: Started sshd@18-172.24.4.7:22-172.24.4.1:34926.service - OpenSSH per-connection server daemon (172.24.4.1:34926). Jul 9 09:32:49.188909 systemd-logind[1521]: Removed session 20. Jul 9 09:32:50.486261 sshd[5787]: Accepted publickey for core from 172.24.4.1 port 34926 ssh2: RSA SHA256:7Z60MgsH9FU4JbF/SQ9a6BolKSBdaEiHKkJAV9eqiyI Jul 9 09:32:50.489753 sshd-session[5787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:32:50.504861 systemd-logind[1521]: New session 21 of user core. Jul 9 09:32:50.511992 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 9 09:32:52.687311 containerd[1547]: time="2025-07-09T09:32:52.686785421Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf575ddfb5b9c9904d4d1aa0816793b98c5f06055dcb078af420b7a7af1e6ba0\" id:\"afe3deacad70742b3bfa69cf28ad25d410f9f82ba4f197846235b9c998a3c7b7\" pid:5812 exited_at:{seconds:1752053572 nanos:681821048}" Jul 9 09:32:54.465806 sshd[5790]: Connection closed by 172.24.4.1 port 34926 Jul 9 09:32:54.468490 sshd-session[5787]: pam_unix(sshd:session): session closed for user core Jul 9 09:32:54.488016 systemd[1]: sshd@18-172.24.4.7:22-172.24.4.1:34926.service: Deactivated successfully. Jul 9 09:32:54.496975 systemd[1]: session-21.scope: Deactivated successfully. Jul 9 09:32:54.499829 systemd[1]: session-21.scope: Consumed 934ms CPU time, 81.2M memory peak. Jul 9 09:32:54.503980 systemd-logind[1521]: Session 21 logged out. Waiting for processes to exit. Jul 9 09:32:54.516068 systemd[1]: Started sshd@19-172.24.4.7:22-172.24.4.1:56796.service - OpenSSH per-connection server daemon (172.24.4.1:56796). Jul 9 09:32:54.522098 systemd-logind[1521]: Removed session 21. 
Jul 9 09:32:55.638217 sshd[5835]: Accepted publickey for core from 172.24.4.1 port 56796 ssh2: RSA SHA256:7Z60MgsH9FU4JbF/SQ9a6BolKSBdaEiHKkJAV9eqiyI Jul 9 09:32:55.640907 sshd-session[5835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:32:55.655955 systemd-logind[1521]: New session 22 of user core. Jul 9 09:32:55.661881 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 9 09:32:56.836487 sshd[5838]: Connection closed by 172.24.4.1 port 56796 Jul 9 09:32:56.836081 sshd-session[5835]: pam_unix(sshd:session): session closed for user core Jul 9 09:32:56.864003 systemd[1]: sshd@19-172.24.4.7:22-172.24.4.1:56796.service: Deactivated successfully. Jul 9 09:32:56.872166 systemd[1]: session-22.scope: Deactivated successfully. Jul 9 09:32:56.875705 systemd-logind[1521]: Session 22 logged out. Waiting for processes to exit. Jul 9 09:32:56.880980 systemd[1]: Started sshd@20-172.24.4.7:22-172.24.4.1:56812.service - OpenSSH per-connection server daemon (172.24.4.1:56812). Jul 9 09:32:56.887713 systemd-logind[1521]: Removed session 22. Jul 9 09:32:57.125414 containerd[1547]: time="2025-07-09T09:32:57.125200038Z" level=info msg="TaskExit event in podsandbox handler container_id:\"15bdacf558c90eac5cf09345a52752a283ef62fc3e71b0f8acaee3cb3b5dcc2c\" id:\"65af150c0e40c6ba436e7657d0ca982fe3ae2e0bb870cfc38587c9ce6242a4dc\" pid:5864 exited_at:{seconds:1752053577 nanos:124037671}" Jul 9 09:32:58.200765 sshd[5850]: Accepted publickey for core from 172.24.4.1 port 56812 ssh2: RSA SHA256:7Z60MgsH9FU4JbF/SQ9a6BolKSBdaEiHKkJAV9eqiyI Jul 9 09:32:58.205454 sshd-session[5850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:32:58.220472 systemd-logind[1521]: New session 23 of user core. Jul 9 09:32:58.227807 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 9 09:32:59.027048 sshd[5875]: Connection closed by 172.24.4.1 port 56812 Jul 9 09:32:59.027945 sshd-session[5850]: pam_unix(sshd:session): session closed for user core Jul 9 09:32:59.033961 systemd[1]: sshd@20-172.24.4.7:22-172.24.4.1:56812.service: Deactivated successfully. Jul 9 09:32:59.040174 systemd[1]: session-23.scope: Deactivated successfully. Jul 9 09:32:59.043764 systemd-logind[1521]: Session 23 logged out. Waiting for processes to exit. Jul 9 09:32:59.046585 systemd-logind[1521]: Removed session 23. Jul 9 09:33:04.043400 systemd[1]: Started sshd@21-172.24.4.7:22-172.24.4.1:58992.service - OpenSSH per-connection server daemon (172.24.4.1:58992). Jul 9 09:33:05.256099 sshd[5890]: Accepted publickey for core from 172.24.4.1 port 58992 ssh2: RSA SHA256:7Z60MgsH9FU4JbF/SQ9a6BolKSBdaEiHKkJAV9eqiyI Jul 9 09:33:05.259598 sshd-session[5890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:33:05.274086 systemd-logind[1521]: New session 24 of user core. Jul 9 09:33:05.287047 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jul 9 09:33:05.694461 containerd[1547]: time="2025-07-09T09:33:05.694315668Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa81f3cdc629d42ecb48b8f6757e60a46a90fe57dee6b4498a3cef993fb4d39a\" id:\"b9e138a80df6ceb649952fa6fe6774022e31e1cccca307ed582ec6e36ee93925\" pid:5909 exited_at:{seconds:1752053585 nanos:693677488}" Jul 9 09:33:06.034123 sshd[5895]: Connection closed by 172.24.4.1 port 58992 Jul 9 09:33:06.034867 sshd-session[5890]: pam_unix(sshd:session): session closed for user core Jul 9 09:33:06.042355 systemd[1]: sshd@21-172.24.4.7:22-172.24.4.1:58992.service: Deactivated successfully. Jul 9 09:33:06.045380 systemd[1]: session-24.scope: Deactivated successfully. Jul 9 09:33:06.047417 systemd-logind[1521]: Session 24 logged out. Waiting for processes to exit. Jul 9 09:33:06.051311 systemd-logind[1521]: Removed session 24. Jul 9 09:33:10.457494 containerd[1547]: time="2025-07-09T09:33:10.457439327Z" level=info msg="TaskExit event in podsandbox handler container_id:\"15bdacf558c90eac5cf09345a52752a283ef62fc3e71b0f8acaee3cb3b5dcc2c\" id:\"bb338dd4d301919059f760351a5d8e689e4452ed3ec70bcfc9eeb574870aaac9\" pid:5962 exited_at:{seconds:1752053590 nanos:457098666}" Jul 9 09:33:10.496880 containerd[1547]: time="2025-07-09T09:33:10.496825108Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa81f3cdc629d42ecb48b8f6757e60a46a90fe57dee6b4498a3cef993fb4d39a\" id:\"c5b3fe9a6adecc6115b61aa662c0912fbb2d84501e0d20e1088779babea806f3\" pid:5952 exited_at:{seconds:1752053590 nanos:496446836}" Jul 9 09:33:11.059817 systemd[1]: Started sshd@22-172.24.4.7:22-172.24.4.1:59008.service - OpenSSH per-connection server daemon (172.24.4.1:59008). Jul 9 09:33:12.232935 sshd[5976]: Accepted publickey for core from 172.24.4.1 port 59008 ssh2: RSA SHA256:7Z60MgsH9FU4JbF/SQ9a6BolKSBdaEiHKkJAV9eqiyI Jul 9 09:33:12.237189 sshd-session[5976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:33:12.247668 systemd-logind[1521]: New session 25 of user core. Jul 9 09:33:12.254792 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 9 09:33:13.058996 sshd[5979]: Connection closed by 172.24.4.1 port 59008 Jul 9 09:33:13.060908 sshd-session[5976]: pam_unix(sshd:session): session closed for user core Jul 9 09:33:13.069282 systemd[1]: sshd@22-172.24.4.7:22-172.24.4.1:59008.service: Deactivated successfully. Jul 9 09:33:13.075741 systemd[1]: session-25.scope: Deactivated successfully. Jul 9 09:33:13.078204 systemd-logind[1521]: Session 25 logged out. Waiting for processes to exit. Jul 9 09:33:13.081466 systemd-logind[1521]: Removed session 25. Jul 9 09:33:17.892335 update_engine[1525]: I20250709 09:33:17.891552 1525 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 9 09:33:17.892335 update_engine[1525]: I20250709 09:33:17.891739 1525 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 9 09:33:17.894041 update_engine[1525]: I20250709 09:33:17.892243 1525 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 9 09:33:17.895647 update_engine[1525]: I20250709 09:33:17.895428 1525 omaha_request_params.cc:62] Current group set to developer Jul 9 09:33:17.897174 update_engine[1525]: I20250709 09:33:17.896684 1525 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 9 09:33:17.897265 update_engine[1525]: I20250709 09:33:17.897243 1525 update_attempter.cc:643] Scheduling an action processor start. 
Jul 9 09:33:17.897376 update_engine[1525]: I20250709 09:33:17.897352 1525 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 9 09:33:17.912514 update_engine[1525]: I20250709 09:33:17.911128 1525 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 9 09:33:17.912514 update_engine[1525]: I20250709 09:33:17.911294 1525 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 9 09:33:17.912514 update_engine[1525]: I20250709 09:33:17.911306 1525 omaha_request_action.cc:272] Request: Jul 9 09:33:17.912514 update_engine[1525]: Jul 9 09:33:17.912514 update_engine[1525]: Jul 9 09:33:17.912514 update_engine[1525]: Jul 9 09:33:17.912514 update_engine[1525]: Jul 9 09:33:17.912514 update_engine[1525]: Jul 9 09:33:17.912514 update_engine[1525]: Jul 9 09:33:17.912514 update_engine[1525]: Jul 9 09:33:17.912514 update_engine[1525]: Jul 9 09:33:17.912514 update_engine[1525]: I20250709 09:33:17.911327 1525 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 9 09:33:17.915104 locksmithd[1563]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 9 09:33:17.925065 update_engine[1525]: I20250709 09:33:17.925015 1525 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 9 09:33:17.925537 update_engine[1525]: I20250709 09:33:17.925495 1525 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 9 09:33:17.933852 update_engine[1525]: E20250709 09:33:17.933779 1525 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 9 09:33:17.933997 update_engine[1525]: I20250709 09:33:17.933912 1525 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 9 09:33:18.078565 systemd[1]: Started sshd@23-172.24.4.7:22-172.24.4.1:52402.service - OpenSSH per-connection server daemon (172.24.4.1:52402). Jul 9 09:33:19.268049 sshd[5991]: Accepted publickey for core from 172.24.4.1 port 52402 ssh2: RSA SHA256:7Z60MgsH9FU4JbF/SQ9a6BolKSBdaEiHKkJAV9eqiyI Jul 9 09:33:19.270588 sshd-session[5991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:33:19.280597 systemd-logind[1521]: New session 26 of user core. Jul 9 09:33:19.286930 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 9 09:33:20.004783 sshd[5994]: Connection closed by 172.24.4.1 port 52402 Jul 9 09:33:20.006728 sshd-session[5991]: pam_unix(sshd:session): session closed for user core Jul 9 09:33:20.011776 systemd[1]: sshd@23-172.24.4.7:22-172.24.4.1:52402.service: Deactivated successfully. Jul 9 09:33:20.016433 systemd[1]: session-26.scope: Deactivated successfully. Jul 9 09:33:20.019126 systemd-logind[1521]: Session 26 logged out. Waiting for processes to exit. Jul 9 09:33:20.021962 systemd-logind[1521]: Removed session 26. Jul 9 09:33:22.418920 containerd[1547]: time="2025-07-09T09:33:22.418858877Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf575ddfb5b9c9904d4d1aa0816793b98c5f06055dcb078af420b7a7af1e6ba0\" id:\"2d8808311bc10fd2ad5a19163cba3acbb0624ed3063e17cc1eecac484ac2ce3e\" pid:6017 exited_at:{seconds:1752053602 nanos:418071857}" Jul 9 09:33:25.035693 systemd[1]: Started sshd@24-172.24.4.7:22-172.24.4.1:49996.service - OpenSSH per-connection server daemon (172.24.4.1:49996). 
Jul 9 09:33:26.234825 sshd[6028]: Accepted publickey for core from 172.24.4.1 port 49996 ssh2: RSA SHA256:7Z60MgsH9FU4JbF/SQ9a6BolKSBdaEiHKkJAV9eqiyI Jul 9 09:33:26.237180 sshd-session[6028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:33:26.271879 systemd-logind[1521]: New session 27 of user core. Jul 9 09:33:26.279883 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 9 09:33:27.055017 sshd[6031]: Connection closed by 172.24.4.1 port 49996 Jul 9 09:33:27.055847 sshd-session[6028]: pam_unix(sshd:session): session closed for user core Jul 9 09:33:27.060992 systemd[1]: sshd@24-172.24.4.7:22-172.24.4.1:49996.service: Deactivated successfully. Jul 9 09:33:27.063714 systemd[1]: session-27.scope: Deactivated successfully. Jul 9 09:33:27.067188 systemd-logind[1521]: Session 27 logged out. Waiting for processes to exit. Jul 9 09:33:27.069887 systemd-logind[1521]: Removed session 27. Jul 9 09:33:27.888833 update_engine[1525]: I20250709 09:33:27.887774 1525 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 9 09:33:27.890059 update_engine[1525]: I20250709 09:33:27.888945 1525 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 9 09:33:27.890394 update_engine[1525]: I20250709 09:33:27.890261 1525 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 9 09:33:27.896067 update_engine[1525]: E20250709 09:33:27.895905 1525 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 9 09:33:27.896275 update_engine[1525]: I20250709 09:33:27.896199 1525 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 9 09:33:32.073025 systemd[1]: Started sshd@25-172.24.4.7:22-172.24.4.1:50000.service - OpenSSH per-connection server daemon (172.24.4.1:50000). Jul 9 09:33:33.545070 sshd[6042]: Accepted publickey for core from 172.24.4.1 port 50000 ssh2: RSA SHA256:7Z60MgsH9FU4JbF/SQ9a6BolKSBdaEiHKkJAV9eqiyI Jul 9 09:33:33.548894 sshd-session[6042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:33:33.562801 systemd-logind[1521]: New session 28 of user core. Jul 9 09:33:33.577054 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 9 09:33:34.348691 sshd[6045]: Connection closed by 172.24.4.1 port 50000 Jul 9 09:33:34.348051 sshd-session[6042]: pam_unix(sshd:session): session closed for user core Jul 9 09:33:34.358140 systemd[1]: sshd@25-172.24.4.7:22-172.24.4.1:50000.service: Deactivated successfully. Jul 9 09:33:34.364771 systemd[1]: session-28.scope: Deactivated successfully. Jul 9 09:33:34.367389 systemd-logind[1521]: Session 28 logged out. Waiting for processes to exit. Jul 9 09:33:34.372363 systemd-logind[1521]: Removed session 28. Jul 9 09:33:37.891673 update_engine[1525]: I20250709 09:33:37.890956 1525 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 9 09:33:37.891673 update_engine[1525]: I20250709 09:33:37.891232 1525 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 9 09:33:37.891673 update_engine[1525]: I20250709 09:33:37.891531 1525 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 9 09:33:37.897142 update_engine[1525]: E20250709 09:33:37.897039 1525 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 9 09:33:37.897478 update_engine[1525]: I20250709 09:33:37.897418 1525 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 9 09:33:40.474019 containerd[1547]: time="2025-07-09T09:33:40.473937900Z" level=info msg="TaskExit event in podsandbox handler container_id:\"15bdacf558c90eac5cf09345a52752a283ef62fc3e71b0f8acaee3cb3b5dcc2c\" id:\"8a22f6464f7b79b47ce3450072e5ff36b012b6eae417626e36ed89c1aadeb17d\" pid:6097 exited_at:{seconds:1752053620 nanos:472746261}" Jul 9 09:33:40.504741 containerd[1547]: time="2025-07-09T09:33:40.504654387Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa81f3cdc629d42ecb48b8f6757e60a46a90fe57dee6b4498a3cef993fb4d39a\" id:\"44f747951efe6409395561c84ef730aa8bca1ac801fe259399177e3285c4033d\" pid:6085 exited_at:{seconds:1752053620 nanos:503985851}"