Jul 7 00:51:21.981955 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025
Jul 7 00:51:21.981982 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 7 00:51:21.981993 kernel: BIOS-provided physical RAM map:
Jul 7 00:51:21.982002 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 7 00:51:21.982010 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 7 00:51:21.982021 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 7 00:51:21.982031 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Jul 7 00:51:21.982040 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Jul 7 00:51:21.982049 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 7 00:51:21.982057 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 7 00:51:21.982066 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Jul 7 00:51:21.982074 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 7 00:51:21.982083 kernel: NX (Execute Disable) protection: active
Jul 7 00:51:21.982094 kernel: APIC: Static calls initialized
Jul 7 00:51:21.982104 kernel: SMBIOS 3.0.0 present.
Jul 7 00:51:21.982113 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Jul 7 00:51:21.982122 kernel: Hypervisor detected: KVM
Jul 7 00:51:21.982131 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 7 00:51:21.982140 kernel: kvm-clock: using sched offset of 3395942054 cycles
Jul 7 00:51:21.982152 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 7 00:51:21.982161 kernel: tsc: Detected 1996.249 MHz processor
Jul 7 00:51:21.982171 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 7 00:51:21.982181 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 7 00:51:21.982190 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Jul 7 00:51:21.982200 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 7 00:51:21.982209 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 7 00:51:21.982219 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Jul 7 00:51:21.982228 kernel: ACPI: Early table checksum verification disabled
Jul 7 00:51:21.982239 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Jul 7 00:51:21.982248 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:51:21.982258 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:51:21.982267 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:51:21.982276 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Jul 7 00:51:21.982286 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:51:21.982295 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:51:21.982304 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Jul 7 00:51:21.982316 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Jul 7 00:51:21.982325 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Jul 7 00:51:21.982334 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Jul 7 00:51:21.982344 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Jul 7 00:51:21.982357 kernel: No NUMA configuration found
Jul 7 00:51:21.982366 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Jul 7 00:51:21.982376 kernel: NODE_DATA(0) allocated [mem 0x13fffa000-0x13fffffff]
Jul 7 00:51:21.982388 kernel: Zone ranges:
Jul 7 00:51:21.982398 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 7 00:51:21.982408 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 7 00:51:21.982417 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Jul 7 00:51:21.982427 kernel: Movable zone start for each node
Jul 7 00:51:21.982437 kernel: Early memory node ranges
Jul 7 00:51:21.982446 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 7 00:51:21.982456 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Jul 7 00:51:21.982468 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Jul 7 00:51:21.982478 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Jul 7 00:51:21.982487 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 7 00:51:21.982497 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 7 00:51:21.982507 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jul 7 00:51:21.982517 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 7 00:51:21.982527 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 7 00:51:21.982536 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 7 00:51:21.982546 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 7 00:51:21.982558 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 7 00:51:21.982568 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 7 00:51:21.982578 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 7 00:51:21.982588 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 7 00:51:21.982597 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 7 00:51:21.982607 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 7 00:51:21.982617 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 7 00:51:21.982627 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jul 7 00:51:21.982636 kernel: Booting paravirtualized kernel on KVM
Jul 7 00:51:21.982649 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 7 00:51:21.982659 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 7 00:51:21.982669 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Jul 7 00:51:21.982679 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Jul 7 00:51:21.982688 kernel: pcpu-alloc: [0] 0 1
Jul 7 00:51:21.982698 kernel: kvm-guest: PV spinlocks disabled, no host support
Jul 7 00:51:21.982709 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 7 00:51:21.982720 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 00:51:21.982732 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 7 00:51:21.982742 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 00:51:21.982752 kernel: Fallback order for Node 0: 0
Jul 7 00:51:21.982761 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Jul 7 00:51:21.982771 kernel: Policy zone: Normal
Jul 7 00:51:21.982781 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 00:51:21.982811 kernel: software IO TLB: area num 2.
Jul 7 00:51:21.982822 kernel: Memory: 3966216K/4193772K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 227296K reserved, 0K cma-reserved)
Jul 7 00:51:21.982832 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 7 00:51:21.982845 kernel: ftrace: allocating 37966 entries in 149 pages
Jul 7 00:51:21.982855 kernel: ftrace: allocated 149 pages with 4 groups
Jul 7 00:51:21.982865 kernel: Dynamic Preempt: voluntary
Jul 7 00:51:21.982875 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 00:51:21.982886 kernel: rcu: RCU event tracing is enabled.
Jul 7 00:51:21.982896 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 7 00:51:21.982906 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 00:51:21.982916 kernel: Rude variant of Tasks RCU enabled.
Jul 7 00:51:21.982926 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 00:51:21.982938 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 00:51:21.982948 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 7 00:51:21.982957 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 7 00:51:21.982967 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 00:51:21.982977 kernel: Console: colour VGA+ 80x25
Jul 7 00:51:21.982987 kernel: printk: console [tty0] enabled
Jul 7 00:51:21.982997 kernel: printk: console [ttyS0] enabled
Jul 7 00:51:21.983007 kernel: ACPI: Core revision 20230628
Jul 7 00:51:21.983017 kernel: APIC: Switch to symmetric I/O mode setup
Jul 7 00:51:21.983029 kernel: x2apic enabled
Jul 7 00:51:21.983039 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 7 00:51:21.983048 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 7 00:51:21.983058 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 7 00:51:21.983068 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jul 7 00:51:21.983078 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 7 00:51:21.983088 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 7 00:51:21.983098 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 7 00:51:21.983108 kernel: Spectre V2 : Mitigation: Retpolines
Jul 7 00:51:21.983120 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 7 00:51:21.983130 kernel: Speculative Store Bypass: Vulnerable
Jul 7 00:51:21.983140 kernel: x86/fpu: x87 FPU will use FXSAVE
Jul 7 00:51:21.983149 kernel: Freeing SMP alternatives memory: 32K
Jul 7 00:51:21.983159 kernel: pid_max: default: 32768 minimum: 301
Jul 7 00:51:21.983176 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 7 00:51:21.983188 kernel: landlock: Up and running.
Jul 7 00:51:21.983198 kernel: SELinux: Initializing.
Jul 7 00:51:21.983209 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 00:51:21.983219 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 00:51:21.983230 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jul 7 00:51:21.983240 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 00:51:21.983253 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 00:51:21.983263 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 00:51:21.983274 kernel: Performance Events: AMD PMU driver.
Jul 7 00:51:21.983284 kernel: ... version: 0
Jul 7 00:51:21.983294 kernel: ... bit width: 48
Jul 7 00:51:21.983307 kernel: ... generic registers: 4
Jul 7 00:51:21.983317 kernel: ... value mask: 0000ffffffffffff
Jul 7 00:51:21.983327 kernel: ... max period: 00007fffffffffff
Jul 7 00:51:21.983338 kernel: ... fixed-purpose events: 0
Jul 7 00:51:21.983348 kernel: ... event mask: 000000000000000f
Jul 7 00:51:21.983358 kernel: signal: max sigframe size: 1440
Jul 7 00:51:21.983369 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 00:51:21.983379 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 00:51:21.983389 kernel: smp: Bringing up secondary CPUs ...
Jul 7 00:51:21.983402 kernel: smpboot: x86: Booting SMP configuration:
Jul 7 00:51:21.983412 kernel: .... node #0, CPUs: #1
Jul 7 00:51:21.983422 kernel: smp: Brought up 1 node, 2 CPUs
Jul 7 00:51:21.983432 kernel: smpboot: Max logical packages: 2
Jul 7 00:51:21.983443 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jul 7 00:51:21.983453 kernel: devtmpfs: initialized
Jul 7 00:51:21.983463 kernel: x86/mm: Memory block size: 128MB
Jul 7 00:51:21.983474 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 00:51:21.983484 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 7 00:51:21.983497 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 00:51:21.983507 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 00:51:21.983517 kernel: audit: initializing netlink subsys (disabled)
Jul 7 00:51:21.983528 kernel: audit: type=2000 audit(1751849481.405:1): state=initialized audit_enabled=0 res=1
Jul 7 00:51:21.983538 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 00:51:21.983549 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 7 00:51:21.983559 kernel: cpuidle: using governor menu
Jul 7 00:51:21.983569 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 00:51:21.983579 kernel: dca service started, version 1.12.1
Jul 7 00:51:21.983592 kernel: PCI: Using configuration type 1 for base access
Jul 7 00:51:21.983602 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 7 00:51:21.983613 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 00:51:21.983623 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 00:51:21.983633 kernel: ACPI: Added _OSI(Module Device)
Jul 7 00:51:21.983644 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 00:51:21.983654 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 00:51:21.983665 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 00:51:21.983675 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 7 00:51:21.983687 kernel: ACPI: Interpreter enabled
Jul 7 00:51:21.983698 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 7 00:51:21.983708 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 7 00:51:21.983718 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 7 00:51:21.983729 kernel: PCI: Using E820 reservations for host bridge windows
Jul 7 00:51:21.983739 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 7 00:51:21.983749 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 7 00:51:21.983929 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 7 00:51:21.984049 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 7 00:51:21.984172 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 7 00:51:21.984189 kernel: acpiphp: Slot [3] registered
Jul 7 00:51:21.984199 kernel: acpiphp: Slot [4] registered
Jul 7 00:51:21.984210 kernel: acpiphp: Slot [5] registered
Jul 7 00:51:21.984220 kernel: acpiphp: Slot [6] registered
Jul 7 00:51:21.984230 kernel: acpiphp: Slot [7] registered
Jul 7 00:51:21.984240 kernel: acpiphp: Slot [8] registered
Jul 7 00:51:21.984251 kernel: acpiphp: Slot [9] registered
Jul 7 00:51:21.984265 kernel: acpiphp: Slot [10] registered
Jul 7 00:51:21.984275 kernel: acpiphp: Slot [11] registered
Jul 7 00:51:21.984285 kernel: acpiphp: Slot [12] registered
Jul 7 00:51:21.984295 kernel: acpiphp: Slot [13] registered
Jul 7 00:51:21.984306 kernel: acpiphp: Slot [14] registered
Jul 7 00:51:21.984316 kernel: acpiphp: Slot [15] registered
Jul 7 00:51:21.984326 kernel: acpiphp: Slot [16] registered
Jul 7 00:51:21.984336 kernel: acpiphp: Slot [17] registered
Jul 7 00:51:21.984346 kernel: acpiphp: Slot [18] registered
Jul 7 00:51:21.984359 kernel: acpiphp: Slot [19] registered
Jul 7 00:51:21.984369 kernel: acpiphp: Slot [20] registered
Jul 7 00:51:21.984379 kernel: acpiphp: Slot [21] registered
Jul 7 00:51:21.984389 kernel: acpiphp: Slot [22] registered
Jul 7 00:51:21.984399 kernel: acpiphp: Slot [23] registered
Jul 7 00:51:21.984409 kernel: acpiphp: Slot [24] registered
Jul 7 00:51:21.984420 kernel: acpiphp: Slot [25] registered
Jul 7 00:51:21.984430 kernel: acpiphp: Slot [26] registered
Jul 7 00:51:21.984440 kernel: acpiphp: Slot [27] registered
Jul 7 00:51:21.984450 kernel: acpiphp: Slot [28] registered
Jul 7 00:51:21.984462 kernel: acpiphp: Slot [29] registered
Jul 7 00:51:21.984486 kernel: acpiphp: Slot [30] registered
Jul 7 00:51:21.984497 kernel: acpiphp: Slot [31] registered
Jul 7 00:51:21.984547 kernel: PCI host bridge to bus 0000:00
Jul 7 00:51:21.984682 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 7 00:51:21.984782 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 7 00:51:21.984899 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 7 00:51:21.985001 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 7 00:51:21.985096 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Jul 7 00:51:21.985189 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 7 00:51:21.985313 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 7 00:51:21.985429 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 7 00:51:21.985563 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jul 7 00:51:21.985691 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Jul 7 00:51:21.985836 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jul 7 00:51:21.985949 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jul 7 00:51:21.986055 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jul 7 00:51:21.986161 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jul 7 00:51:21.986277 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jul 7 00:51:21.986480 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jul 7 00:51:21.986616 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jul 7 00:51:21.986755 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jul 7 00:51:21.986893 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jul 7 00:51:21.986995 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
Jul 7 00:51:21.987094 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Jul 7 00:51:21.987193 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Jul 7 00:51:21.987293 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 7 00:51:21.987414 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jul 7 00:51:21.987514 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Jul 7 00:51:21.987615 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Jul 7 00:51:21.987714 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
Jul 7 00:51:21.988267 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Jul 7 00:51:21.988395 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jul 7 00:51:21.988503 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jul 7 00:51:21.988616 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Jul 7 00:51:21.988722 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
Jul 7 00:51:21.989078 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Jul 7 00:51:21.989193 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Jul 7 00:51:21.989300 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jul 7 00:51:21.989417 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Jul 7 00:51:21.989529 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Jul 7 00:51:21.989642 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
Jul 7 00:51:21.989748 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
Jul 7 00:51:21.989764 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 7 00:51:21.989775 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 7 00:51:21.989786 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 7 00:51:21.989855 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 7 00:51:21.989866 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 7 00:51:21.989877 kernel: iommu: Default domain type: Translated
Jul 7 00:51:21.989891 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 7 00:51:21.989901 kernel: PCI: Using ACPI for IRQ routing
Jul 7 00:51:21.989912 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 7 00:51:21.989922 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 7 00:51:21.989933 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jul 7 00:51:21.990042 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 7 00:51:21.990147 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 7 00:51:21.990251 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 7 00:51:21.990267 kernel: vgaarb: loaded
Jul 7 00:51:21.990281 kernel: clocksource: Switched to clocksource kvm-clock
Jul 7 00:51:21.990292 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 00:51:21.990302 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 00:51:21.990313 kernel: pnp: PnP ACPI init
Jul 7 00:51:21.990419 kernel: pnp 00:03: [dma 2]
Jul 7 00:51:21.990436 kernel: pnp: PnP ACPI: found 5 devices
Jul 7 00:51:21.990447 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 7 00:51:21.990458 kernel: NET: Registered PF_INET protocol family
Jul 7 00:51:21.990472 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 00:51:21.990482 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 7 00:51:21.990493 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 00:51:21.990969 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 00:51:21.990987 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 7 00:51:21.990998 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 7 00:51:21.991008 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 00:51:21.991019 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 00:51:21.991029 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 00:51:21.991045 kernel: NET: Registered PF_XDP protocol family
Jul 7 00:51:21.991152 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 7 00:51:21.991256 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 7 00:51:21.991349 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 7 00:51:21.991441 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jul 7 00:51:21.991533 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jul 7 00:51:21.991641 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 7 00:51:21.991750 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 7 00:51:21.992864 kernel: PCI: CLS 0 bytes, default 64
Jul 7 00:51:21.992878 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 7 00:51:21.992889 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Jul 7 00:51:21.992900 kernel: Initialise system trusted keyrings
Jul 7 00:51:21.992911 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 7 00:51:21.992921 kernel: Key type asymmetric registered
Jul 7 00:51:21.992932 kernel: Asymmetric key parser 'x509' registered
Jul 7 00:51:21.992942 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 7 00:51:21.992953 kernel: io scheduler mq-deadline registered
Jul 7 00:51:21.992968 kernel: io scheduler kyber registered
Jul 7 00:51:21.992978 kernel: io scheduler bfq registered
Jul 7 00:51:21.992989 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 7 00:51:21.993000 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jul 7 00:51:21.993011 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 7 00:51:21.993021 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 7 00:51:21.993032 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 7 00:51:21.993042 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 00:51:21.993053 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 7 00:51:21.993066 kernel: random: crng init done
Jul 7 00:51:21.993076 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 7 00:51:21.993087 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 7 00:51:21.993097 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 7 00:51:21.993216 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 7 00:51:21.993233 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 7 00:51:21.993328 kernel: rtc_cmos 00:04: registered as rtc0
Jul 7 00:51:21.993423 kernel: rtc_cmos 00:04: setting system clock to 2025-07-07T00:51:21 UTC (1751849481)
Jul 7 00:51:21.993523 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 7 00:51:21.993539 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 7 00:51:21.993550 kernel: NET: Registered PF_INET6 protocol family
Jul 7 00:51:21.993561 kernel: Segment Routing with IPv6
Jul 7 00:51:21.993571 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 00:51:21.993582 kernel: NET: Registered PF_PACKET protocol family
Jul 7 00:51:21.993593 kernel: Key type dns_resolver registered
Jul 7 00:51:21.993603 kernel: IPI shorthand broadcast: enabled
Jul 7 00:51:21.993614 kernel: sched_clock: Marking stable (1093007698, 167729183)->(1295433896, -34697015)
Jul 7 00:51:21.993628 kernel: registered taskstats version 1
Jul 7 00:51:21.993638 kernel: Loading compiled-in X.509 certificates
Jul 7 00:51:21.994829 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b'
Jul 7 00:51:21.994842 kernel: Key type .fscrypt registered
Jul 7 00:51:21.994852 kernel: Key type fscrypt-provisioning registered
Jul 7 00:51:21.994863 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 00:51:21.994874 kernel: ima: Allocated hash algorithm: sha1
Jul 7 00:51:21.994885 kernel: ima: No architecture policies found
Jul 7 00:51:21.994899 kernel: clk: Disabling unused clocks
Jul 7 00:51:21.994909 kernel: Freeing unused kernel image (initmem) memory: 42868K
Jul 7 00:51:21.994920 kernel: Write protecting the kernel read-only data: 36864k
Jul 7 00:51:21.994931 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Jul 7 00:51:21.994941 kernel: Run /init as init process
Jul 7 00:51:21.994951 kernel: with arguments:
Jul 7 00:51:21.994962 kernel: /init
Jul 7 00:51:21.994973 kernel: with environment:
Jul 7 00:51:21.994983 kernel: HOME=/
Jul 7 00:51:21.994993 kernel: TERM=linux
Jul 7 00:51:21.995006 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 00:51:21.995020 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 7 00:51:21.995034 systemd[1]: Detected virtualization kvm.
Jul 7 00:51:21.995046 systemd[1]: Detected architecture x86-64.
Jul 7 00:51:21.995057 systemd[1]: Running in initrd.
Jul 7 00:51:21.995069 systemd[1]: No hostname configured, using default hostname.
Jul 7 00:51:21.995080 systemd[1]: Hostname set to .
Jul 7 00:51:21.995094 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 00:51:21.995106 systemd[1]: Queued start job for default target initrd.target.
Jul 7 00:51:21.995117 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 00:51:21.995129 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 00:51:21.995141 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 00:51:21.995152 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 00:51:21.995164 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 00:51:21.995186 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 00:51:21.995202 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 00:51:21.995214 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 00:51:21.995226 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 00:51:21.995238 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 00:51:21.995252 systemd[1]: Reached target paths.target - Path Units.
Jul 7 00:51:21.995263 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 00:51:21.995275 systemd[1]: Reached target swap.target - Swaps.
Jul 7 00:51:21.995287 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 00:51:21.995299 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 00:51:21.995310 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 00:51:21.995322 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 00:51:21.995334 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 7 00:51:21.995346 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 00:51:21.995360 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 00:51:21.995372 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 00:51:21.995384 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 00:51:21.995396 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 00:51:21.995408 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 00:51:21.995420 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 00:51:21.995431 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 00:51:21.995443 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 00:51:21.995457 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 00:51:21.995468 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:51:21.995505 systemd-journald[184]: Collecting audit messages is disabled.
Jul 7 00:51:21.995535 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 00:51:21.995550 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 00:51:21.995562 systemd-journald[184]: Journal started
Jul 7 00:51:21.995589 systemd-journald[184]: Runtime Journal (/run/log/journal/dee3a1b0c5124232b5648c775e1ef68c) is 8.0M, max 78.3M, 70.3M free.
Jul 7 00:51:21.986710 systemd-modules-load[185]: Inserted module 'overlay'
Jul 7 00:51:22.003968 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 00:51:22.006244 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 00:51:22.052748 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 00:51:22.052775 kernel: Bridge firewalling registered
Jul 7 00:51:22.029036 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jul 7 00:51:22.052426 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 00:51:22.054493 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:51:22.062046 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 00:51:22.065359 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 00:51:22.067965 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 00:51:22.076470 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 00:51:22.082842 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 00:51:22.088291 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 00:51:22.096004 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 00:51:22.097970 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 00:51:22.098695 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 00:51:22.109829 dracut-cmdline[212]: dracut-dracut-053
Jul 7 00:51:22.112546 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 7 00:51:22.119084 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 00:51:22.122979 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 00:51:22.146115 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 00:51:22.161051 systemd-resolved[219]: Positive Trust Anchors:
Jul 7 00:51:22.161067 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 00:51:22.161118 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 00:51:22.164655 systemd-resolved[219]: Defaulting to hostname 'linux'.
Jul 7 00:51:22.166564 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 00:51:22.169243 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 00:51:22.203850 kernel: SCSI subsystem initialized
Jul 7 00:51:22.214825 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 00:51:22.227830 kernel: iscsi: registered transport (tcp)
Jul 7 00:51:22.253328 kernel: iscsi: registered transport (qla4xxx)
Jul 7 00:51:22.253387 kernel: QLogic iSCSI HBA Driver
Jul 7 00:51:22.287017 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 00:51:22.294966 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 00:51:22.323686 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 00:51:22.323752 kernel: device-mapper: uevent: version 1.0.3
Jul 7 00:51:22.325883 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 7 00:51:22.367827 kernel: raid6: sse2x4 gen() 12054 MB/s
Jul 7 00:51:22.385845 kernel: raid6: sse2x2 gen() 13633 MB/s
Jul 7 00:51:22.404296 kernel: raid6: sse2x1 gen() 9253 MB/s
Jul 7 00:51:22.404333 kernel: raid6: using algorithm sse2x2 gen() 13633 MB/s
Jul 7 00:51:22.423317 kernel: raid6: .... xor() 8808 MB/s, rmw enabled
Jul 7 00:51:22.423353 kernel: raid6: using ssse3x2 recovery algorithm
Jul 7 00:51:22.446911 kernel: xor: measuring software checksum speed
Jul 7 00:51:22.446988 kernel: prefetch64-sse : 16933 MB/sec
Jul 7 00:51:22.450823 kernel: generic_sse : 13763 MB/sec
Jul 7 00:51:22.450878 kernel: xor: using function: prefetch64-sse (16933 MB/sec)
Jul 7 00:51:22.633881 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 00:51:22.648848 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 00:51:22.654959 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 00:51:22.694604 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Jul 7 00:51:22.704932 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 00:51:22.714112 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 00:51:22.746322 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
Jul 7 00:51:22.788905 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 00:51:22.798038 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 00:51:22.844345 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 00:51:22.859566 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 00:51:22.901419 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 00:51:22.903756 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 00:51:22.905118 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 00:51:22.906364 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 00:51:22.912994 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 00:51:22.925816 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jul 7 00:51:22.932611 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 00:51:22.951168 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 00:51:22.956817 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jul 7 00:51:22.956989 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 7 00:51:22.951309 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 00:51:22.966444 kernel: GPT:17805311 != 20971519
Jul 7 00:51:22.966461 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 7 00:51:22.966473 kernel: GPT:17805311 != 20971519
Jul 7 00:51:22.966484 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 00:51:22.966501 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 00:51:22.954723 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 00:51:22.955398 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:51:22.955528 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:51:22.964631 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:51:22.976251 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:51:23.000826 kernel: libata version 3.00 loaded.
Jul 7 00:51:23.001829 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (454)
Jul 7 00:51:23.006302 kernel: ata_piix 0000:00:01.1: version 2.13
Jul 7 00:51:23.007814 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (465)
Jul 7 00:51:23.020826 kernel: scsi host0: ata_piix
Jul 7 00:51:23.020996 kernel: scsi host1: ata_piix
Jul 7 00:51:23.021876 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Jul 7 00:51:23.021891 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Jul 7 00:51:23.022156 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 7 00:51:23.066087 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:51:23.077419 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 7 00:51:23.083929 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 7 00:51:23.094051 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 7 00:51:23.095355 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 7 00:51:23.102925 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 00:51:23.105943 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 00:51:23.124199 disk-uuid[501]: Primary Header is updated.
Jul 7 00:51:23.124199 disk-uuid[501]: Secondary Entries is updated.
Jul 7 00:51:23.124199 disk-uuid[501]: Secondary Header is updated.
Jul 7 00:51:23.130209 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 00:51:23.135812 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 00:51:23.143889 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 00:51:23.155904 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 00:51:24.168886 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 00:51:24.169731 disk-uuid[509]: The operation has completed successfully.
Jul 7 00:51:24.254709 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 00:51:24.254915 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 00:51:24.276927 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 00:51:24.293394 sh[524]: Success
Jul 7 00:51:24.325858 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Jul 7 00:51:24.416415 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 00:51:24.432082 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 00:51:24.442327 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 00:51:24.481874 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f
Jul 7 00:51:24.481959 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:51:24.485922 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 7 00:51:24.492107 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 7 00:51:24.496853 kernel: BTRFS info (device dm-0): using free space tree
Jul 7 00:51:24.517899 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 00:51:24.520313 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 00:51:24.527302 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 00:51:24.537253 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 00:51:24.563882 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 00:51:24.570528 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:51:24.570592 kernel: BTRFS info (device vda6): using free space tree
Jul 7 00:51:24.583955 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 7 00:51:24.608136 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 7 00:51:24.614778 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 00:51:24.633138 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 00:51:24.645014 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 00:51:24.690639 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 00:51:24.696944 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 00:51:24.733138 systemd-networkd[707]: lo: Link UP
Jul 7 00:51:24.733147 systemd-networkd[707]: lo: Gained carrier
Jul 7 00:51:24.735781 systemd-networkd[707]: Enumeration completed
Jul 7 00:51:24.736506 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 00:51:24.736760 systemd-networkd[707]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:51:24.736764 systemd-networkd[707]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 00:51:24.737947 systemd-networkd[707]: eth0: Link UP
Jul 7 00:51:24.737951 systemd-networkd[707]: eth0: Gained carrier
Jul 7 00:51:24.737958 systemd-networkd[707]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:51:24.739103 systemd[1]: Reached target network.target - Network.
Jul 7 00:51:24.751877 systemd-networkd[707]: eth0: DHCPv4 address 172.24.4.161/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jul 7 00:51:24.799016 ignition[649]: Ignition 2.19.0
Jul 7 00:51:24.799037 ignition[649]: Stage: fetch-offline
Jul 7 00:51:24.799107 ignition[649]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:51:24.801038 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 00:51:24.799124 ignition[649]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 7 00:51:24.799253 ignition[649]: parsed url from cmdline: ""
Jul 7 00:51:24.799257 ignition[649]: no config URL provided
Jul 7 00:51:24.799263 ignition[649]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 00:51:24.799272 ignition[649]: no config at "/usr/lib/ignition/user.ign"
Jul 7 00:51:24.799277 ignition[649]: failed to fetch config: resource requires networking
Jul 7 00:51:24.799493 ignition[649]: Ignition finished successfully
Jul 7 00:51:24.816972 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 7 00:51:24.830950 ignition[715]: Ignition 2.19.0
Jul 7 00:51:24.830964 ignition[715]: Stage: fetch
Jul 7 00:51:24.831136 ignition[715]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:51:24.831147 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 7 00:51:24.831240 ignition[715]: parsed url from cmdline: ""
Jul 7 00:51:24.831243 ignition[715]: no config URL provided
Jul 7 00:51:24.831249 ignition[715]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 00:51:24.831257 ignition[715]: no config at "/usr/lib/ignition/user.ign"
Jul 7 00:51:24.831412 ignition[715]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jul 7 00:51:24.831428 ignition[715]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jul 7 00:51:24.831517 ignition[715]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jul 7 00:51:25.109919 ignition[715]: GET result: OK
Jul 7 00:51:25.110174 ignition[715]: parsing config with SHA512: d58ba739635c73d47801f27f64e2d327b7be479efff7666886b2ee2b8c5e7e73a9088bfb2b4de8f5f2510c1771fe85bf5439bdda8e4062148c09d3b8b0551f5f
Jul 7 00:51:25.125740 unknown[715]: fetched base config from "system"
Jul 7 00:51:25.125852 unknown[715]: fetched base config from "system"
Jul 7 00:51:25.127696 ignition[715]: fetch: fetch complete
Jul 7 00:51:25.125877 unknown[715]: fetched user config from "openstack"
Jul 7 00:51:25.127710 ignition[715]: fetch: fetch passed
Jul 7 00:51:25.132928 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 7 00:51:25.127843 ignition[715]: Ignition finished successfully
Jul 7 00:51:25.146245 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 00:51:25.216744 ignition[721]: Ignition 2.19.0
Jul 7 00:51:25.216772 ignition[721]: Stage: kargs
Jul 7 00:51:25.217238 ignition[721]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:51:25.217267 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 7 00:51:25.219891 ignition[721]: kargs: kargs passed
Jul 7 00:51:25.222487 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 00:51:25.219992 ignition[721]: Ignition finished successfully
Jul 7 00:51:25.232415 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 00:51:25.262641 ignition[727]: Ignition 2.19.0
Jul 7 00:51:25.262669 ignition[727]: Stage: disks
Jul 7 00:51:25.263189 ignition[727]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:51:25.263217 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 7 00:51:25.266357 ignition[727]: disks: disks passed
Jul 7 00:51:25.266504 ignition[727]: Ignition finished successfully
Jul 7 00:51:25.268553 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 00:51:25.269561 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 00:51:25.271112 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 00:51:25.272983 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 00:51:25.274933 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 00:51:25.276702 systemd[1]: Reached target basic.target - Basic System.
Jul 7 00:51:25.294248 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 00:51:25.325825 systemd-fsck[735]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jul 7 00:51:25.339997 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 00:51:25.352314 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 00:51:25.547660 kernel: EXT4-fs (vda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none.
Jul 7 00:51:25.547983 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 00:51:25.549186 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 00:51:25.557048 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 00:51:25.562055 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 00:51:25.565477 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 7 00:51:25.573250 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (743)
Jul 7 00:51:25.573276 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 00:51:25.575259 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:51:25.575285 kernel: BTRFS info (device vda6): using free space tree
Jul 7 00:51:25.575146 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jul 7 00:51:25.587725 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 7 00:51:25.588447 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 00:51:25.588489 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 00:51:25.593890 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 00:51:25.594431 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 00:51:25.605980 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 00:51:25.710177 initrd-setup-root[771]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 00:51:25.718939 initrd-setup-root[778]: cut: /sysroot/etc/group: No such file or directory
Jul 7 00:51:25.733109 initrd-setup-root[785]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 00:51:25.745106 initrd-setup-root[792]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 00:51:25.962746 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 00:51:25.973140 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 00:51:25.983193 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 00:51:26.003723 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 00:51:26.005250 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 00:51:26.031406 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 00:51:26.062893 ignition[860]: INFO : Ignition 2.19.0
Jul 7 00:51:26.062893 ignition[860]: INFO : Stage: mount
Jul 7 00:51:26.064205 ignition[860]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 00:51:26.064205 ignition[860]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 7 00:51:26.066385 ignition[860]: INFO : mount: mount passed
Jul 7 00:51:26.066385 ignition[860]: INFO : Ignition finished successfully
Jul 7 00:51:26.065884 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 00:51:26.233512 systemd-networkd[707]: eth0: Gained IPv6LL
Jul 7 00:51:32.846374 coreos-metadata[745]: Jul 07 00:51:32.846 WARN failed to locate config-drive, using the metadata service API instead
Jul 7 00:51:32.907024 coreos-metadata[745]: Jul 07 00:51:32.905 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jul 7 00:51:32.924988 coreos-metadata[745]: Jul 07 00:51:32.924 INFO Fetch successful
Jul 7 00:51:32.926534 coreos-metadata[745]: Jul 07 00:51:32.926 INFO wrote hostname ci-4081-3-4-7-8dfaddf5bb.novalocal to /sysroot/etc/hostname
Jul 7 00:51:32.935192 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jul 7 00:51:32.935984 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jul 7 00:51:32.952247 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 00:51:33.007279 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 00:51:33.028994 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (876)
Jul 7 00:51:33.037837 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 00:51:33.037906 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:51:33.042202 kernel: BTRFS info (device vda6): using free space tree
Jul 7 00:51:33.056888 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 7 00:51:33.065400 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 00:51:33.138947 ignition[894]: INFO : Ignition 2.19.0
Jul 7 00:51:33.138947 ignition[894]: INFO : Stage: files
Jul 7 00:51:33.142202 ignition[894]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 00:51:33.142202 ignition[894]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 7 00:51:33.148217 ignition[894]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 00:51:33.148217 ignition[894]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 00:51:33.148217 ignition[894]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 00:51:33.153963 ignition[894]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 00:51:33.153963 ignition[894]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 00:51:33.153963 ignition[894]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 00:51:33.153769 unknown[894]: wrote ssh authorized keys file for user: core
Jul 7 00:51:33.161452 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 7 00:51:33.161452 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 7 00:51:33.161452 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 7 00:51:33.161452 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 7 00:51:33.223850 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 7 00:51:33.604986 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 7 00:51:33.604986 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 00:51:33.604986 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 00:51:33.604986 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 00:51:33.615031 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 00:51:33.615031 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 00:51:33.615031 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 00:51:33.615031 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 00:51:33.615031 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 00:51:33.615031 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 00:51:33.615031 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 00:51:33.615031 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 00:51:33.615031 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 00:51:33.615031 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 00:51:33.615031 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 7 00:51:34.484480 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 7 00:51:36.843043 ignition[894]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 7 00:51:36.843043 ignition[894]: INFO : files: op(c): [started] processing unit "containerd.service"
Jul 7 00:51:36.852323 ignition[894]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 7 00:51:36.852323 ignition[894]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 7 00:51:36.852323 ignition[894]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jul 7 00:51:36.852323 ignition[894]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jul 7 00:51:36.852323 ignition[894]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 00:51:36.852323 ignition[894]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 00:51:36.852323 ignition[894]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jul 7 00:51:36.852323 ignition[894]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 00:51:36.852323 ignition[894]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 00:51:36.852323 ignition[894]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 00:51:36.852323 ignition[894]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 00:51:36.852323 ignition[894]: INFO : files: files passed
Jul 7 00:51:36.852323 ignition[894]: INFO : Ignition finished successfully
Jul 7 00:51:36.861061 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 00:51:36.879200 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 00:51:36.884392 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 00:51:36.901946 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 00:51:36.902455 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 00:51:36.925956 initrd-setup-root-after-ignition[923]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 00:51:36.927650 initrd-setup-root-after-ignition[923]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 00:51:36.930005 initrd-setup-root-after-ignition[927]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 00:51:36.941310 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 00:51:36.944989 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 00:51:36.952131 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 00:51:37.001195 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 00:51:37.001735 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 00:51:37.006377 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 00:51:37.009612 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 00:51:37.012219 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 00:51:37.017170 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 00:51:37.056362 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 00:51:37.068401 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 00:51:37.116270 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 00:51:37.121633 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 00:51:37.124070 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 00:51:37.128077 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 00:51:37.128534 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 00:51:37.131668 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 00:51:37.133481 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 00:51:37.136452 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 00:51:37.139299 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 00:51:37.142261 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 00:51:37.146027 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 00:51:37.150181 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 00:51:37.154367 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 00:51:37.158306 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 00:51:37.162204 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 00:51:37.165980 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 00:51:37.166532 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 00:51:37.170042 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 00:51:37.172159 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 00:51:37.174696 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 00:51:37.177340 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 00:51:37.179690 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 00:51:37.180285 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 00:51:37.183558 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 00:51:37.183942 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 00:51:37.185776 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 00:51:37.186121 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 00:51:37.196432 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 00:51:37.199643 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 7 00:51:37.202148 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 00:51:37.211529 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 00:51:37.212146 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 00:51:37.213965 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 00:51:37.219007 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 00:51:37.219921 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 00:51:37.230186 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 7 00:51:37.230342 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 7 00:51:37.234157 ignition[949]: INFO : Ignition 2.19.0
Jul 7 00:51:37.235152 ignition[949]: INFO : Stage: umount
Jul 7 00:51:37.235152 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 00:51:37.235152 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 7 00:51:37.238560 ignition[949]: INFO : umount: umount passed
Jul 7 00:51:37.239086 ignition[949]: INFO : Ignition finished successfully
Jul 7 00:51:37.241170 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 00:51:37.241305 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 00:51:37.243380 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 00:51:37.243449 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 00:51:37.244330 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 00:51:37.244413 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 00:51:37.247233 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 7 00:51:37.247280 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 7 00:51:37.248322 systemd[1]: Stopped target network.target - Network.
Jul 7 00:51:37.249328 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 00:51:37.249413 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 00:51:37.250432 systemd[1]: Stopped target paths.target - Path Units.
Jul 7 00:51:37.252171 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 7 00:51:37.255869 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 00:51:37.257040 systemd[1]: Stopped target slices.target - Slice Units.
Jul 7 00:51:37.258392 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 7 00:51:37.260084 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 7 00:51:37.260144 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 00:51:37.260644 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 7 00:51:37.260682 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 00:51:37.261193 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 7 00:51:37.261249 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 7 00:51:37.261745 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 7 00:51:37.261812 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 7 00:51:37.263091 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 7 00:51:37.265158 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 7 00:51:37.267557 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 00:51:37.268135 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 7 00:51:37.268238 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 7 00:51:37.269830 systemd-networkd[707]: eth0: DHCPv6 lease lost
Jul 7 00:51:37.271516 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 7 00:51:37.271633 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 7 00:51:37.275009 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 7 00:51:37.275569 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 7 00:51:37.277372 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 7 00:51:37.277590 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 7 00:51:37.286290 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 7 00:51:37.286392 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 00:51:37.290923 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 7 00:51:37.292058 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 7 00:51:37.292770 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 00:51:37.294149 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 00:51:37.294221 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 00:51:37.294753 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 7 00:51:37.295458 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 7 00:51:37.296185 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 7 00:51:37.296229 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 00:51:37.298112 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 00:51:37.309834 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 7 00:51:37.310610 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 00:51:37.312228 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 7 00:51:37.312314 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 7 00:51:37.314931 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 7 00:51:37.315683 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 7 00:51:37.317018 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 7 00:51:37.317081 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 00:51:37.318338 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 7 00:51:37.318415 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 00:51:37.319933 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 7 00:51:37.319983 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 7 00:51:37.321155 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 00:51:37.321203 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 00:51:37.330963 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 7 00:51:37.332226 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 7 00:51:37.332280 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 00:51:37.332859 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:51:37.332912 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:51:37.338223 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 7 00:51:37.338331 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 7 00:51:37.339849 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 7 00:51:37.346013 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 7 00:51:37.360666 systemd[1]: Switching root.
Jul 7 00:51:37.396488 systemd-journald[184]: Journal stopped
Jul 7 00:51:39.534454 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Jul 7 00:51:39.539421 kernel: SELinux: policy capability network_peer_controls=1
Jul 7 00:51:39.539516 kernel: SELinux: policy capability open_perms=1
Jul 7 00:51:39.539566 kernel: SELinux: policy capability extended_socket_class=1
Jul 7 00:51:39.539597 kernel: SELinux: policy capability always_check_network=0
Jul 7 00:51:39.539642 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 7 00:51:39.539677 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 7 00:51:39.539689 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 7 00:51:39.539719 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 7 00:51:39.539777 kernel: audit: type=1403 audit(1751849498.332:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 7 00:51:39.539913 systemd[1]: Successfully loaded SELinux policy in 82.846ms.
Jul 7 00:51:39.540059 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 33.659ms.
Jul 7 00:51:39.540094 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 7 00:51:39.540139 systemd[1]: Detected virtualization kvm.
Jul 7 00:51:39.540183 systemd[1]: Detected architecture x86-64.
Jul 7 00:51:39.540196 systemd[1]: Detected first boot.
Jul 7 00:51:39.540242 systemd[1]: Hostname set to <ci-4081-3-4-7-8dfaddf5bb.novalocal>.
Jul 7 00:51:39.540299 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 00:51:39.540337 zram_generator::config[1008]: No configuration found.
Jul 7 00:51:39.540400 systemd[1]: Populated /etc with preset unit settings.
Jul 7 00:51:39.540414 systemd[1]: Queued start job for default target multi-user.target.
Jul 7 00:51:39.540466 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 7 00:51:39.540507 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 7 00:51:39.540526 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 7 00:51:39.540561 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 7 00:51:39.540596 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 7 00:51:39.540650 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 7 00:51:39.540682 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 7 00:51:39.540695 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 7 00:51:39.540712 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 7 00:51:39.540759 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 00:51:39.540773 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 00:51:39.540926 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 7 00:51:39.540963 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 7 00:51:39.541032 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 7 00:51:39.541067 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 00:51:39.541111 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 7 00:51:39.541124 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 00:51:39.541142 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 7 00:51:39.541154 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 00:51:39.541190 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 00:51:39.541270 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 00:51:39.541342 systemd[1]: Reached target swap.target - Swaps.
Jul 7 00:51:39.541384 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 7 00:51:39.541405 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 7 00:51:39.541423 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 00:51:39.541442 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 7 00:51:39.541454 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 00:51:39.541487 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 00:51:39.541514 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 00:51:39.541549 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 7 00:51:39.541608 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 7 00:51:39.541627 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 7 00:51:39.541663 systemd[1]: Mounting media.mount - External Media Directory...
Jul 7 00:51:39.541695 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:51:39.543251 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 7 00:51:39.543274 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 7 00:51:39.543286 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 7 00:51:39.543345 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 7 00:51:39.543365 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 00:51:39.543399 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 00:51:39.543427 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 7 00:51:39.543440 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 00:51:39.543451 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 00:51:39.543467 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 00:51:39.543482 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 7 00:51:39.543494 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 00:51:39.543534 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 7 00:51:39.543549 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jul 7 00:51:39.543600 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jul 7 00:51:39.543614 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 00:51:39.543625 kernel: ACPI: bus type drm_connector registered
Jul 7 00:51:39.543637 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 00:51:39.543648 kernel: fuse: init (API version 7.39)
Jul 7 00:51:39.543668 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 00:51:39.543710 kernel: loop: module loaded
Jul 7 00:51:39.543745 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 7 00:51:39.543759 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 00:51:39.543814 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:51:39.543877 systemd-journald[1116]: Collecting audit messages is disabled.
Jul 7 00:51:39.543982 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 7 00:51:39.543998 systemd-journald[1116]: Journal started
Jul 7 00:51:39.544071 systemd-journald[1116]: Runtime Journal (/run/log/journal/dee3a1b0c5124232b5648c775e1ef68c) is 8.0M, max 78.3M, 70.3M free.
Jul 7 00:51:39.553836 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 00:51:39.555157 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 7 00:51:39.556206 systemd[1]: Mounted media.mount - External Media Directory.
Jul 7 00:51:39.556932 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 7 00:51:39.557596 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 7 00:51:39.558469 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 7 00:51:39.559408 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 7 00:51:39.560455 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 00:51:39.561359 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 7 00:51:39.561667 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 7 00:51:39.562538 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 00:51:39.562690 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 00:51:39.563840 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 00:51:39.563991 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 00:51:39.564923 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 00:51:39.565150 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 00:51:39.566117 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 7 00:51:39.566411 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 7 00:51:39.567263 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 00:51:39.567521 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 00:51:39.568668 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 00:51:39.569583 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 00:51:39.571247 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 7 00:51:39.583838 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 00:51:39.592011 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 7 00:51:39.596927 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 7 00:51:39.599205 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 7 00:51:39.612035 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 7 00:51:39.627868 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 7 00:51:39.628522 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 00:51:39.631067 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 7 00:51:39.632717 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 00:51:39.643987 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 00:51:39.649979 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 00:51:39.660178 systemd-journald[1116]: Time spent on flushing to /var/log/journal/dee3a1b0c5124232b5648c775e1ef68c is 53.187ms for 929 entries.
Jul 7 00:51:39.660178 systemd-journald[1116]: System Journal (/var/log/journal/dee3a1b0c5124232b5648c775e1ef68c) is 8.0M, max 584.8M, 576.8M free.
Jul 7 00:51:39.749095 systemd-journald[1116]: Received client request to flush runtime journal.
Jul 7 00:51:39.666261 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 7 00:51:39.667103 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 7 00:51:39.689871 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 7 00:51:39.690722 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 7 00:51:39.737141 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 00:51:39.746216 systemd-tmpfiles[1164]: ACLs are not supported, ignoring.
Jul 7 00:51:39.746241 systemd-tmpfiles[1164]: ACLs are not supported, ignoring.
Jul 7 00:51:39.752328 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 7 00:51:39.763588 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 00:51:39.765426 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 00:51:39.777045 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 7 00:51:39.782962 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 7 00:51:39.809603 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 7 00:51:39.832400 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 7 00:51:39.845968 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 00:51:39.862957 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Jul 7 00:51:39.862979 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Jul 7 00:51:39.869275 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 00:51:40.436149 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 7 00:51:40.446147 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 00:51:40.473986 systemd-udevd[1192]: Using default interface naming scheme 'v255'.
Jul 7 00:51:40.504773 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 00:51:40.522028 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 00:51:40.576268 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 7 00:51:40.586263 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jul 7 00:51:40.641840 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1201)
Jul 7 00:51:40.680077 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 7 00:51:40.700155 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 7 00:51:40.769820 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jul 7 00:51:40.790858 kernel: ACPI: button: Power Button [PWRF]
Jul 7 00:51:40.805913 systemd-networkd[1200]: lo: Link UP
Jul 7 00:51:40.806378 systemd-networkd[1200]: lo: Gained carrier
Jul 7 00:51:40.809543 systemd-networkd[1200]: Enumeration completed
Jul 7 00:51:40.809740 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 00:51:40.812008 systemd-networkd[1200]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:51:40.812634 systemd-networkd[1200]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 00:51:40.818832 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jul 7 00:51:40.817295 systemd-networkd[1200]: eth0: Link UP
Jul 7 00:51:40.817300 systemd-networkd[1200]: eth0: Gained carrier
Jul 7 00:51:40.817319 systemd-networkd[1200]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:51:40.822275 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 7 00:51:40.831870 systemd-networkd[1200]: eth0: DHCPv4 address 172.24.4.161/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jul 7 00:51:40.833828 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 7 00:51:40.839094 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:51:40.852821 kernel: mousedev: PS/2 mouse device common for all mice
Jul 7 00:51:40.854833 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jul 7 00:51:40.854868 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jul 7 00:51:40.860033 kernel: Console: switching to colour dummy device 80x25
Jul 7 00:51:40.863074 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jul 7 00:51:40.863170 kernel: [drm] features: -context_init
Jul 7 00:51:40.867368 kernel: [drm] number of scanouts: 1
Jul 7 00:51:40.867409 kernel: [drm] number of cap sets: 0
Jul 7 00:51:40.869119 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:51:40.869389 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:51:40.873866 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jul 7 00:51:40.882054 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:51:40.910982 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jul 7 00:51:40.911074 kernel: Console: switching to colour frame buffer device 160x50
Jul 7 00:51:40.920185 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jul 7 00:51:40.920660 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:51:40.920947 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:51:40.927019 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:51:40.938079 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 7 00:51:40.946095 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 7 00:51:40.971510 lvm[1243]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 7 00:51:41.003275 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 7 00:51:41.004193 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 00:51:41.014038 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 7 00:51:41.019103 lvm[1249]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 7 00:51:41.040356 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 7 00:51:41.042676 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:51:41.046612 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 00:51:41.048931 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 7 00:51:41.048978 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 00:51:41.049157 systemd[1]: Reached target machines.target - Containers.
Jul 7 00:51:41.051152 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 7 00:51:41.056945 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 7 00:51:41.060003 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 7 00:51:41.061283 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 00:51:41.071015 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 7 00:51:41.076956 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 7 00:51:41.082753 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 7 00:51:41.085340 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 7 00:51:41.099540 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 7 00:51:41.129832 kernel: loop0: detected capacity change from 0 to 221472
Jul 7 00:51:41.145326 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 7 00:51:41.147130 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 7 00:51:41.183025 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 7 00:51:41.218672 kernel: loop1: detected capacity change from 0 to 8
Jul 7 00:51:41.247279 kernel: loop2: detected capacity change from 0 to 142488
Jul 7 00:51:41.355856 kernel: loop3: detected capacity change from 0 to 140768
Jul 7 00:51:41.424911 kernel: loop4: detected capacity change from 0 to 221472
Jul 7 00:51:41.463890 kernel: loop5: detected capacity change from 0 to 8
Jul 7 00:51:41.475975 kernel: loop6: detected capacity change from 0 to 142488
Jul 7 00:51:41.545182 kernel: loop7: detected capacity change from 0 to 140768
Jul 7 00:51:41.589900 (sd-merge)[1273]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jul 7 00:51:41.590588 (sd-merge)[1273]: Merged extensions into '/usr'.
Jul 7 00:51:41.596494 systemd[1]: Reloading requested from client PID 1260 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 7 00:51:41.596876 systemd[1]: Reloading...
Jul 7 00:51:41.673859 zram_generator::config[1298]: No configuration found.
Jul 7 00:51:41.891729 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 00:51:41.965550 systemd[1]: Reloading finished in 368 ms.
Jul 7 00:51:41.982240 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 7 00:51:41.996933 systemd[1]: Starting ensure-sysext.service...
Jul 7 00:51:42.005566 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 00:51:42.015958 systemd[1]: Reloading requested from client PID 1362 ('systemctl') (unit ensure-sysext.service)...
Jul 7 00:51:42.015980 systemd[1]: Reloading...
Jul 7 00:51:42.055237 systemd-tmpfiles[1363]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 7 00:51:42.056336 systemd-tmpfiles[1363]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 7 00:51:42.057714 systemd-tmpfiles[1363]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 7 00:51:42.058207 systemd-tmpfiles[1363]: ACLs are not supported, ignoring.
Jul 7 00:51:42.061289 systemd-tmpfiles[1363]: ACLs are not supported, ignoring.
Jul 7 00:51:42.068655 systemd-tmpfiles[1363]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 00:51:42.071054 systemd-tmpfiles[1363]: Skipping /boot
Jul 7 00:51:42.083046 systemd-tmpfiles[1363]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 00:51:42.084676 systemd-tmpfiles[1363]: Skipping /boot
Jul 7 00:51:42.103042 zram_generator::config[1392]: No configuration found.
Jul 7 00:51:42.110366 ldconfig[1256]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 7 00:51:42.273907 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 00:51:42.347708 systemd[1]: Reloading finished in 331 ms.
Jul 7 00:51:42.359351 systemd-networkd[1200]: eth0: Gained IPv6LL
Jul 7 00:51:42.364610 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 7 00:51:42.367761 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 7 00:51:42.368774 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 00:51:42.394003 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 7 00:51:42.409973 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 7 00:51:42.416958 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 7 00:51:42.421338 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 00:51:42.433979 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 7 00:51:42.449078 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:51:42.449279 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 00:51:42.454060 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 00:51:42.467085 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 00:51:42.476126 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 00:51:42.478836 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 00:51:42.478986 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:51:42.486912 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:51:42.487168 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 00:51:42.487411 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 00:51:42.487574 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:51:42.492138 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 00:51:42.495023 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 00:51:42.500281 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 00:51:42.500521 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 00:51:42.509876 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 7 00:51:42.517329 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 00:51:42.517653 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 00:51:42.529004 systemd[1]: Finished ensure-sysext.service.
Jul 7 00:51:42.534400 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 7 00:51:42.539096 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:51:42.539451 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 00:51:42.545165 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 00:51:42.557168 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 00:51:42.564992 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 00:51:42.572381 augenrules[1499]: No rules
Jul 7 00:51:42.580992 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 00:51:42.587961 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 00:51:42.598621 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 7 00:51:42.619973 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 7 00:51:42.621613 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 00:51:42.622507 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 7 00:51:42.626512 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 00:51:42.632910 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 00:51:42.633755 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 00:51:42.633956 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 00:51:42.634649 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 00:51:42.637180 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 00:51:42.640386 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 00:51:42.640586 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 00:51:42.653655 systemd-resolved[1470]: Positive Trust Anchors:
Jul 7 00:51:42.654087 systemd-resolved[1470]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 00:51:42.654198 systemd-resolved[1470]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 00:51:42.655266 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 00:51:42.655350 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 00:51:42.661219 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 7 00:51:42.662301 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 7 00:51:42.667686 systemd-resolved[1470]: Using system hostname 'ci-4081-3-4-7-8dfaddf5bb.novalocal'.
Jul 7 00:51:42.669716 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 7 00:51:42.673364 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 00:51:42.679940 systemd[1]: Reached target network.target - Network.
Jul 7 00:51:42.680483 systemd[1]: Reached target network-online.target - Network is Online.
Jul 7 00:51:42.683489 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 00:51:42.728921 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 7 00:51:42.730450 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 00:51:42.733368 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 7 00:51:42.735225 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 7 00:51:42.736898 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 7 00:51:42.738148 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 7 00:51:42.738254 systemd[1]: Reached target paths.target - Path Units.
Jul 7 00:51:42.740422 systemd[1]: Reached target time-set.target - System Time Set.
Jul 7 00:51:42.742681 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 7 00:51:42.745004 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 7 00:51:42.747684 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 00:51:42.750941 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 7 00:51:42.754885 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 7 00:51:42.759464 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 7 00:51:42.767189 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 7 00:51:42.771233 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 00:51:42.775671 systemd[1]: Reached target basic.target - Basic System.
Jul 7 00:51:42.780666 systemd[1]: System is tainted: cgroupsv1
Jul 7 00:51:42.780839 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 7 00:51:42.780920 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 7 00:51:42.789968 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 7 00:51:42.798086 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 7 00:51:42.807168 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 7 00:51:42.823046 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 7 00:51:42.831682 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 7 00:51:42.835442 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 7 00:51:43.681674 systemd-timesyncd[1509]: Contacted time server 198.46.254.130:123 (0.flatcar.pool.ntp.org).
Jul 7 00:51:43.681959 systemd-resolved[1470]: Clock change detected. Flushing caches.
Jul 7 00:51:43.684546 jq[1532]: false
Jul 7 00:51:43.683740 systemd-timesyncd[1509]: Initial clock synchronization to Mon 2025-07-07 00:51:43.681507 UTC.
Jul 7 00:51:43.693533 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:51:43.702227 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 7 00:51:43.712375 dbus-daemon[1531]: [system] SELinux support is enabled
Jul 7 00:51:43.714714 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 7 00:51:43.724531 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 7 00:51:43.732687 extend-filesystems[1533]: Found loop4
Jul 7 00:51:43.744803 extend-filesystems[1533]: Found loop5
Jul 7 00:51:43.744803 extend-filesystems[1533]: Found loop6
Jul 7 00:51:43.744803 extend-filesystems[1533]: Found loop7
Jul 7 00:51:43.744803 extend-filesystems[1533]: Found vda
Jul 7 00:51:43.744803 extend-filesystems[1533]: Found vda1
Jul 7 00:51:43.744803 extend-filesystems[1533]: Found vda2
Jul 7 00:51:43.744803 extend-filesystems[1533]: Found vda3
Jul 7 00:51:43.744803 extend-filesystems[1533]: Found usr
Jul 7 00:51:43.744803 extend-filesystems[1533]: Found vda4
Jul 7 00:51:43.744803 extend-filesystems[1533]: Found vda6
Jul 7 00:51:43.744803 extend-filesystems[1533]: Found vda7
Jul 7 00:51:43.744803 extend-filesystems[1533]: Found vda9
Jul 7 00:51:43.744803 extend-filesystems[1533]: Checking size of /dev/vda9
Jul 7 00:51:43.737535 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 7 00:51:43.754569 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 7 00:51:43.790389 extend-filesystems[1533]: Resized partition /dev/vda9
Jul 7 00:51:43.789555 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 7 00:51:43.790687 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 7 00:51:43.797386 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1198)
Jul 7 00:51:43.797474 extend-filesystems[1559]: resize2fs 1.47.1 (20-May-2024)
Jul 7 00:51:43.806567 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
Jul 7 00:51:43.817282 kernel: EXT4-fs (vda9): resized filesystem to 2014203
Jul 7 00:51:43.817634 systemd[1]: Starting update-engine.service - Update Engine...
Jul 7 00:51:43.835481 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 7 00:51:43.836909 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 7 00:51:43.854796 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 7 00:51:43.874968 jq[1565]: true
Jul 7 00:51:43.855119 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 7 00:51:43.857644 systemd[1]: motdgen.service: Deactivated successfully.
Jul 7 00:51:43.857896 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 7 00:51:43.868716 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 7 00:51:43.882739 extend-filesystems[1559]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 7 00:51:43.882739 extend-filesystems[1559]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 7 00:51:43.882739 extend-filesystems[1559]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
Jul 7 00:51:43.885769 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 7 00:51:43.907903 update_engine[1562]: I20250707 00:51:43.900319 1562 main.cc:92] Flatcar Update Engine starting
Jul 7 00:51:43.919606 extend-filesystems[1533]: Resized filesystem in /dev/vda9
Jul 7 00:51:43.886073 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 7 00:51:43.924134 update_engine[1562]: I20250707 00:51:43.919503 1562 update_check_scheduler.cc:74] Next update check in 3m7s
Jul 7 00:51:43.909611 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 7 00:51:43.909897 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 7 00:51:43.935852 (ntainerd)[1579]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 7 00:51:43.962400 jq[1578]: true
Jul 7 00:51:43.982381 tar[1574]: linux-amd64/helm
Jul 7 00:51:43.987597 systemd[1]: Started update-engine.service - Update Engine.
Jul 7 00:51:43.991076 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 7 00:51:43.995948 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 7 00:51:43.995985 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 7 00:51:43.996631 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 7 00:51:43.996648 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 7 00:51:43.999989 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 7 00:51:44.005552 systemd-logind[1555]: New seat seat0.
Jul 7 00:51:44.007604 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 7 00:51:44.029033 systemd-logind[1555]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 7 00:51:44.029063 systemd-logind[1555]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 7 00:51:44.051145 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 7 00:51:44.138391 bash[1608]: Updated "/home/core/.ssh/authorized_keys"
Jul 7 00:51:44.139999 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 7 00:51:44.154862 systemd[1]: Starting sshkeys.service...
Jul 7 00:51:44.219363 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 7 00:51:44.232767 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 7 00:51:44.310762 sshd_keygen[1571]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 7 00:51:44.328477 locksmithd[1594]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 7 00:51:44.354003 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 7 00:51:44.370739 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 7 00:51:44.383174 systemd[1]: Started sshd@0-172.24.4.161:22-172.24.4.1:36788.service - OpenSSH per-connection server daemon (172.24.4.1:36788).
Jul 7 00:51:44.410832 systemd[1]: issuegen.service: Deactivated successfully.
Jul 7 00:51:44.411123 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 7 00:51:44.421788 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 7 00:51:44.462736 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 7 00:51:44.482796 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 7 00:51:44.500147 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 7 00:51:44.505602 systemd[1]: Reached target getty.target - Login Prompts.
Jul 7 00:51:44.521247 containerd[1579]: time="2025-07-07T00:51:44.520022394Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jul 7 00:51:44.565057 containerd[1579]: time="2025-07-07T00:51:44.564751323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 7 00:51:44.568932 containerd[1579]: time="2025-07-07T00:51:44.568679480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 7 00:51:44.568932 containerd[1579]: time="2025-07-07T00:51:44.568726468Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 7 00:51:44.568932 containerd[1579]: time="2025-07-07T00:51:44.568745524Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 7 00:51:44.569051 containerd[1579]: time="2025-07-07T00:51:44.568950418Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 7 00:51:44.569051 containerd[1579]: time="2025-07-07T00:51:44.568972179Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 7 00:51:44.569051 containerd[1579]: time="2025-07-07T00:51:44.569042311Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 7 00:51:44.569133 containerd[1579]: time="2025-07-07T00:51:44.569059433Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 7 00:51:44.570014 containerd[1579]: time="2025-07-07T00:51:44.569552798Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 7 00:51:44.570014 containerd[1579]: time="2025-07-07T00:51:44.569580480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 7 00:51:44.570014 containerd[1579]: time="2025-07-07T00:51:44.569596219Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 7 00:51:44.570014 containerd[1579]: time="2025-07-07T00:51:44.569608042Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 7 00:51:44.570014 containerd[1579]: time="2025-07-07T00:51:44.569694173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 7 00:51:44.570014 containerd[1579]: time="2025-07-07T00:51:44.569939443Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 7 00:51:44.571844 containerd[1579]: time="2025-07-07T00:51:44.571011143Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 7 00:51:44.571844 containerd[1579]: time="2025-07-07T00:51:44.571038304Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 7 00:51:44.571844 containerd[1579]: time="2025-07-07T00:51:44.571447532Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 7 00:51:44.571844 containerd[1579]: time="2025-07-07T00:51:44.571504739Z" level=info msg="metadata content store policy set" policy=shared
Jul 7 00:51:44.586265 containerd[1579]: time="2025-07-07T00:51:44.586227632Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 7 00:51:44.586317 containerd[1579]: time="2025-07-07T00:51:44.586302562Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 7 00:51:44.586372 containerd[1579]: time="2025-07-07T00:51:44.586325746Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 7 00:51:44.586415 containerd[1579]: time="2025-07-07T00:51:44.586396198Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 7 00:51:44.586446 containerd[1579]: time="2025-07-07T00:51:44.586425232Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 7 00:51:44.586650 containerd[1579]: time="2025-07-07T00:51:44.586609878Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 7 00:51:44.587049 containerd[1579]: time="2025-07-07T00:51:44.587019306Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 7 00:51:44.587186 containerd[1579]: time="2025-07-07T00:51:44.587158117Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 7 00:51:44.587219 containerd[1579]: time="2025-07-07T00:51:44.587186550Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 7 00:51:44.587219 containerd[1579]: time="2025-07-07T00:51:44.587203342Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 7 00:51:44.587269 containerd[1579]: time="2025-07-07T00:51:44.587221806Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 7 00:51:44.587269 containerd[1579]: time="2025-07-07T00:51:44.587239980Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 7 00:51:44.587269 containerd[1579]: time="2025-07-07T00:51:44.587254548Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 7 00:51:44.587332 containerd[1579]: time="2025-07-07T00:51:44.587270968Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 7 00:51:44.587332 containerd[1579]: time="2025-07-07T00:51:44.587289233Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 7 00:51:44.587332 containerd[1579]: time="2025-07-07T00:51:44.587305563Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 7 00:51:44.587332 containerd[1579]: time="2025-07-07T00:51:44.587320982Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 7 00:51:44.587439 containerd[1579]: time="2025-07-07T00:51:44.587336211Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 7 00:51:44.587439 containerd[1579]: time="2025-07-07T00:51:44.587408025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 7 00:51:44.587439 containerd[1579]: time="2025-07-07T00:51:44.587424707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 7 00:51:44.587510 containerd[1579]: time="2025-07-07T00:51:44.587439294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 7 00:51:44.587510 containerd[1579]: time="2025-07-07T00:51:44.587456045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 7 00:51:44.587510 containerd[1579]: time="2025-07-07T00:51:44.587481643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 7 00:51:44.587510 containerd[1579]: time="2025-07-07T00:51:44.587498886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 7 00:51:44.587596 containerd[1579]: time="2025-07-07T00:51:44.587512862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 7 00:51:44.587596 containerd[1579]: time="2025-07-07T00:51:44.587529062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 7 00:51:44.587596 containerd[1579]: time="2025-07-07T00:51:44.587544602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 7 00:51:44.587596 containerd[1579]: time="2025-07-07T00:51:44.587561864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 7 00:51:44.587596 containerd[1579]: time="2025-07-07T00:51:44.587575920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 7 00:51:44.587596 containerd[1579]: time="2025-07-07T00:51:44.587589997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 7 00:51:44.587721 containerd[1579]: time="2025-07-07T00:51:44.587604825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 7 00:51:44.587721 containerd[1579]: time="2025-07-07T00:51:44.587624281Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 7 00:51:44.587721 containerd[1579]: time="2025-07-07T00:51:44.587646964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 7 00:51:44.587721 containerd[1579]: time="2025-07-07T00:51:44.587661010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 7 00:51:44.587721 containerd[1579]: time="2025-07-07T00:51:44.587674205Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 7 00:51:44.587721 containerd[1579]: time="2025-07-07T00:51:44.587719540Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 7 00:51:44.587855 containerd[1579]: time="2025-07-07T00:51:44.587739768Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 7 00:51:44.587855 containerd[1579]: time="2025-07-07T00:51:44.587753093Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 7 00:51:44.587855 containerd[1579]: time="2025-07-07T00:51:44.587768632Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 7 00:51:44.587855 containerd[1579]: time="2025-07-07T00:51:44.587780654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 7 00:51:44.587855 containerd[1579]: time="2025-07-07T00:51:44.587794951Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 7 00:51:44.587855 containerd[1579]: time="2025-07-07T00:51:44.587806413Z" level=info msg="NRI interface is disabled by configuration."
Jul 7 00:51:44.587855 containerd[1579]: time="2025-07-07T00:51:44.587819888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 7 00:51:44.588852 containerd[1579]: time="2025-07-07T00:51:44.588118097Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 7 00:51:44.588852 containerd[1579]: time="2025-07-07T00:51:44.588196534Z" level=info msg="Connect containerd service"
Jul 7 00:51:44.588852 containerd[1579]: time="2025-07-07T00:51:44.588240757Z" level=info msg="using legacy CRI server"
Jul 7 00:51:44.588852 containerd[1579]: time="2025-07-07T00:51:44.588249884Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 7 00:51:44.588852 containerd[1579]: time="2025-07-07T00:51:44.588476640Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 7 00:51:44.590825 containerd[1579]: time="2025-07-07T00:51:44.589111911Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 7 00:51:44.590825 containerd[1579]: time="2025-07-07T00:51:44.589910118Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 7 00:51:44.590825 containerd[1579]: time="2025-07-07T00:51:44.589966023Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 7 00:51:44.590825 containerd[1579]: time="2025-07-07T00:51:44.590030574Z" level=info msg="Start subscribing containerd event"
Jul 7 00:51:44.590825 containerd[1579]: time="2025-07-07T00:51:44.590069838Z" level=info msg="Start recovering state"
Jul 7 00:51:44.590825 containerd[1579]: time="2025-07-07T00:51:44.590126153Z" level=info msg="Start event monitor"
Jul 7 00:51:44.590825 containerd[1579]: time="2025-07-07T00:51:44.590142975Z" level=info msg="Start snapshots syncer"
Jul 7 00:51:44.590825 containerd[1579]: time="2025-07-07T00:51:44.590153875Z" level=info msg="Start cni network conf syncer for default"
Jul 7 00:51:44.590825 containerd[1579]: time="2025-07-07T00:51:44.590161870Z" level=info msg="Start streaming server"
Jul 7 00:51:44.590383 systemd[1]: Started containerd.service - containerd container runtime.
Jul 7 00:51:44.596396 containerd[1579]: time="2025-07-07T00:51:44.596131005Z" level=info msg="containerd successfully booted in 0.079291s"
Jul 7 00:51:44.852508 tar[1574]: linux-amd64/LICENSE
Jul 7 00:51:44.852763 tar[1574]: linux-amd64/README.md
Jul 7 00:51:44.871975 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 7 00:51:45.876629 sshd[1634]: Accepted publickey for core from 172.24.4.1 port 36788 ssh2: RSA SHA256:T8A8R3rUntE7f376/e5VUyp2qo4ckx8uZO3F4EBHBjc
Jul 7 00:51:45.889533 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:51:45.927020 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 7 00:51:45.956829 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 7 00:51:45.961547 systemd-logind[1555]: New session 1 of user core.
Jul 7 00:51:46.019564 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 7 00:51:46.034024 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 7 00:51:46.042697 (systemd)[1660]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 7 00:51:46.181432 systemd[1660]: Queued start job for default target default.target.
Jul 7 00:51:46.182307 systemd[1660]: Created slice app.slice - User Application Slice.
Jul 7 00:51:46.182470 systemd[1660]: Reached target paths.target - Paths.
Jul 7 00:51:46.182593 systemd[1660]: Reached target timers.target - Timers.
Jul 7 00:51:46.192573 systemd[1660]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 7 00:51:46.200307 systemd[1660]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 7 00:51:46.200989 systemd[1660]: Reached target sockets.target - Sockets.
Jul 7 00:51:46.201008 systemd[1660]: Reached target basic.target - Basic System.
Jul 7 00:51:46.201064 systemd[1660]: Reached target default.target - Main User Target.
Jul 7 00:51:46.201108 systemd[1660]: Startup finished in 150ms.
Jul 7 00:51:46.204606 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 7 00:51:46.215697 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 7 00:51:46.397795 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:51:46.420393 (kubelet)[1679]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 00:51:46.674148 systemd[1]: Started sshd@1-172.24.4.161:22-172.24.4.1:36804.service - OpenSSH per-connection server daemon (172.24.4.1:36804).
Jul 7 00:51:48.103545 sshd[1681]: Accepted publickey for core from 172.24.4.1 port 36804 ssh2: RSA SHA256:T8A8R3rUntE7f376/e5VUyp2qo4ckx8uZO3F4EBHBjc
Jul 7 00:51:48.106078 sshd[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:51:48.117979 systemd-logind[1555]: New session 2 of user core.
Jul 7 00:51:48.132776 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 7 00:51:48.273121 kubelet[1679]: E0707 00:51:48.271112 1679 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 00:51:48.277925 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 00:51:48.279582 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 00:51:48.745242 sshd[1681]: pam_unix(sshd:session): session closed for user core
Jul 7 00:51:48.761178 systemd[1]: Started sshd@2-172.24.4.161:22-172.24.4.1:36818.service - OpenSSH per-connection server daemon (172.24.4.1:36818).
Jul 7 00:51:48.774981 systemd[1]: sshd@1-172.24.4.161:22-172.24.4.1:36804.service: Deactivated successfully.
Jul 7 00:51:48.780957 systemd[1]: session-2.scope: Deactivated successfully.
Jul 7 00:51:48.783709 systemd-logind[1555]: Session 2 logged out. Waiting for processes to exit.
Jul 7 00:51:48.789275 systemd-logind[1555]: Removed session 2.
Jul 7 00:51:49.549282 login[1643]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 7 00:51:49.569065 systemd-logind[1555]: New session 3 of user core.
Jul 7 00:51:49.577165 login[1645]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 7 00:51:49.578040 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 7 00:51:49.596033 systemd-logind[1555]: New session 4 of user core.
Jul 7 00:51:49.603721 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 7 00:51:50.024824 sshd[1695]: Accepted publickey for core from 172.24.4.1 port 36818 ssh2: RSA SHA256:T8A8R3rUntE7f376/e5VUyp2qo4ckx8uZO3F4EBHBjc
Jul 7 00:51:50.027958 sshd[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:51:50.037508 systemd-logind[1555]: New session 5 of user core.
Jul 7 00:51:50.050218 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 7 00:51:50.755566 sshd[1695]: pam_unix(sshd:session): session closed for user core
Jul 7 00:51:50.760433 coreos-metadata[1528]: Jul 07 00:51:50.759 WARN failed to locate config-drive, using the metadata service API instead
Jul 7 00:51:50.766385 systemd[1]: sshd@2-172.24.4.161:22-172.24.4.1:36818.service: Deactivated successfully.
Jul 7 00:51:50.768696 systemd-logind[1555]: Session 5 logged out. Waiting for processes to exit.
Jul 7 00:51:50.782876 systemd[1]: session-5.scope: Deactivated successfully.
Jul 7 00:51:50.789098 systemd-logind[1555]: Removed session 5.
Jul 7 00:51:50.818735 coreos-metadata[1528]: Jul 07 00:51:50.818 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jul 7 00:51:51.009230 coreos-metadata[1528]: Jul 07 00:51:51.008 INFO Fetch successful
Jul 7 00:51:51.009839 coreos-metadata[1528]: Jul 07 00:51:51.009 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jul 7 00:51:51.026953 coreos-metadata[1528]: Jul 07 00:51:51.026 INFO Fetch successful
Jul 7 00:51:51.026953 coreos-metadata[1528]: Jul 07 00:51:51.026 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jul 7 00:51:51.041041 coreos-metadata[1528]: Jul 07 00:51:51.040 INFO Fetch successful
Jul 7 00:51:51.041434 coreos-metadata[1528]: Jul 07 00:51:51.041 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jul 7 00:51:51.054575 coreos-metadata[1528]: Jul 07 00:51:51.054 INFO Fetch successful
Jul 7 00:51:51.054575 coreos-metadata[1528]: Jul 07 00:51:51.054 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jul 7 00:51:51.073040 coreos-metadata[1528]: Jul 07 00:51:51.072 INFO Fetch successful
Jul 7 00:51:51.073336 coreos-metadata[1528]: Jul 07 00:51:51.073 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jul 7 00:51:51.087568 coreos-metadata[1528]: Jul 07 00:51:51.087 INFO Fetch successful
Jul 7 00:51:51.151722 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 7 00:51:51.155021 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 7 00:51:51.472132 coreos-metadata[1613]: Jul 07 00:51:51.471 WARN failed to locate config-drive, using the metadata service API instead
Jul 7 00:51:51.515464 coreos-metadata[1613]: Jul 07 00:51:51.515 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jul 7 00:51:51.531217 coreos-metadata[1613]: Jul 07 00:51:51.531 INFO Fetch successful
Jul 7 00:51:51.531217 coreos-metadata[1613]: Jul 07 00:51:51.531 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jul 7 00:51:51.545018 coreos-metadata[1613]: Jul 07 00:51:51.544 INFO Fetch successful
Jul 7 00:51:51.551287 unknown[1613]: wrote ssh authorized keys file for user: core
Jul 7 00:51:51.599701 update-ssh-keys[1745]: Updated "/home/core/.ssh/authorized_keys"
Jul 7 00:51:51.604796 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 7 00:51:51.617540 systemd[1]: Finished sshkeys.service.
Jul 7 00:51:51.624052 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 7 00:51:51.624443 systemd[1]: Startup finished in 17.816s (kernel) + 12.529s (userspace) = 30.346s.
Jul 7 00:51:58.464475 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 7 00:51:58.482813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:51:58.971751 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:51:59.000128 (kubelet)[1764]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 00:51:59.122086 kubelet[1764]: E0707 00:51:59.121913 1764 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 00:51:59.128926 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 00:51:59.131147 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 00:52:00.771877 systemd[1]: Started sshd@3-172.24.4.161:22-172.24.4.1:53064.service - OpenSSH per-connection server daemon (172.24.4.1:53064).
Jul 7 00:52:02.099918 sshd[1773]: Accepted publickey for core from 172.24.4.1 port 53064 ssh2: RSA SHA256:T8A8R3rUntE7f376/e5VUyp2qo4ckx8uZO3F4EBHBjc
Jul 7 00:52:02.104607 sshd[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:52:02.121822 systemd-logind[1555]: New session 6 of user core.
Jul 7 00:52:02.129227 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 7 00:52:02.834208 sshd[1773]: pam_unix(sshd:session): session closed for user core
Jul 7 00:52:02.849229 systemd[1]: Started sshd@4-172.24.4.161:22-172.24.4.1:53068.service - OpenSSH per-connection server daemon (172.24.4.1:53068).
Jul 7 00:52:02.851182 systemd[1]: sshd@3-172.24.4.161:22-172.24.4.1:53064.service: Deactivated successfully.
Jul 7 00:52:02.864957 systemd[1]: session-6.scope: Deactivated successfully.
Jul 7 00:52:02.868610 systemd-logind[1555]: Session 6 logged out. Waiting for processes to exit.
Jul 7 00:52:02.872520 systemd-logind[1555]: Removed session 6.
Jul 7 00:52:04.365730 sshd[1778]: Accepted publickey for core from 172.24.4.1 port 53068 ssh2: RSA SHA256:T8A8R3rUntE7f376/e5VUyp2qo4ckx8uZO3F4EBHBjc
Jul 7 00:52:04.369716 sshd[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:52:04.383472 systemd-logind[1555]: New session 7 of user core.
Jul 7 00:52:04.394776 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 7 00:52:05.056994 sshd[1778]: pam_unix(sshd:session): session closed for user core
Jul 7 00:52:05.078702 systemd[1]: Started sshd@5-172.24.4.161:22-172.24.4.1:34978.service - OpenSSH per-connection server daemon (172.24.4.1:34978).
Jul 7 00:52:05.083295 systemd[1]: sshd@4-172.24.4.161:22-172.24.4.1:53068.service: Deactivated successfully.
Jul 7 00:52:05.089782 systemd[1]: session-7.scope: Deactivated successfully.
Jul 7 00:52:05.091710 systemd-logind[1555]: Session 7 logged out. Waiting for processes to exit.
Jul 7 00:52:05.098104 systemd-logind[1555]: Removed session 7.
Jul 7 00:52:06.403663 sshd[1787]: Accepted publickey for core from 172.24.4.1 port 34978 ssh2: RSA SHA256:T8A8R3rUntE7f376/e5VUyp2qo4ckx8uZO3F4EBHBjc
Jul 7 00:52:06.406759 sshd[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:52:06.419986 systemd-logind[1555]: New session 8 of user core.
Jul 7 00:52:06.428648 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 7 00:52:07.277114 sshd[1787]: pam_unix(sshd:session): session closed for user core
Jul 7 00:52:07.295207 systemd[1]: Started sshd@6-172.24.4.161:22-172.24.4.1:34980.service - OpenSSH per-connection server daemon (172.24.4.1:34980).
Jul 7 00:52:07.300532 systemd[1]: sshd@5-172.24.4.161:22-172.24.4.1:34978.service: Deactivated successfully.
Jul 7 00:52:07.304815 systemd[1]: session-8.scope: Deactivated successfully.
Jul 7 00:52:07.307759 systemd-logind[1555]: Session 8 logged out. Waiting for processes to exit.
Jul 7 00:52:07.313764 systemd-logind[1555]: Removed session 8.
Jul 7 00:52:08.902747 sshd[1795]: Accepted publickey for core from 172.24.4.1 port 34980 ssh2: RSA SHA256:T8A8R3rUntE7f376/e5VUyp2qo4ckx8uZO3F4EBHBjc
Jul 7 00:52:08.905888 sshd[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:52:08.916193 systemd-logind[1555]: New session 9 of user core.
Jul 7 00:52:08.929149 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 7 00:52:09.212589 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 7 00:52:09.226764 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:52:09.499294 sudo[1805]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 7 00:52:09.500332 sudo[1805]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 00:52:09.530736 sudo[1805]: pam_unix(sudo:session): session closed for user root
Jul 7 00:52:09.639585 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:52:09.644903 (kubelet)[1815]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 00:52:09.711793 kubelet[1815]: E0707 00:52:09.711694 1815 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 00:52:09.714041 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 00:52:09.714845 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 00:52:09.778388 sshd[1795]: pam_unix(sshd:session): session closed for user core
Jul 7 00:52:09.792989 systemd[1]: Started sshd@7-172.24.4.161:22-172.24.4.1:34992.service - OpenSSH per-connection server daemon (172.24.4.1:34992).
Jul 7 00:52:09.796959 systemd[1]: sshd@6-172.24.4.161:22-172.24.4.1:34980.service: Deactivated successfully.
Jul 7 00:52:09.806873 systemd-logind[1555]: Session 9 logged out. Waiting for processes to exit.
Jul 7 00:52:09.808084 systemd[1]: session-9.scope: Deactivated successfully.
Jul 7 00:52:09.814065 systemd-logind[1555]: Removed session 9.
Jul 7 00:52:10.915301 sshd[1824]: Accepted publickey for core from 172.24.4.1 port 34992 ssh2: RSA SHA256:T8A8R3rUntE7f376/e5VUyp2qo4ckx8uZO3F4EBHBjc
Jul 7 00:52:10.918739 sshd[1824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:52:10.930012 systemd-logind[1555]: New session 10 of user core.
Jul 7 00:52:10.940471 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 7 00:52:11.370769 sudo[1832]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 7 00:52:11.371524 sudo[1832]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 00:52:11.380586 sudo[1832]: pam_unix(sudo:session): session closed for user root
Jul 7 00:52:11.393610 sudo[1831]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 7 00:52:11.394310 sudo[1831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 00:52:11.423978 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 7 00:52:11.440035 auditctl[1835]: No rules
Jul 7 00:52:11.440976 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 00:52:11.441617 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 7 00:52:11.453248 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 7 00:52:11.546908 augenrules[1854]: No rules
Jul 7 00:52:11.550699 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 7 00:52:11.556661 sudo[1831]: pam_unix(sudo:session): session closed for user root
Jul 7 00:52:11.822764 sshd[1824]: pam_unix(sshd:session): session closed for user core
Jul 7 00:52:11.836031 systemd[1]: Started sshd@8-172.24.4.161:22-172.24.4.1:35006.service - OpenSSH per-connection server daemon (172.24.4.1:35006).
Jul 7 00:52:11.837212 systemd[1]: sshd@7-172.24.4.161:22-172.24.4.1:34992.service: Deactivated successfully.
Jul 7 00:52:11.846058 systemd[1]: session-10.scope: Deactivated successfully.
Jul 7 00:52:11.846483 systemd-logind[1555]: Session 10 logged out. Waiting for processes to exit.
Jul 7 00:52:11.856606 systemd-logind[1555]: Removed session 10.
Jul 7 00:52:13.317232 sshd[1860]: Accepted publickey for core from 172.24.4.1 port 35006 ssh2: RSA SHA256:T8A8R3rUntE7f376/e5VUyp2qo4ckx8uZO3F4EBHBjc
Jul 7 00:52:13.320305 sshd[1860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:52:13.334986 systemd-logind[1555]: New session 11 of user core.
Jul 7 00:52:13.344182 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 7 00:52:13.746204 sudo[1867]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 7 00:52:13.746944 sudo[1867]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 00:52:14.596806 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 7 00:52:14.599995 (dockerd)[1884]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 7 00:52:15.224098 dockerd[1884]: time="2025-07-07T00:52:15.223761408Z" level=info msg="Starting up"
Jul 7 00:52:15.451764 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport478290152-merged.mount: Deactivated successfully.
Jul 7 00:52:15.665485 systemd[1]: var-lib-docker-metacopy\x2dcheck33944034-merged.mount: Deactivated successfully.
Jul 7 00:52:15.719163 dockerd[1884]: time="2025-07-07T00:52:15.718585250Z" level=info msg="Loading containers: start."
Jul 7 00:52:15.936455 kernel: Initializing XFRM netlink socket
Jul 7 00:52:16.046934 systemd-networkd[1200]: docker0: Link UP
Jul 7 00:52:16.080512 dockerd[1884]: time="2025-07-07T00:52:16.079978242Z" level=info msg="Loading containers: done."
Jul 7 00:52:16.109518 dockerd[1884]: time="2025-07-07T00:52:16.109403760Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 7 00:52:16.109928 dockerd[1884]: time="2025-07-07T00:52:16.109741115Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jul 7 00:52:16.110092 dockerd[1884]: time="2025-07-07T00:52:16.110037334Z" level=info msg="Daemon has completed initialization"
Jul 7 00:52:16.237861 dockerd[1884]: time="2025-07-07T00:52:16.237220230Z" level=info msg="API listen on /run/docker.sock"
Jul 7 00:52:16.239698 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 7 00:52:16.446086 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1873464912-merged.mount: Deactivated successfully.
Jul 7 00:52:17.900921 containerd[1579]: time="2025-07-07T00:52:17.900685635Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 7 00:52:18.676309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2797735548.mount: Deactivated successfully.
Jul 7 00:52:19.962506 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 7 00:52:19.971743 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:52:20.136778 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:52:20.142570 (kubelet)[2090]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 00:52:20.204643 kubelet[2090]: E0707 00:52:20.204084 2090 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 00:52:20.207601 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 00:52:20.207802 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 00:52:20.635785 containerd[1579]: time="2025-07-07T00:52:20.635656237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:52:20.637202 containerd[1579]: time="2025-07-07T00:52:20.637144237Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077752"
Jul 7 00:52:20.638299 containerd[1579]: time="2025-07-07T00:52:20.638219432Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:52:20.641880 containerd[1579]: time="2025-07-07T00:52:20.641854405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:52:20.644241 containerd[1579]: time="2025-07-07T00:52:20.643680874Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 2.742732132s"
Jul 7 00:52:20.644241 containerd[1579]: time="2025-07-07T00:52:20.643809967Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\""
Jul 7 00:52:20.645517 containerd[1579]: time="2025-07-07T00:52:20.645430908Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 7 00:52:22.604644 containerd[1579]: time="2025-07-07T00:52:22.604218437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:52:22.606392 containerd[1579]: time="2025-07-07T00:52:22.606227969Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713302"
Jul 7 00:52:22.607811 containerd[1579]: time="2025-07-07T00:52:22.607688828Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:52:22.613133 containerd[1579]: time="2025-07-07T00:52:22.613080804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:52:22.614631 containerd[1579]: time="2025-07-07T00:52:22.614321979Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.968854102s"
Jul 7 00:52:22.614631 containerd[1579]: time="2025-07-07T00:52:22.614407140Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\""
Jul 7 00:52:22.615974 containerd[1579]: time="2025-07-07T00:52:22.615688992Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 7 00:52:24.282291 containerd[1579]: time="2025-07-07T00:52:24.282223920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:52:24.283749 containerd[1579]: time="2025-07-07T00:52:24.283685859Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783679"
Jul 7 00:52:24.284729 containerd[1579]: time="2025-07-07T00:52:24.284650685Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:52:24.288537 containerd[1579]: time="2025-07-07T00:52:24.288481450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:52:24.290076 containerd[1579]: time="2025-07-07T00:52:24.289895780Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.67417079s"
Jul 7 00:52:24.290076 containerd[1579]: time="2025-07-07T00:52:24.289944793Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\""
Jul 7 00:52:24.291042 containerd[1579]: time="2025-07-07T00:52:24.290877046Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 7 00:52:25.676100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3203957346.mount: Deactivated successfully.
Jul 7 00:52:26.239682 containerd[1579]: time="2025-07-07T00:52:26.239593748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:52:26.241077 containerd[1579]: time="2025-07-07T00:52:26.240852535Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383951"
Jul 7 00:52:26.242028 containerd[1579]: time="2025-07-07T00:52:26.241951280Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:52:26.244556 containerd[1579]: time="2025-07-07T00:52:26.244507217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:52:26.246096 containerd[1579]: time="2025-07-07T00:52:26.245310167Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.954400299s"
Jul 7 00:52:26.246096 containerd[1579]: time="2025-07-07T00:52:26.245396178Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\""
Jul 7 00:52:26.246780 containerd[1579]: time="2025-07-07T00:52:26.246610812Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 7 00:52:26.891717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3192060762.mount: Deactivated successfully.
Jul 7 00:52:28.311296 containerd[1579]: time="2025-07-07T00:52:28.311092587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:52:28.313897 containerd[1579]: time="2025-07-07T00:52:28.313567238Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
Jul 7 00:52:28.315277 containerd[1579]: time="2025-07-07T00:52:28.315231767Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:52:28.318729 containerd[1579]: time="2025-07-07T00:52:28.318651145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:52:28.320175 containerd[1579]: time="2025-07-07T00:52:28.319978320Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.073325147s"
Jul 7 00:52:28.320175 containerd[1579]: time="2025-07-07T00:52:28.320031309Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jul 7 00:52:28.323797 containerd[1579]: time="2025-07-07T00:52:28.323604916Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 7 00:52:28.897897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount506676113.mount: Deactivated successfully.
Jul 7 00:52:28.919495 containerd[1579]: time="2025-07-07T00:52:28.919301541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:52:28.921629 containerd[1579]: time="2025-07-07T00:52:28.921528758Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Jul 7 00:52:28.923463 containerd[1579]: time="2025-07-07T00:52:28.923251396Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:52:28.929453 containerd[1579]: time="2025-07-07T00:52:28.929246966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:52:28.931795 containerd[1579]: time="2025-07-07T00:52:28.931148681Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 607.507927ms"
Jul 7 00:52:28.931795 containerd[1579]: time="2025-07-07T00:52:28.931220305Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 7 00:52:28.932397 containerd[1579]: time="2025-07-07T00:52:28.932326635Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 7 00:52:29.295509 update_engine[1562]: I20250707 00:52:29.294185 1562 update_attempter.cc:509] Updating boot flags...
Jul 7 00:52:29.394693 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2177)
Jul 7 00:52:29.502784 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2181)
Jul 7 00:52:29.680312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1949370884.mount: Deactivated successfully.
Jul 7 00:52:30.213987 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 7 00:52:30.235530 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:52:31.051694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:52:31.059778 (kubelet)[2224]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 00:52:31.136814 kubelet[2224]: E0707 00:52:31.136689 2224 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 00:52:31.140657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 00:52:31.140869 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 00:52:33.600454 containerd[1579]: time="2025-07-07T00:52:33.600291189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:52:33.602264 containerd[1579]: time="2025-07-07T00:52:33.602207909Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021"
Jul 7 00:52:33.603133 containerd[1579]: time="2025-07-07T00:52:33.603058137Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:52:33.607212 containerd[1579]: time="2025-07-07T00:52:33.607153210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:52:33.609070 containerd[1579]: time="2025-07-07T00:52:33.608673085Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.676260207s"
Jul 7 00:52:33.609070 containerd[1579]: time="2025-07-07T00:52:33.608722708Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jul 7 00:52:36.953646 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:52:36.973714 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:52:37.026252 systemd[1]: Reloading requested from client PID 2286 ('systemctl') (unit session-11.scope)...
Jul 7 00:52:37.026302 systemd[1]: Reloading...
Jul 7 00:52:37.177428 zram_generator::config[2325]: No configuration found.
Jul 7 00:52:37.367925 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 00:52:37.458258 systemd[1]: Reloading finished in 431 ms.
Jul 7 00:52:37.541743 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:52:37.567266 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:52:37.571197 systemd[1]: kubelet.service: Deactivated successfully.
Jul 7 00:52:37.572906 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:52:37.584657 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 00:52:37.751711 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 00:52:37.774230 (kubelet)[2408]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 7 00:52:37.959578 kubelet[2408]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 00:52:37.959578 kubelet[2408]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 7 00:52:37.959578 kubelet[2408]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 00:52:37.960336 kubelet[2408]: I0707 00:52:37.960027 2408 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 7 00:52:38.347421 kubelet[2408]: I0707 00:52:38.347194 2408 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 7 00:52:38.347421 kubelet[2408]: I0707 00:52:38.347304 2408 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 7 00:52:38.349003 kubelet[2408]: I0707 00:52:38.348949 2408 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 7 00:52:38.385397 kubelet[2408]: I0707 00:52:38.384261 2408 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 7 00:52:38.388566 kubelet[2408]: E0707 00:52:38.388152 2408 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.161:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.161:6443: connect: connection refused" logger="UnhandledError"
Jul 7 00:52:38.407477 kubelet[2408]: E0707 00:52:38.407425 2408 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 7 00:52:38.407630 kubelet[2408]: I0707 00:52:38.407616 2408 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 7 00:52:38.414558 kubelet[2408]: I0707 00:52:38.414534 2408 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 7 00:52:38.414985 kubelet[2408]: I0707 00:52:38.414970 2408 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 7 00:52:38.415897 kubelet[2408]: I0707 00:52:38.415194 2408 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 7 00:52:38.415897 kubelet[2408]: I0707 00:52:38.415232 2408 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-4-7-8dfaddf5bb.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Jul 7 00:52:38.415897 kubelet[2408]: I0707 00:52:38.415538 2408 topology_manager.go:138] "Creating topology manager with none policy"
Jul 7 00:52:38.415897 kubelet[2408]: I0707 00:52:38.415553 2408 container_manager_linux.go:300] "Creating device plugin manager"
Jul 7 00:52:38.416278 kubelet[2408]: I0707 00:52:38.415744 2408 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 00:52:38.420003 kubelet[2408]: I0707 00:52:38.419630 2408 kubelet.go:408] "Attempting to sync node with API server"
Jul 7 00:52:38.420003 kubelet[2408]: I0707 00:52:38.419671 2408 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 7 00:52:38.420003 kubelet[2408]: I0707 00:52:38.419739 2408 kubelet.go:314] "Adding apiserver pod source"
Jul 7 00:52:38.420003 kubelet[2408]: I0707 00:52:38.419790 2408 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 7 00:52:38.425804 kubelet[2408]: W0707 00:52:38.425590 2408 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-7-8dfaddf5bb.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.161:6443: connect: connection refused
Jul 7 00:52:38.425882 kubelet[2408]: E0707 00:52:38.425839 2408 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-7-8dfaddf5bb.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.161:6443: connect: connection refused" logger="UnhandledError"
Jul 7 00:52:38.428627 kubelet[2408]: W0707 00:52:38.427518 2408 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.161:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.161:6443: connect: connection refused
Jul 7 00:52:38.428627 kubelet[2408]: E0707 00:52:38.427582 2408 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.161:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.161:6443: connect: connection refused" logger="UnhandledError"
Jul 7 00:52:38.428627 kubelet[2408]: I0707 00:52:38.427690 2408 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 7 00:52:38.428627 kubelet[2408]: I0707 00:52:38.428216 2408 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 7 00:52:38.428627 kubelet[2408]: W0707 00:52:38.428377 2408 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 7 00:52:38.432689 kubelet[2408]: I0707 00:52:38.432652 2408 server.go:1274] "Started kubelet"
Jul 7 00:52:38.435822 kubelet[2408]: I0707 00:52:38.435782 2408 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 7 00:52:38.438146 kubelet[2408]: I0707 00:52:38.438126 2408 server.go:449] "Adding debug handlers to kubelet server"
Jul 7 00:52:38.442155 kubelet[2408]: I0707 00:52:38.441871 2408 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 7 00:52:38.444490 kubelet[2408]: E0707 00:52:38.442436 2408 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.161:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.161:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-4-7-8dfaddf5bb.novalocal.184fd1e4464c904a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-4-7-8dfaddf5bb.novalocal,UID:ci-4081-3-4-7-8dfaddf5bb.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-4-7-8dfaddf5bb.novalocal,},FirstTimestamp:2025-07-07 00:52:38.432583754 +0000 UTC m=+0.635497555,LastTimestamp:2025-07-07 00:52:38.432583754 +0000 UTC m=+0.635497555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-4-7-8dfaddf5bb.novalocal,}"
Jul 7 00:52:38.444789 kubelet[2408]: I0707 00:52:38.444757 2408 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 7 00:52:38.448537 kubelet[2408]: I0707 00:52:38.448519 2408 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 7 00:52:38.457224 kubelet[2408]: I0707 00:52:38.456474 2408 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 7 00:52:38.457224 kubelet[2408]: I0707 00:52:38.448773 2408 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 7 00:52:38.457224 kubelet[2408]: I0707 00:52:38.456926 2408 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 7 00:52:38.457224 kubelet[2408]: I0707 00:52:38.457168 2408 reconciler.go:26] "Reconciler: start to sync state"
Jul 7 00:52:38.458502 kubelet[2408]: W0707 00:52:38.458450 2408 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.161:6443: connect: connection refused
Jul 7 00:52:38.458603 kubelet[2408]: E0707 00:52:38.458513 2408 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.161:6443: connect: connection refused" logger="UnhandledError"
Jul 7 00:52:38.459900 kubelet[2408]: E0707 00:52:38.459842 2408 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-4-7-8dfaddf5bb.novalocal\" not found"
Jul 7 00:52:38.460523 kubelet[2408]: E0707 00:52:38.460481 2408 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-7-8dfaddf5bb.novalocal?timeout=10s\": dial tcp 172.24.4.161:6443: connect: connection refused" interval="200ms"
Jul 7 00:52:38.460800 kubelet[2408]: E0707 00:52:38.460761 2408 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 7 00:52:38.461464 kubelet[2408]: I0707 00:52:38.461441 2408 factory.go:221] Registration of the systemd container factory successfully
Jul 7 00:52:38.461594 kubelet[2408]: I0707 00:52:38.461567 2408 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 7 00:52:38.464417 kubelet[2408]: I0707 00:52:38.464387 2408 factory.go:221] Registration of the containerd container factory successfully
Jul 7 00:52:38.477061 kubelet[2408]: I0707 00:52:38.476922 2408 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 7 00:52:38.480460 kubelet[2408]: I0707 00:52:38.480437 2408 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Jul 7 00:52:38.488381 kubelet[2408]: I0707 00:52:38.488056 2408 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 00:52:38.488381 kubelet[2408]: I0707 00:52:38.488236 2408 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 00:52:38.488561 kubelet[2408]: E0707 00:52:38.488416 2408 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:52:38.493654 kubelet[2408]: W0707 00:52:38.493556 2408 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.161:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.161:6443: connect: connection refused Jul 7 00:52:38.493960 kubelet[2408]: E0707 00:52:38.493679 2408 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.161:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.161:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:52:38.509479 kubelet[2408]: I0707 00:52:38.509441 2408 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 00:52:38.509479 kubelet[2408]: I0707 00:52:38.509470 2408 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 00:52:38.509644 kubelet[2408]: I0707 00:52:38.509521 2408 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:52:38.516817 kubelet[2408]: I0707 00:52:38.516779 2408 policy_none.go:49] "None policy: Start" Jul 7 00:52:38.517753 kubelet[2408]: I0707 00:52:38.517730 2408 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 00:52:38.517809 kubelet[2408]: I0707 00:52:38.517783 2408 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:52:38.527367 kubelet[2408]: I0707 00:52:38.526993 2408 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 00:52:38.527367 kubelet[2408]: I0707 00:52:38.527242 2408 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:52:38.527367 kubelet[2408]: I0707 00:52:38.527273 2408 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:52:38.529003 kubelet[2408]: I0707 00:52:38.528986 2408 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:52:38.531142 kubelet[2408]: E0707 00:52:38.531124 2408 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-4-7-8dfaddf5bb.novalocal\" not found" Jul 7 00:52:38.636675 kubelet[2408]: I0707 00:52:38.636041 2408 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:52:38.639281 kubelet[2408]: E0707 00:52:38.639122 2408 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.161:6443/api/v1/nodes\": dial tcp 172.24.4.161:6443: connect: connection refused" node="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:52:38.661926 kubelet[2408]: E0707 00:52:38.661802 2408 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-7-8dfaddf5bb.novalocal?timeout=10s\": dial tcp 172.24.4.161:6443: connect: connection refused" interval="400ms" Jul 7 
00:52:38.759125 kubelet[2408]: I0707 00:52:38.758460 2408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1565a39f14f48843a73850a6270528b-ca-certs\") pod \"kube-apiserver-ci-4081-3-4-7-8dfaddf5bb.novalocal\" (UID: \"f1565a39f14f48843a73850a6270528b\") " pod="kube-system/kube-apiserver-ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:52:38.759125 kubelet[2408]: I0707 00:52:38.758557 2408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1565a39f14f48843a73850a6270528b-k8s-certs\") pod \"kube-apiserver-ci-4081-3-4-7-8dfaddf5bb.novalocal\" (UID: \"f1565a39f14f48843a73850a6270528b\") " pod="kube-system/kube-apiserver-ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:52:38.759125 kubelet[2408]: I0707 00:52:38.758618 2408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71af1a208fd8b2e8ada0b973b3974e53-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-4-7-8dfaddf5bb.novalocal\" (UID: \"71af1a208fd8b2e8ada0b973b3974e53\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:52:38.759125 kubelet[2408]: I0707 00:52:38.758666 2408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71af1a208fd8b2e8ada0b973b3974e53-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-4-7-8dfaddf5bb.novalocal\" (UID: \"71af1a208fd8b2e8ada0b973b3974e53\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:52:38.759125 kubelet[2408]: I0707 00:52:38.758715 2408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5d863b1772d064b34bcab50024f73659-kubeconfig\") pod \"kube-scheduler-ci-4081-3-4-7-8dfaddf5bb.novalocal\" (UID: \"5d863b1772d064b34bcab50024f73659\") " pod="kube-system/kube-scheduler-ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:52:38.759811 kubelet[2408]: I0707 00:52:38.758758 2408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1565a39f14f48843a73850a6270528b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-4-7-8dfaddf5bb.novalocal\" (UID: \"f1565a39f14f48843a73850a6270528b\") " pod="kube-system/kube-apiserver-ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:52:38.759811 kubelet[2408]: I0707 00:52:38.758804 2408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71af1a208fd8b2e8ada0b973b3974e53-ca-certs\") pod \"kube-controller-manager-ci-4081-3-4-7-8dfaddf5bb.novalocal\" (UID: \"71af1a208fd8b2e8ada0b973b3974e53\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:52:38.759811 kubelet[2408]: I0707 00:52:38.758846 2408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71af1a208fd8b2e8ada0b973b3974e53-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-4-7-8dfaddf5bb.novalocal\" (UID: \"71af1a208fd8b2e8ada0b973b3974e53\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 
00:52:38.759811 kubelet[2408]: I0707 00:52:38.758893 2408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71af1a208fd8b2e8ada0b973b3974e53-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-4-7-8dfaddf5bb.novalocal\" (UID: \"71af1a208fd8b2e8ada0b973b3974e53\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:52:38.842738 kubelet[2408]: I0707 00:52:38.842563 2408 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:52:38.843645 kubelet[2408]: E0707 00:52:38.843546 2408 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.161:6443/api/v1/nodes\": dial tcp 172.24.4.161:6443: connect: connection refused" node="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:52:38.920057 containerd[1579]: time="2025-07-07T00:52:38.919527035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-4-7-8dfaddf5bb.novalocal,Uid:71af1a208fd8b2e8ada0b973b3974e53,Namespace:kube-system,Attempt:0,}" Jul 7 00:52:38.924134 containerd[1579]: time="2025-07-07T00:52:38.922929543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-4-7-8dfaddf5bb.novalocal,Uid:f1565a39f14f48843a73850a6270528b,Namespace:kube-system,Attempt:0,}" Jul 7 00:52:38.924134 containerd[1579]: time="2025-07-07T00:52:38.923626983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-4-7-8dfaddf5bb.novalocal,Uid:5d863b1772d064b34bcab50024f73659,Namespace:kube-system,Attempt:0,}" Jul 7 00:52:39.063033 kubelet[2408]: E0707 00:52:39.062926 2408 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-7-8dfaddf5bb.novalocal?timeout=10s\": dial tcp 172.24.4.161:6443: connect: connection refused" interval="800ms" Jul 7 00:52:39.247327 kubelet[2408]: I0707 00:52:39.247247 2408 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:52:39.248393 kubelet[2408]: E0707 00:52:39.248260 2408 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.161:6443/api/v1/nodes\": dial tcp 172.24.4.161:6443: connect: connection refused" node="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:52:39.369294 kubelet[2408]: W0707 00:52:39.368887 2408 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.161:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.161:6443: connect: connection refused Jul 7 00:52:39.369294 kubelet[2408]: E0707 00:52:39.369036 2408 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.161:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.161:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:52:39.374998 kubelet[2408]: W0707 00:52:39.374846 2408 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-7-8dfaddf5bb.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.161:6443: 
connect: connection refused Jul 7 00:52:39.374998 kubelet[2408]: E0707 00:52:39.374988 2408 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-7-8dfaddf5bb.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.161:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:52:39.558170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4090822107.mount: Deactivated successfully. Jul 7 00:52:39.576080 containerd[1579]: time="2025-07-07T00:52:39.575914984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:52:39.583466 containerd[1579]: time="2025-07-07T00:52:39.583270895Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 00:52:39.594910 containerd[1579]: time="2025-07-07T00:52:39.588108419Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:52:39.602277 containerd[1579]: time="2025-07-07T00:52:39.602159711Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jul 7 00:52:39.603910 containerd[1579]: time="2025-07-07T00:52:39.603813926Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:52:39.623841 containerd[1579]: time="2025-07-07T00:52:39.623662264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:52:39.627421 containerd[1579]: time="2025-07-07T00:52:39.626736225Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 00:52:39.627421 containerd[1579]: time="2025-07-07T00:52:39.626867291Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 703.082651ms" Jul 7 00:52:39.635103 containerd[1579]: time="2025-07-07T00:52:39.635022543Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:52:39.640153 containerd[1579]: time="2025-07-07T00:52:39.639695778Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 718.432143ms" Jul 7 00:52:39.680597 containerd[1579]: time="2025-07-07T00:52:39.680451039Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 757.272958ms" Jul 7 00:52:39.864822 kubelet[2408]: E0707 00:52:39.864606 2408 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-7-8dfaddf5bb.novalocal?timeout=10s\": dial tcp 172.24.4.161:6443: connect: connection refused" interval="1.6s" Jul 7 00:52:39.917139 containerd[1579]: time="2025-07-07T00:52:39.916732495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:52:39.917139 containerd[1579]: time="2025-07-07T00:52:39.916795714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:52:39.917139 containerd[1579]: time="2025-07-07T00:52:39.916821453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:52:39.917139 containerd[1579]: time="2025-07-07T00:52:39.916965112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:52:39.924781 containerd[1579]: time="2025-07-07T00:52:39.924639211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:52:39.925220 containerd[1579]: time="2025-07-07T00:52:39.924749808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:52:39.925220 containerd[1579]: time="2025-07-07T00:52:39.924803789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:52:39.925220 containerd[1579]: time="2025-07-07T00:52:39.924985301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:52:39.928214 containerd[1579]: time="2025-07-07T00:52:39.922288577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:52:39.928333 containerd[1579]: time="2025-07-07T00:52:39.928206198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:52:39.928333 containerd[1579]: time="2025-07-07T00:52:39.928226315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:52:39.928483 containerd[1579]: time="2025-07-07T00:52:39.928330432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:52:40.002835 kubelet[2408]: W0707 00:52:40.002753 2408 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.161:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.161:6443: connect: connection refused Jul 7 00:52:40.005423 kubelet[2408]: E0707 00:52:40.005395 2408 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.161:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.161:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:52:40.021458 kubelet[2408]: W0707 00:52:40.021283 2408 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.161:6443: connect: connection refused Jul 7 00:52:40.021710 kubelet[2408]: E0707 00:52:40.021683 2408 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.161:6443: connect: connection refused" logger="UnhandledError" Jul 7 00:52:40.035844 containerd[1579]: time="2025-07-07T00:52:40.035785773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-4-7-8dfaddf5bb.novalocal,Uid:71af1a208fd8b2e8ada0b973b3974e53,Namespace:kube-system,Attempt:0,} returns sandbox id \"611837bb167ad87c5feceb9e4a059297f2ea5d18ad1d20b462ddd0cc47209e8b\"" Jul 7 00:52:40.044449 containerd[1579]: time="2025-07-07T00:52:40.044206572Z" level=info msg="CreateContainer within sandbox \"611837bb167ad87c5feceb9e4a059297f2ea5d18ad1d20b462ddd0cc47209e8b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 00:52:40.057467 containerd[1579]: time="2025-07-07T00:52:40.057369545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-4-7-8dfaddf5bb.novalocal,Uid:f1565a39f14f48843a73850a6270528b,Namespace:kube-system,Attempt:0,} returns sandbox id \"011eaed954154d2cec5c74fb466243b2b72b5f26b91998b28bd2292e21746fc4\"" Jul 7 00:52:40.057947 kubelet[2408]: I0707 00:52:40.057715 2408 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:52:40.058162 kubelet[2408]: E0707 00:52:40.058126 2408 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.161:6443/api/v1/nodes\": dial tcp 172.24.4.161:6443: connect: connection refused" node="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:52:40.062662 containerd[1579]: time="2025-07-07T00:52:40.062436398Z" level=info msg="CreateContainer within sandbox \"011eaed954154d2cec5c74fb466243b2b72b5f26b91998b28bd2292e21746fc4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 00:52:40.072159 containerd[1579]: time="2025-07-07T00:52:40.071994682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-4-7-8dfaddf5bb.novalocal,Uid:5d863b1772d064b34bcab50024f73659,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ac608acb7045d5d1ec660495a9f6a294ed7d1fcac139b0cf8de0f0968e003d1\"" Jul 7 00:52:40.076815 containerd[1579]: 
time="2025-07-07T00:52:40.076756714Z" level=info msg="CreateContainer within sandbox \"1ac608acb7045d5d1ec660495a9f6a294ed7d1fcac139b0cf8de0f0968e003d1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 00:52:40.091828 containerd[1579]: time="2025-07-07T00:52:40.091603938Z" level=info msg="CreateContainer within sandbox \"611837bb167ad87c5feceb9e4a059297f2ea5d18ad1d20b462ddd0cc47209e8b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e8b782a70d84eed6d61fc9c3b22fa7f819696a2605f24c2a9b683e776b37eeb7\"" Jul 7 00:52:40.093020 containerd[1579]: time="2025-07-07T00:52:40.092732387Z" level=info msg="StartContainer for \"e8b782a70d84eed6d61fc9c3b22fa7f819696a2605f24c2a9b683e776b37eeb7\"" Jul 7 00:52:40.115495 containerd[1579]: time="2025-07-07T00:52:40.115301760Z" level=info msg="CreateContainer within sandbox \"011eaed954154d2cec5c74fb466243b2b72b5f26b91998b28bd2292e21746fc4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b1224f640510f5f1db1fdf860f473db041aa1b8e61c59a813b9936767f8b3bc1\"" Jul 7 00:52:40.116594 containerd[1579]: time="2025-07-07T00:52:40.116277391Z" level=info msg="StartContainer for \"b1224f640510f5f1db1fdf860f473db041aa1b8e61c59a813b9936767f8b3bc1\"" Jul 7 00:52:40.128061 containerd[1579]: time="2025-07-07T00:52:40.127878541Z" level=info msg="CreateContainer within sandbox \"1ac608acb7045d5d1ec660495a9f6a294ed7d1fcac139b0cf8de0f0968e003d1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0f2178ad7463591f9bb2fd6b0418206805260d3cd33cf0f37d893c915a9980b6\"" Jul 7 00:52:40.129579 containerd[1579]: time="2025-07-07T00:52:40.129418813Z" level=info msg="StartContainer for \"0f2178ad7463591f9bb2fd6b0418206805260d3cd33cf0f37d893c915a9980b6\"" Jul 7 00:52:40.193158 containerd[1579]: time="2025-07-07T00:52:40.193050628Z" level=info msg="StartContainer for \"e8b782a70d84eed6d61fc9c3b22fa7f819696a2605f24c2a9b683e776b37eeb7\" returns successfully" Jul 7 00:52:40.303142 containerd[1579]: time="2025-07-07T00:52:40.302819403Z" level=info msg="StartContainer for \"b1224f640510f5f1db1fdf860f473db041aa1b8e61c59a813b9936767f8b3bc1\" returns successfully" Jul 7 00:52:40.328610 containerd[1579]: time="2025-07-07T00:52:40.328530846Z" level=info msg="StartContainer for \"0f2178ad7463591f9bb2fd6b0418206805260d3cd33cf0f37d893c915a9980b6\" returns successfully" Jul 7 00:52:41.664716 kubelet[2408]: I0707 00:52:41.661667 2408 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:52:42.366223 kubelet[2408]: I0707 00:52:42.365742 2408 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:52:42.429236 kubelet[2408]: I0707 00:52:42.429171 2408 apiserver.go:52] "Watching apiserver" Jul 7 00:52:42.457185 kubelet[2408]: I0707 00:52:42.457127 2408 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 00:52:43.766942 kubelet[2408]: W0707 00:52:43.766811 2408 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 00:52:45.167309 systemd[1]: Reloading requested from client PID 2680 ('systemctl') (unit session-11.scope)... Jul 7 00:52:45.169489 systemd[1]: Reloading... Jul 7 00:52:45.283520 zram_generator::config[2715]: No configuration found. 
Jul 7 00:52:45.472098 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:52:45.584495 systemd[1]: Reloading finished in 413 ms. Jul 7 00:52:45.640767 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:52:45.642296 kubelet[2408]: I0707 00:52:45.641873 2408 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:52:45.671105 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 00:52:45.672676 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:52:45.679249 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:52:46.149561 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:52:46.164057 (kubelet)[2793]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:52:46.294045 kubelet[2793]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:52:46.294045 kubelet[2793]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 00:52:46.294045 kubelet[2793]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:52:46.294968 kubelet[2793]: I0707 00:52:46.294301 2793 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:52:46.314957 kubelet[2793]: I0707 00:52:46.314891 2793 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 00:52:46.315501 kubelet[2793]: I0707 00:52:46.315150 2793 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:52:46.315759 kubelet[2793]: I0707 00:52:46.315743 2793 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 00:52:46.317737 kubelet[2793]: I0707 00:52:46.317718 2793 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 7 00:52:46.323985 kubelet[2793]: I0707 00:52:46.323957 2793 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:52:46.339380 kubelet[2793]: E0707 00:52:46.339320 2793 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 00:52:46.340280 kubelet[2793]: I0707 00:52:46.339646 2793 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 00:52:46.345124 kubelet[2793]: I0707 00:52:46.345105 2793 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 00:52:46.345836 kubelet[2793]: I0707 00:52:46.345736 2793 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 00:52:46.346051 kubelet[2793]: I0707 00:52:46.345903 2793 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:52:46.346225 kubelet[2793]: I0707 00:52:46.345937 2793 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-4-7-8dfaddf5bb.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 7 00:52:46.346566 kubelet[2793]: I0707 00:52:46.346248 2793 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:52:46.346566 kubelet[2793]: I0707 00:52:46.346262 2793 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 00:52:46.346566 kubelet[2793]: I0707 00:52:46.346374 2793 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:52:46.347443 kubelet[2793]: I0707 00:52:46.347416 2793 kubelet.go:408] "Attempting to sync node with API server" Jul 7 00:52:46.347499 kubelet[2793]: I0707 00:52:46.347451 2793 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:52:46.347552 kubelet[2793]: I0707 00:52:46.347523 2793 kubelet.go:314] "Adding apiserver pod source" Jul 7 00:52:46.347599 kubelet[2793]: I0707 00:52:46.347580 2793 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:52:46.363809 kubelet[2793]: I0707 00:52:46.363523 2793 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 00:52:46.364655 kubelet[2793]: I0707 00:52:46.364120 2793 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 00:52:46.364655 kubelet[2793]: I0707 00:52:46.364507 2793 apiserver.go:52] "Watching apiserver" Jul 7 00:52:46.367319 kubelet[2793]: I0707 00:52:46.366491 2793 server.go:1274] "Started kubelet" Jul 7 00:52:46.367407 kubelet[2793]: I0707 00:52:46.367375 
2793 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:52:46.368909 kubelet[2793]: I0707 00:52:46.368868 2793 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:52:46.371719 kubelet[2793]: I0707 00:52:46.371692 2793 server.go:449] "Adding debug handlers to kubelet server" Jul 7 00:52:46.379660 kubelet[2793]: I0707 00:52:46.377706 2793 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:52:46.383629 kubelet[2793]: I0707 00:52:46.383475 2793 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:52:46.384784 kubelet[2793]: I0707 00:52:46.384756 2793 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:52:46.385148 kubelet[2793]: I0707 00:52:46.385110 2793 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 00:52:46.388233 kubelet[2793]: I0707 00:52:46.388212 2793 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 00:52:46.388597 kubelet[2793]: I0707 00:52:46.388583 2793 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:52:46.398743 kubelet[2793]: I0707 00:52:46.398127 2793 factory.go:221] Registration of the systemd container factory successfully Jul 7 00:52:46.398743 kubelet[2793]: I0707 00:52:46.398261 2793 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:52:46.407297 kubelet[2793]: I0707 00:52:46.405661 2793 factory.go:221] Registration of the containerd container factory successfully Jul 7 00:52:46.411438 kubelet[2793]: I0707 00:52:46.411384 2793 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 00:52:46.412602 kubelet[2793]: I0707 00:52:46.412574 2793 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 00:52:46.412699 kubelet[2793]: I0707 00:52:46.412654 2793 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 00:52:46.412760 kubelet[2793]: I0707 00:52:46.412724 2793 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 00:52:46.412816 kubelet[2793]: E0707 00:52:46.412783 2793 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:52:46.437702 kubelet[2793]: E0707 00:52:46.437670 2793 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:52:46.513632 kubelet[2793]: E0707 00:52:46.513525 2793 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 00:52:46.539696 kubelet[2793]: I0707 00:52:46.539435 2793 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 00:52:46.539696 kubelet[2793]: I0707 00:52:46.539493 2793 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 00:52:46.539696 kubelet[2793]: I0707 00:52:46.539598 2793 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:52:46.541214 kubelet[2793]: I0707 00:52:46.540652 2793 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 00:52:46.541214 kubelet[2793]: I0707 00:52:46.541006 2793 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 00:52:46.541214 kubelet[2793]: I0707 00:52:46.541125 2793 policy_none.go:49] "None policy: Start" Jul 7 00:52:46.544014 kubelet[2793]: I0707 00:52:46.543977 2793 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 00:52:46.544480 kubelet[2793]: I0707 00:52:46.544398 2793 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:52:46.545238 kubelet[2793]: I0707 00:52:46.545043 2793 state_mem.go:75] "Updated machine memory state" Jul 7 00:52:46.555165 kubelet[2793]: I0707 00:52:46.554477 2793 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 00:52:46.556402 kubelet[2793]: I0707 00:52:46.556381 2793 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:52:46.556761 kubelet[2793]: I0707 00:52:46.556591 2793 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:52:46.560718 kubelet[2793]: I0707 00:52:46.560678 2793 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:52:46.798948 kubelet[2793]: W0707 00:52:46.798189 2793 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 00:52:46.798948 kubelet[2793]: I0707 00:52:46.798866 2793 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 00:52:46.799312 kubelet[2793]: W0707 00:52:46.799033 2793 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 00:52:46.805048 kubelet[2793]: I0707 00:52:46.804726 2793 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:52:46.823800 kubelet[2793]: I0707 00:52:46.823194 2793 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:52:46.823800 kubelet[2793]: I0707 00:52:46.823335 2793 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:52:46.851154 kubelet[2793]: I0707 00:52:46.850865 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-4-7-8dfaddf5bb.novalocal" podStartSLOduration=3.850802783 podStartE2EDuration="3.850802783s" podCreationTimestamp="2025-07-07 00:52:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:52:46.832303032 +0000 UTC m=+0.642981773" 
watchObservedRunningTime="2025-07-07 00:52:46.850802783 +0000 UTC m=+0.661481514" Jul 7 00:52:46.862403 kubelet[2793]: I0707 00:52:46.861938 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-4-7-8dfaddf5bb.novalocal" podStartSLOduration=0.861910921 podStartE2EDuration="861.910921ms" podCreationTimestamp="2025-07-07 00:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:52:46.85167555 +0000 UTC m=+0.662354291" watchObservedRunningTime="2025-07-07 00:52:46.861910921 +0000 UTC m=+0.672589662" Jul 7 00:52:46.893592 kubelet[2793]: I0707 00:52:46.893179 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71af1a208fd8b2e8ada0b973b3974e53-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-4-7-8dfaddf5bb.novalocal\" (UID: \"71af1a208fd8b2e8ada0b973b3974e53\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:52:46.893592 kubelet[2793]: I0707 00:52:46.893235 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5d863b1772d064b34bcab50024f73659-kubeconfig\") pod \"kube-scheduler-ci-4081-3-4-7-8dfaddf5bb.novalocal\" (UID: \"5d863b1772d064b34bcab50024f73659\") " pod="kube-system/kube-scheduler-ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:52:46.893592 kubelet[2793]: I0707 00:52:46.893263 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1565a39f14f48843a73850a6270528b-ca-certs\") pod \"kube-apiserver-ci-4081-3-4-7-8dfaddf5bb.novalocal\" (UID: \"f1565a39f14f48843a73850a6270528b\") " pod="kube-system/kube-apiserver-ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:52:46.893592 kubelet[2793]: I0707 00:52:46.893283 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1565a39f14f48843a73850a6270528b-k8s-certs\") pod \"kube-apiserver-ci-4081-3-4-7-8dfaddf5bb.novalocal\" (UID: \"f1565a39f14f48843a73850a6270528b\") " pod="kube-system/kube-apiserver-ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:52:46.893592 kubelet[2793]: I0707 00:52:46.893305 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1565a39f14f48843a73850a6270528b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-4-7-8dfaddf5bb.novalocal\" (UID: \"f1565a39f14f48843a73850a6270528b\") " pod="kube-system/kube-apiserver-ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:52:46.893952 kubelet[2793]: I0707 00:52:46.893325 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71af1a208fd8b2e8ada0b973b3974e53-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-4-7-8dfaddf5bb.novalocal\" (UID: \"71af1a208fd8b2e8ada0b973b3974e53\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:52:46.893952 kubelet[2793]: I0707 00:52:46.893364 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/71af1a208fd8b2e8ada0b973b3974e53-ca-certs\") pod \"kube-controller-manager-ci-4081-3-4-7-8dfaddf5bb.novalocal\" (UID: \"71af1a208fd8b2e8ada0b973b3974e53\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:52:46.893952 kubelet[2793]: I0707 00:52:46.893387 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71af1a208fd8b2e8ada0b973b3974e53-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-4-7-8dfaddf5bb.novalocal\" (UID: \"71af1a208fd8b2e8ada0b973b3974e53\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:52:46.893952 kubelet[2793]: I0707 00:52:46.893418 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71af1a208fd8b2e8ada0b973b3974e53-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-4-7-8dfaddf5bb.novalocal\" (UID: \"71af1a208fd8b2e8ada0b973b3974e53\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:52:47.062881 kubelet[2793]: I0707 00:52:47.062259 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-4-7-8dfaddf5bb.novalocal" podStartSLOduration=1.062236892 podStartE2EDuration="1.062236892s" podCreationTimestamp="2025-07-07 00:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:52:46.862234327 +0000 UTC m=+0.672913068" watchObservedRunningTime="2025-07-07 00:52:47.062236892 +0000 UTC m=+0.872915623" Jul 7 00:52:50.683697 kubelet[2793]: I0707 00:52:50.683598 2793 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 00:52:50.685114 containerd[1579]: time="2025-07-07T00:52:50.684741852Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 7 00:52:50.688423 kubelet[2793]: I0707 00:52:50.685751 2793 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 00:52:51.629074 kubelet[2793]: I0707 00:52:51.628823 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4a76c9a5-e302-4db6-a2b1-b2fabe1f9094-kube-proxy\") pod \"kube-proxy-xx2x8\" (UID: \"4a76c9a5-e302-4db6-a2b1-b2fabe1f9094\") " pod="kube-system/kube-proxy-xx2x8" Jul 7 00:52:51.630138 kubelet[2793]: I0707 00:52:51.629816 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpdxm\" (UniqueName: \"kubernetes.io/projected/4a76c9a5-e302-4db6-a2b1-b2fabe1f9094-kube-api-access-hpdxm\") pod \"kube-proxy-xx2x8\" (UID: \"4a76c9a5-e302-4db6-a2b1-b2fabe1f9094\") " pod="kube-system/kube-proxy-xx2x8" Jul 7 00:52:51.630319 kubelet[2793]: I0707 00:52:51.630170 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a76c9a5-e302-4db6-a2b1-b2fabe1f9094-xtables-lock\") pod \"kube-proxy-xx2x8\" (UID: \"4a76c9a5-e302-4db6-a2b1-b2fabe1f9094\") " pod="kube-system/kube-proxy-xx2x8" Jul 7 00:52:51.630319 kubelet[2793]: I0707 00:52:51.630226 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a76c9a5-e302-4db6-a2b1-b2fabe1f9094-lib-modules\") pod \"kube-proxy-xx2x8\" (UID: \"4a76c9a5-e302-4db6-a2b1-b2fabe1f9094\") " pod="kube-system/kube-proxy-xx2x8" Jul 7 00:52:51.824867 containerd[1579]: time="2025-07-07T00:52:51.824804531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xx2x8,Uid:4a76c9a5-e302-4db6-a2b1-b2fabe1f9094,Namespace:kube-system,Attempt:0,}" Jul 7 00:52:51.872110 containerd[1579]: time="2025-07-07T00:52:51.871937464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:52:51.872110 containerd[1579]: time="2025-07-07T00:52:51.872052480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:52:51.872110 containerd[1579]: time="2025-07-07T00:52:51.872069722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:52:51.872681 containerd[1579]: time="2025-07-07T00:52:51.872535435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:52:51.932183 kubelet[2793]: I0707 00:52:51.932012 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp6xt\" (UniqueName: \"kubernetes.io/projected/a7951160-269f-4fdc-8038-6854e7964393-kube-api-access-dp6xt\") pod \"tigera-operator-5bf8dfcb4-bzlgb\" (UID: \"a7951160-269f-4fdc-8038-6854e7964393\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-bzlgb" Jul 7 00:52:51.933482 kubelet[2793]: I0707 00:52:51.933341 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a7951160-269f-4fdc-8038-6854e7964393-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-bzlgb\" (UID: \"a7951160-269f-4fdc-8038-6854e7964393\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-bzlgb" Jul 7 00:52:51.935016 containerd[1579]: time="2025-07-07T00:52:51.934967847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xx2x8,Uid:4a76c9a5-e302-4db6-a2b1-b2fabe1f9094,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f8d1062c7190941c4dc6dee92172467a516cf09341ab47836fc0f8cc1103a29\"" Jul 7 00:52:51.940330 containerd[1579]: time="2025-07-07T00:52:51.940258655Z" level=info msg="CreateContainer within sandbox \"0f8d1062c7190941c4dc6dee92172467a516cf09341ab47836fc0f8cc1103a29\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 00:52:51.969367 containerd[1579]: time="2025-07-07T00:52:51.969150640Z" level=info msg="CreateContainer within sandbox \"0f8d1062c7190941c4dc6dee92172467a516cf09341ab47836fc0f8cc1103a29\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f1a7df24de9a3394ecb59c3d5c6402ce8b6a99d4ac1ee150a49e28ca4b7bdf42\"" Jul 7 00:52:51.970319 containerd[1579]: time="2025-07-07T00:52:51.970224896Z" level=info msg="StartContainer for \"f1a7df24de9a3394ecb59c3d5c6402ce8b6a99d4ac1ee150a49e28ca4b7bdf42\"" Jul 7 00:52:52.070424 containerd[1579]: time="2025-07-07T00:52:52.069214221Z" level=info msg="StartContainer for \"f1a7df24de9a3394ecb59c3d5c6402ce8b6a99d4ac1ee150a49e28ca4b7bdf42\" returns successfully" Jul 7 00:52:52.112196 containerd[1579]: time="2025-07-07T00:52:52.112104480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-bzlgb,Uid:a7951160-269f-4fdc-8038-6854e7964393,Namespace:tigera-operator,Attempt:0,}" Jul 7 00:52:52.175821 containerd[1579]: time="2025-07-07T00:52:52.175244125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:52:52.175821 containerd[1579]: time="2025-07-07T00:52:52.175332220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:52:52.175821 containerd[1579]: time="2025-07-07T00:52:52.175376533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:52:52.175821 containerd[1579]: time="2025-07-07T00:52:52.175487852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:52:52.306133 containerd[1579]: time="2025-07-07T00:52:52.305987101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-bzlgb,Uid:a7951160-269f-4fdc-8038-6854e7964393,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"52641746f31c4e78d16e446d2c37622b1862d6c871a68505ded44e17a8e5fc7e\"" Jul 7 00:52:52.309927 containerd[1579]: time="2025-07-07T00:52:52.309838197Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 7 00:52:52.524652 kubelet[2793]: I0707 00:52:52.524537 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xx2x8" podStartSLOduration=1.524514485 podStartE2EDuration="1.524514485s" podCreationTimestamp="2025-07-07 00:52:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:52:52.524259136 +0000 UTC m=+6.334937877" watchObservedRunningTime="2025-07-07 00:52:52.524514485 +0000 UTC m=+6.335193216" Jul 7 00:52:52.789783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3455347496.mount: Deactivated successfully. Jul 7 00:52:53.876928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3159417991.mount: Deactivated successfully. Jul 7 00:52:55.112422 containerd[1579]: time="2025-07-07T00:52:55.110920838Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:52:55.115322 containerd[1579]: time="2025-07-07T00:52:55.115225134Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 7 00:52:55.117255 containerd[1579]: time="2025-07-07T00:52:55.117189259Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:52:55.121284 containerd[1579]: time="2025-07-07T00:52:55.121210433Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:52:55.123245 containerd[1579]: time="2025-07-07T00:52:55.123168848Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.813234612s" Jul 7 00:52:55.123245 containerd[1579]: time="2025-07-07T00:52:55.123243378Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 7 00:52:55.133194 containerd[1579]: time="2025-07-07T00:52:55.133111793Z" level=info msg="CreateContainer within sandbox \"52641746f31c4e78d16e446d2c37622b1862d6c871a68505ded44e17a8e5fc7e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 7 00:52:55.162754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1751992706.mount: Deactivated successfully. 
Jul 7 00:52:55.171616 containerd[1579]: time="2025-07-07T00:52:55.171527130Z" level=info msg="CreateContainer within sandbox \"52641746f31c4e78d16e446d2c37622b1862d6c871a68505ded44e17a8e5fc7e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e3f9c3b1dcfe417dcaea5e8b5537f4e4bafecc49c1d178836640e4b45d4e27b0\"" Jul 7 00:52:55.172400 containerd[1579]: time="2025-07-07T00:52:55.172338662Z" level=info msg="StartContainer for \"e3f9c3b1dcfe417dcaea5e8b5537f4e4bafecc49c1d178836640e4b45d4e27b0\"" Jul 7 00:52:55.229113 systemd[1]: run-containerd-runc-k8s.io-e3f9c3b1dcfe417dcaea5e8b5537f4e4bafecc49c1d178836640e4b45d4e27b0-runc.A29SJ8.mount: Deactivated successfully. Jul 7 00:52:55.277817 containerd[1579]: time="2025-07-07T00:52:55.277748426Z" level=info msg="StartContainer for \"e3f9c3b1dcfe417dcaea5e8b5537f4e4bafecc49c1d178836640e4b45d4e27b0\" returns successfully" Jul 7 00:52:55.556429 kubelet[2793]: I0707 00:52:55.555901 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-bzlgb" podStartSLOduration=1.736842719 podStartE2EDuration="4.555465981s" podCreationTimestamp="2025-07-07 00:52:51 +0000 UTC" firstStartedPulling="2025-07-07 00:52:52.307862079 +0000 UTC m=+6.118540810" lastFinishedPulling="2025-07-07 00:52:55.12648529 +0000 UTC m=+8.937164072" observedRunningTime="2025-07-07 00:52:55.554900951 +0000 UTC m=+9.365579732" watchObservedRunningTime="2025-07-07 00:52:55.555465981 +0000 UTC m=+9.366144832" Jul 7 00:53:03.261263 sudo[1867]: pam_unix(sudo:session): session closed for user root Jul 7 00:53:03.545994 sshd[1860]: pam_unix(sshd:session): session closed for user core Jul 7 00:53:03.574662 systemd[1]: sshd@8-172.24.4.161:22-172.24.4.1:35006.service: Deactivated successfully. Jul 7 00:53:03.584907 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 00:53:03.587548 systemd-logind[1555]: Session 11 logged out. Waiting for processes to exit. Jul 7 00:53:03.594196 systemd-logind[1555]: Removed session 11. 
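The two pod_startup_latency_tracker records above encode a simple relationship: podStartE2EDuration is the observed-running time minus the pod creation timestamp, and podStartSLOduration additionally excludes the image-pull window. For kube-proxy the pull stamps are the zero time (no pull happened), so the two durations coincide; for tigera-operator the ~2.82 s pull is subtracted. A minimal sketch reproducing the tigera-operator numbers from the timestamps in the log (kubelet's tracker rounds slightly differently, so the SLO value matches only approximately):

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse(time.RFC3339Nano, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the tigera-operator record in the log.
	created := mustParse("2025-07-07T00:52:51Z")
	firstPull := mustParse("2025-07-07T00:52:52.307862079Z")
	lastPull := mustParse("2025-07-07T00:52:55.126485290Z")
	running := mustParse("2025-07-07T00:52:55.555465981Z")

	e2e := running.Sub(created)          // podStartE2EDuration: 4.555465981s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: ~1.7368s
	fmt.Println("e2e:", e2e, "slo:", slo)
}
```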
Jul 7 00:53:08.364374 kubelet[2793]: I0707 00:53:08.363014 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34ef4e42-da8c-492d-95cf-3ff677a568bb-tigera-ca-bundle\") pod \"calico-typha-58f9d9c67d-cbfbx\" (UID: \"34ef4e42-da8c-492d-95cf-3ff677a568bb\") " pod="calico-system/calico-typha-58f9d9c67d-cbfbx" Jul 7 00:53:08.364374 kubelet[2793]: I0707 00:53:08.363165 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/34ef4e42-da8c-492d-95cf-3ff677a568bb-typha-certs\") pod \"calico-typha-58f9d9c67d-cbfbx\" (UID: \"34ef4e42-da8c-492d-95cf-3ff677a568bb\") " pod="calico-system/calico-typha-58f9d9c67d-cbfbx" Jul 7 00:53:08.364374 kubelet[2793]: I0707 00:53:08.363268 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc7kb\" (UniqueName: \"kubernetes.io/projected/34ef4e42-da8c-492d-95cf-3ff677a568bb-kube-api-access-gc7kb\") pod \"calico-typha-58f9d9c67d-cbfbx\" (UID: \"34ef4e42-da8c-492d-95cf-3ff677a568bb\") " pod="calico-system/calico-typha-58f9d9c67d-cbfbx" Jul 7 00:53:08.626135 containerd[1579]: time="2025-07-07T00:53:08.624462799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58f9d9c67d-cbfbx,Uid:34ef4e42-da8c-492d-95cf-3ff677a568bb,Namespace:calico-system,Attempt:0,}" Jul 7 00:53:08.668458 kubelet[2793]: I0707 00:53:08.666037 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/998702a2-fe40-4a18-b025-813227c8e741-policysync\") pod \"calico-node-vqpnj\" (UID: \"998702a2-fe40-4a18-b025-813227c8e741\") " pod="calico-system/calico-node-vqpnj" Jul 7 00:53:08.669217 kubelet[2793]: I0707 00:53:08.668438 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/998702a2-fe40-4a18-b025-813227c8e741-xtables-lock\") pod \"calico-node-vqpnj\" (UID: \"998702a2-fe40-4a18-b025-813227c8e741\") " pod="calico-system/calico-node-vqpnj" Jul 7 00:53:08.669217 kubelet[2793]: I0707 00:53:08.668516 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/998702a2-fe40-4a18-b025-813227c8e741-cni-log-dir\") pod \"calico-node-vqpnj\" (UID: \"998702a2-fe40-4a18-b025-813227c8e741\") " pod="calico-system/calico-node-vqpnj" Jul 7 00:53:08.669217 kubelet[2793]: I0707 00:53:08.668619 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/998702a2-fe40-4a18-b025-813227c8e741-lib-modules\") pod \"calico-node-vqpnj\" (UID: \"998702a2-fe40-4a18-b025-813227c8e741\") " pod="calico-system/calico-node-vqpnj" Jul 7 00:53:08.669217 kubelet[2793]: I0707 00:53:08.668702 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/998702a2-fe40-4a18-b025-813227c8e741-var-run-calico\") pod \"calico-node-vqpnj\" (UID: \"998702a2-fe40-4a18-b025-813227c8e741\") " pod="calico-system/calico-node-vqpnj" Jul 7 00:53:08.669217 kubelet[2793]: I0707 00:53:08.668736 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/998702a2-fe40-4a18-b025-813227c8e741-cni-bin-dir\") pod \"calico-node-vqpnj\" (UID: \"998702a2-fe40-4a18-b025-813227c8e741\") " pod="calico-system/calico-node-vqpnj" Jul 7 00:53:08.669444 kubelet[2793]: I0707 00:53:08.668812 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/998702a2-fe40-4a18-b025-813227c8e741-cni-net-dir\") pod \"calico-node-vqpnj\" (UID: \"998702a2-fe40-4a18-b025-813227c8e741\") " pod="calico-system/calico-node-vqpnj" Jul 7 00:53:08.669444 kubelet[2793]: I0707 00:53:08.668947 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzl56\" (UniqueName: \"kubernetes.io/projected/998702a2-fe40-4a18-b025-813227c8e741-kube-api-access-rzl56\") pod \"calico-node-vqpnj\" (UID: \"998702a2-fe40-4a18-b025-813227c8e741\") " pod="calico-system/calico-node-vqpnj" Jul 7 00:53:08.669444 kubelet[2793]: I0707 00:53:08.669046 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/998702a2-fe40-4a18-b025-813227c8e741-tigera-ca-bundle\") pod \"calico-node-vqpnj\" (UID: \"998702a2-fe40-4a18-b025-813227c8e741\") " pod="calico-system/calico-node-vqpnj" Jul 7 00:53:08.669444 kubelet[2793]: I0707 00:53:08.669149 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/998702a2-fe40-4a18-b025-813227c8e741-var-lib-calico\") pod \"calico-node-vqpnj\" (UID: \"998702a2-fe40-4a18-b025-813227c8e741\") " pod="calico-system/calico-node-vqpnj" Jul 7 00:53:08.671838 kubelet[2793]: I0707 00:53:08.669230 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/998702a2-fe40-4a18-b025-813227c8e741-flexvol-driver-host\") pod \"calico-node-vqpnj\" (UID: \"998702a2-fe40-4a18-b025-813227c8e741\") " pod="calico-system/calico-node-vqpnj" Jul 7 00:53:08.671838 kubelet[2793]: I0707 00:53:08.671380 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/998702a2-fe40-4a18-b025-813227c8e741-node-certs\") pod \"calico-node-vqpnj\" (UID: \"998702a2-fe40-4a18-b025-813227c8e741\") " pod="calico-system/calico-node-vqpnj" Jul 7 00:53:08.742412 containerd[1579]: time="2025-07-07T00:53:08.739420810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:53:08.742412 containerd[1579]: time="2025-07-07T00:53:08.741486014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:53:08.742412 containerd[1579]: time="2025-07-07T00:53:08.741505440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:53:08.743090 containerd[1579]: time="2025-07-07T00:53:08.741680429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:53:08.807274 kubelet[2793]: E0707 00:53:08.806693 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:08.807687 kubelet[2793]: W0707 00:53:08.807535 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:08.808143 kubelet[2793]: E0707 00:53:08.808091 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:08.818552 kubelet[2793]: E0707 00:53:08.817466 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:08.818552 kubelet[2793]: W0707 00:53:08.817508 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:08.818552 kubelet[2793]: E0707 00:53:08.817547 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:08.879308 containerd[1579]: time="2025-07-07T00:53:08.879131432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58f9d9c67d-cbfbx,Uid:34ef4e42-da8c-492d-95cf-3ff677a568bb,Namespace:calico-system,Attempt:0,} returns sandbox id \"82bad23255414f4fcf50c88dc56c7a812e08ca02139fc9c5d5fb7e806eb977b2\"" Jul 7 00:53:08.890250 containerd[1579]: time="2025-07-07T00:53:08.888678090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 7 00:53:08.929279 kubelet[2793]: E0707 00:53:08.929054 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zql2q" podUID="c53a8470-3943-407f-8401-5976894cd214" Jul 7 00:53:08.948112 containerd[1579]: time="2025-07-07T00:53:08.947836951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vqpnj,Uid:998702a2-fe40-4a18-b025-813227c8e741,Namespace:calico-system,Attempt:0,}" Jul 7 00:53:08.976241 kubelet[2793]: E0707 00:53:08.975851 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:08.976241 kubelet[2793]: W0707 00:53:08.975877 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:08.976241 kubelet[2793]: E0707 00:53:08.975914 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:53:08.976712 kubelet[2793]: E0707 00:53:08.976609 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:08.976712 kubelet[2793]: W0707 00:53:08.976643 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:08.976712 kubelet[2793]: E0707 00:53:08.976687 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:08.977533 kubelet[2793]: E0707 00:53:08.977341 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:08.977533 kubelet[2793]: W0707 00:53:08.977388 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:08.977533 kubelet[2793]: E0707 00:53:08.977403 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:08.978144 kubelet[2793]: E0707 00:53:08.977971 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:08.978144 kubelet[2793]: W0707 00:53:08.977986 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:08.978144 kubelet[2793]: E0707 00:53:08.978011 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:08.979111 kubelet[2793]: E0707 00:53:08.978850 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:08.979111 kubelet[2793]: W0707 00:53:08.978864 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:08.979111 kubelet[2793]: E0707 00:53:08.978876 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:08.979817 kubelet[2793]: E0707 00:53:08.979528 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:08.979817 kubelet[2793]: W0707 00:53:08.979574 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:08.979817 kubelet[2793]: E0707 00:53:08.979588 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:53:08.983046 kubelet[2793]: E0707 00:53:08.981822 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:08.983046 kubelet[2793]: W0707 00:53:08.981839 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:08.983046 kubelet[2793]: E0707 00:53:08.981863 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:08.985509 kubelet[2793]: E0707 00:53:08.984793 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:08.985509 kubelet[2793]: W0707 00:53:08.984811 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:08.985509 kubelet[2793]: E0707 00:53:08.984848 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:08.988595 kubelet[2793]: E0707 00:53:08.987821 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:08.988595 kubelet[2793]: W0707 00:53:08.987968 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:08.988595 kubelet[2793]: E0707 00:53:08.988049 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:08.990108 kubelet[2793]: E0707 00:53:08.989477 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:08.990108 kubelet[2793]: W0707 00:53:08.989918 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:08.990108 kubelet[2793]: E0707 00:53:08.989948 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:08.991672 kubelet[2793]: E0707 00:53:08.991072 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:08.991672 kubelet[2793]: W0707 00:53:08.991101 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:08.991672 kubelet[2793]: E0707 00:53:08.991117 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:53:08.994159 kubelet[2793]: E0707 00:53:08.992529 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:08.994159 kubelet[2793]: W0707 00:53:08.992543 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:08.994159 kubelet[2793]: E0707 00:53:08.992555 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:08.994159 kubelet[2793]: E0707 00:53:08.993649 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:08.994159 kubelet[2793]: W0707 00:53:08.993679 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:08.994159 kubelet[2793]: E0707 00:53:08.993784 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:08.996827 kubelet[2793]: E0707 00:53:08.995287 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:08.996827 kubelet[2793]: W0707 00:53:08.995329 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:08.996827 kubelet[2793]: E0707 00:53:08.995476 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:08.997981 kubelet[2793]: E0707 00:53:08.997294 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:08.997981 kubelet[2793]: W0707 00:53:08.997335 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:08.997981 kubelet[2793]: E0707 00:53:08.997796 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.000160 kubelet[2793]: E0707 00:53:08.999802 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.000160 kubelet[2793]: W0707 00:53:08.999815 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.000160 kubelet[2793]: E0707 00:53:08.999828 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:53:09.001323 kubelet[2793]: E0707 00:53:09.001184 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.001786 kubelet[2793]: W0707 00:53:09.001652 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.002497 kubelet[2793]: E0707 00:53:09.002023 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.003217 kubelet[2793]: E0707 00:53:09.002992 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.003217 kubelet[2793]: W0707 00:53:09.003007 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.003217 kubelet[2793]: E0707 00:53:09.003020 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.003828 kubelet[2793]: E0707 00:53:09.003703 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.003828 kubelet[2793]: W0707 00:53:09.003744 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.003828 kubelet[2793]: E0707 00:53:09.003772 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.004554 kubelet[2793]: E0707 00:53:09.004447 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.004554 kubelet[2793]: W0707 00:53:09.004458 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.004990 kubelet[2793]: E0707 00:53:09.004469 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.005645 kubelet[2793]: E0707 00:53:09.005571 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.005865 kubelet[2793]: W0707 00:53:09.005732 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.006081 kubelet[2793]: E0707 00:53:09.005778 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:53:09.006558 kubelet[2793]: I0707 00:53:09.006512 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c53a8470-3943-407f-8401-5976894cd214-registration-dir\") pod \"csi-node-driver-zql2q\" (UID: \"c53a8470-3943-407f-8401-5976894cd214\") " pod="calico-system/csi-node-driver-zql2q" Jul 7 00:53:09.007309 kubelet[2793]: E0707 00:53:09.007297 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.007529 kubelet[2793]: W0707 00:53:09.007396 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.007900 kubelet[2793]: E0707 00:53:09.007606 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.009061 kubelet[2793]: E0707 00:53:09.008905 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.009061 kubelet[2793]: W0707 00:53:09.008949 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.009061 kubelet[2793]: E0707 00:53:09.008990 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.009061 kubelet[2793]: I0707 00:53:09.009012 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c53a8470-3943-407f-8401-5976894cd214-socket-dir\") pod \"csi-node-driver-zql2q\" (UID: \"c53a8470-3943-407f-8401-5976894cd214\") " pod="calico-system/csi-node-driver-zql2q" Jul 7 00:53:09.010247 kubelet[2793]: E0707 00:53:09.009874 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.010247 kubelet[2793]: W0707 00:53:09.009957 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.010247 kubelet[2793]: E0707 00:53:09.009975 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.011059 kubelet[2793]: E0707 00:53:09.010902 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.011059 kubelet[2793]: W0707 00:53:09.010974 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.011059 kubelet[2793]: E0707 00:53:09.010997 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:53:09.011675 kubelet[2793]: E0707 00:53:09.011519 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.011675 kubelet[2793]: W0707 00:53:09.011539 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.011675 kubelet[2793]: E0707 00:53:09.011641 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.012121 kubelet[2793]: E0707 00:53:09.011915 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.012121 kubelet[2793]: W0707 00:53:09.011942 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.012121 kubelet[2793]: E0707 00:53:09.011964 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.012121 kubelet[2793]: I0707 00:53:09.012018 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c53a8470-3943-407f-8401-5976894cd214-varrun\") pod \"csi-node-driver-zql2q\" (UID: \"c53a8470-3943-407f-8401-5976894cd214\") " pod="calico-system/csi-node-driver-zql2q" Jul 7 00:53:09.012841 kubelet[2793]: E0707 00:53:09.012614 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.012841 kubelet[2793]: W0707 00:53:09.012667 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.012841 kubelet[2793]: E0707 00:53:09.012695 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.012841 kubelet[2793]: I0707 00:53:09.012792 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c53a8470-3943-407f-8401-5976894cd214-kubelet-dir\") pod \"csi-node-driver-zql2q\" (UID: \"c53a8470-3943-407f-8401-5976894cd214\") " pod="calico-system/csi-node-driver-zql2q" Jul 7 00:53:09.013669 kubelet[2793]: E0707 00:53:09.013436 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.013669 kubelet[2793]: W0707 00:53:09.013450 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.013958 kubelet[2793]: E0707 00:53:09.013867 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:53:09.013958 kubelet[2793]: E0707 00:53:09.013926 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.013958 kubelet[2793]: W0707 00:53:09.013942 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.014294 kubelet[2793]: E0707 00:53:09.014160 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.014294 kubelet[2793]: I0707 00:53:09.014254 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lljvs\" (UniqueName: \"kubernetes.io/projected/c53a8470-3943-407f-8401-5976894cd214-kube-api-access-lljvs\") pod \"csi-node-driver-zql2q\" (UID: \"c53a8470-3943-407f-8401-5976894cd214\") " pod="calico-system/csi-node-driver-zql2q" Jul 7 00:53:09.014741 kubelet[2793]: E0707 00:53:09.014729 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.015003 kubelet[2793]: W0707 00:53:09.014899 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.015003 kubelet[2793]: E0707 00:53:09.014928 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.016383 kubelet[2793]: E0707 00:53:09.016066 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.016383 kubelet[2793]: W0707 00:53:09.016080 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.016383 kubelet[2793]: E0707 00:53:09.016092 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.017692 kubelet[2793]: E0707 00:53:09.017428 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.017692 kubelet[2793]: W0707 00:53:09.017441 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.017692 kubelet[2793]: E0707 00:53:09.017461 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:53:09.018068 kubelet[2793]: E0707 00:53:09.018056 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.018173 kubelet[2793]: W0707 00:53:09.018125 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.018173 kubelet[2793]: E0707 00:53:09.018140 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.019005 kubelet[2793]: E0707 00:53:09.018849 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.019005 kubelet[2793]: W0707 00:53:09.018863 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.019005 kubelet[2793]: E0707 00:53:09.018875 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.110517 containerd[1579]: time="2025-07-07T00:53:09.109263135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:53:09.110517 containerd[1579]: time="2025-07-07T00:53:09.109945305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:53:09.110517 containerd[1579]: time="2025-07-07T00:53:09.109972526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:53:09.112284 containerd[1579]: time="2025-07-07T00:53:09.110617446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:53:09.121313 kubelet[2793]: E0707 00:53:09.121257 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.121313 kubelet[2793]: W0707 00:53:09.121299 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.122168 kubelet[2793]: E0707 00:53:09.121362 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.122884 kubelet[2793]: E0707 00:53:09.122331 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.122884 kubelet[2793]: W0707 00:53:09.122369 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.122884 kubelet[2793]: E0707 00:53:09.122391 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:53:09.122884 kubelet[2793]: E0707 00:53:09.122860 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.122884 kubelet[2793]: W0707 00:53:09.122871 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.122884 kubelet[2793]: E0707 00:53:09.122884 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.124077 kubelet[2793]: E0707 00:53:09.123404 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.124077 kubelet[2793]: W0707 00:53:09.123419 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.124077 kubelet[2793]: E0707 00:53:09.123430 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.125362 kubelet[2793]: E0707 00:53:09.125030 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.126029 kubelet[2793]: W0707 00:53:09.125459 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.126029 kubelet[2793]: E0707 00:53:09.125501 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.127484 kubelet[2793]: E0707 00:53:09.127407 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.127484 kubelet[2793]: W0707 00:53:09.127424 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.127484 kubelet[2793]: E0707 00:53:09.127473 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.131502 kubelet[2793]: E0707 00:53:09.130038 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.131502 kubelet[2793]: W0707 00:53:09.130059 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.131502 kubelet[2793]: E0707 00:53:09.131045 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:53:09.133584 kubelet[2793]: E0707 00:53:09.132653 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.133584 kubelet[2793]: W0707 00:53:09.132685 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.134203 kubelet[2793]: E0707 00:53:09.133805 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.158695 kubelet[2793]: E0707 00:53:09.158457 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.158695 kubelet[2793]: W0707 00:53:09.158488 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.159172 kubelet[2793]: E0707 00:53:09.159002 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.159387 kubelet[2793]: E0707 00:53:09.159366 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.159663 kubelet[2793]: W0707 00:53:09.159507 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.159926 kubelet[2793]: E0707 00:53:09.159822 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.159926 kubelet[2793]: W0707 00:53:09.159835 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.159926 kubelet[2793]: E0707 00:53:09.159830 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.159926 kubelet[2793]: E0707 00:53:09.159877 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.161332 kubelet[2793]: E0707 00:53:09.161030 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.161332 kubelet[2793]: W0707 00:53:09.161044 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.161826 kubelet[2793]: E0707 00:53:09.161385 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:53:09.163039 kubelet[2793]: E0707 00:53:09.162299 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.163039 kubelet[2793]: W0707 00:53:09.162312 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.163039 kubelet[2793]: E0707 00:53:09.162819 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.163039 kubelet[2793]: E0707 00:53:09.163001 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.164513 kubelet[2793]: W0707 00:53:09.163011 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.164513 kubelet[2793]: E0707 00:53:09.164263 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.164513 kubelet[2793]: W0707 00:53:09.164274 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.164809 kubelet[2793]: E0707 00:53:09.164795 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.166531 kubelet[2793]: W0707 00:53:09.164941 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.166531 kubelet[2793]: E0707 00:53:09.165908 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.166531 kubelet[2793]: E0707 00:53:09.165941 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.166531 kubelet[2793]: E0707 00:53:09.165953 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.167229 kubelet[2793]: E0707 00:53:09.167043 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.167229 kubelet[2793]: W0707 00:53:09.167063 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.167229 kubelet[2793]: E0707 00:53:09.167131 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:53:09.168376 kubelet[2793]: E0707 00:53:09.167841 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.168376 kubelet[2793]: W0707 00:53:09.167856 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.168376 kubelet[2793]: E0707 00:53:09.167997 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.170068 kubelet[2793]: E0707 00:53:09.169859 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.170068 kubelet[2793]: W0707 00:53:09.169873 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.170355 kubelet[2793]: E0707 00:53:09.170194 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.170355 kubelet[2793]: W0707 00:53:09.170207 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.172107 kubelet[2793]: E0707 00:53:09.171043 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.172107 kubelet[2793]: E0707 00:53:09.171054 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.172107 kubelet[2793]: E0707 00:53:09.171141 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.172107 kubelet[2793]: W0707 00:53:09.171179 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.172107 kubelet[2793]: E0707 00:53:09.171284 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:53:09.172107 kubelet[2793]: E0707 00:53:09.171512 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.172107 kubelet[2793]: W0707 00:53:09.171523 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.172107 kubelet[2793]: E0707 00:53:09.171755 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.172107 kubelet[2793]: W0707 00:53:09.171778 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.172107 kubelet[2793]: E0707 00:53:09.171789 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.174645 kubelet[2793]: E0707 00:53:09.172419 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.174645 kubelet[2793]: E0707 00:53:09.172999 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.174645 kubelet[2793]: W0707 00:53:09.173042 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.174645 kubelet[2793]: E0707 00:53:09.173085 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.174645 kubelet[2793]: E0707 00:53:09.173272 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.174645 kubelet[2793]: W0707 00:53:09.173290 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.174645 kubelet[2793]: E0707 00:53:09.173306 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:09.191081 kubelet[2793]: E0707 00:53:09.190710 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:09.191081 kubelet[2793]: W0707 00:53:09.190739 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:09.191081 kubelet[2793]: E0707 00:53:09.190766 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:53:09.233562 containerd[1579]: time="2025-07-07T00:53:09.233485301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vqpnj,Uid:998702a2-fe40-4a18-b025-813227c8e741,Namespace:calico-system,Attempt:0,} returns sandbox id \"bbce5f5df4a05b88097f50a890cee2b27bcb58b96a36a6498632bbed2b129571\"" Jul 7 00:53:10.418645 kubelet[2793]: E0707 00:53:10.414935 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zql2q" podUID="c53a8470-3943-407f-8401-5976894cd214" Jul 7 00:53:11.519839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2621347163.mount: Deactivated successfully. Jul 7 00:53:12.414620 kubelet[2793]: E0707 00:53:12.414036 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zql2q" podUID="c53a8470-3943-407f-8401-5976894cd214" Jul 7 00:53:13.090519 containerd[1579]: time="2025-07-07T00:53:13.089057538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:53:13.094819 containerd[1579]: time="2025-07-07T00:53:13.091943932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 7 00:53:13.095275 containerd[1579]: time="2025-07-07T00:53:13.095214035Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:53:13.100141 containerd[1579]: time="2025-07-07T00:53:13.100061508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:53:13.101304 containerd[1579]: time="2025-07-07T00:53:13.101264796Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 4.212528336s" Jul 7 00:53:13.101755 containerd[1579]: time="2025-07-07T00:53:13.101711814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 7 00:53:13.106721 containerd[1579]: time="2025-07-07T00:53:13.105532220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 00:53:13.133404 containerd[1579]: time="2025-07-07T00:53:13.133129535Z" level=info msg="CreateContainer within sandbox \"82bad23255414f4fcf50c88dc56c7a812e08ca02139fc9c5d5fb7e806eb977b2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 00:53:13.170137 containerd[1579]: time="2025-07-07T00:53:13.170012007Z" level=info msg="CreateContainer within sandbox \"82bad23255414f4fcf50c88dc56c7a812e08ca02139fc9c5d5fb7e806eb977b2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns 
container id \"f5f6f1150c37dbc321a4d3be92ab0789f715b03562724d66f578c746c2ea7191\"" Jul 7 00:53:13.170792 containerd[1579]: time="2025-07-07T00:53:13.170742968Z" level=info msg="StartContainer for \"f5f6f1150c37dbc321a4d3be92ab0789f715b03562724d66f578c746c2ea7191\"" Jul 7 00:53:13.276826 containerd[1579]: time="2025-07-07T00:53:13.276553429Z" level=info msg="StartContainer for \"f5f6f1150c37dbc321a4d3be92ab0789f715b03562724d66f578c746c2ea7191\" returns successfully" Jul 7 00:53:13.662599 kubelet[2793]: E0707 00:53:13.662193 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.662599 kubelet[2793]: W0707 00:53:13.662251 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.662599 kubelet[2793]: E0707 00:53:13.662324 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:13.665416 kubelet[2793]: E0707 00:53:13.663386 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.665416 kubelet[2793]: W0707 00:53:13.663400 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.665416 kubelet[2793]: E0707 00:53:13.663431 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:13.665416 kubelet[2793]: E0707 00:53:13.665301 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.665416 kubelet[2793]: W0707 00:53:13.665315 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.665416 kubelet[2793]: E0707 00:53:13.665330 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:13.665780 kubelet[2793]: E0707 00:53:13.665763 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.665780 kubelet[2793]: W0707 00:53:13.665774 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.665845 kubelet[2793]: E0707 00:53:13.665785 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:53:13.667841 kubelet[2793]: E0707 00:53:13.666632 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.667841 kubelet[2793]: W0707 00:53:13.666643 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.667841 kubelet[2793]: E0707 00:53:13.666654 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:13.667841 kubelet[2793]: E0707 00:53:13.666801 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.667841 kubelet[2793]: W0707 00:53:13.666810 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.667841 kubelet[2793]: E0707 00:53:13.666819 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:13.667841 kubelet[2793]: E0707 00:53:13.667486 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.667841 kubelet[2793]: W0707 00:53:13.667497 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.667841 kubelet[2793]: E0707 00:53:13.667509 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:13.667841 kubelet[2793]: E0707 00:53:13.667668 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.671271 kubelet[2793]: W0707 00:53:13.667678 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.671271 kubelet[2793]: E0707 00:53:13.667687 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:13.671271 kubelet[2793]: E0707 00:53:13.668120 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.671271 kubelet[2793]: W0707 00:53:13.668131 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.671271 kubelet[2793]: E0707 00:53:13.668142 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:53:13.671271 kubelet[2793]: E0707 00:53:13.668315 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.671271 kubelet[2793]: W0707 00:53:13.668325 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.671271 kubelet[2793]: E0707 00:53:13.668395 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:13.671271 kubelet[2793]: E0707 00:53:13.669217 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.671271 kubelet[2793]: W0707 00:53:13.669228 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.676084 kubelet[2793]: E0707 00:53:13.669239 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:13.676084 kubelet[2793]: E0707 00:53:13.669431 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.676084 kubelet[2793]: W0707 00:53:13.669441 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.676084 kubelet[2793]: E0707 00:53:13.669450 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:13.676084 kubelet[2793]: E0707 00:53:13.669795 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.676084 kubelet[2793]: W0707 00:53:13.669807 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.676084 kubelet[2793]: E0707 00:53:13.669817 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:13.676084 kubelet[2793]: E0707 00:53:13.669970 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.676084 kubelet[2793]: W0707 00:53:13.669980 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.676084 kubelet[2793]: E0707 00:53:13.669990 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:53:13.676455 kubelet[2793]: E0707 00:53:13.670160 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.676455 kubelet[2793]: W0707 00:53:13.670170 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.676455 kubelet[2793]: E0707 00:53:13.670180 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:13.676455 kubelet[2793]: E0707 00:53:13.670992 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.676455 kubelet[2793]: W0707 00:53:13.671040 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.676455 kubelet[2793]: E0707 00:53:13.671054 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:13.676455 kubelet[2793]: E0707 00:53:13.676091 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.676455 kubelet[2793]: W0707 00:53:13.676198 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.676455 kubelet[2793]: E0707 00:53:13.676312 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:13.676892 kubelet[2793]: E0707 00:53:13.676873 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.676892 kubelet[2793]: W0707 00:53:13.676889 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.676987 kubelet[2793]: E0707 00:53:13.676971 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:13.677211 kubelet[2793]: E0707 00:53:13.677171 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.677211 kubelet[2793]: W0707 00:53:13.677187 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.677594 kubelet[2793]: E0707 00:53:13.677274 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:53:13.677594 kubelet[2793]: E0707 00:53:13.677455 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.677594 kubelet[2793]: W0707 00:53:13.677464 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.677594 kubelet[2793]: E0707 00:53:13.677490 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:13.678834 kubelet[2793]: E0707 00:53:13.677687 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.678834 kubelet[2793]: W0707 00:53:13.677697 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.678834 kubelet[2793]: E0707 00:53:13.677710 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:13.678834 kubelet[2793]: E0707 00:53:13.677895 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.678834 kubelet[2793]: W0707 00:53:13.677905 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.678834 kubelet[2793]: E0707 00:53:13.677927 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:13.678834 kubelet[2793]: E0707 00:53:13.678547 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.678834 kubelet[2793]: W0707 00:53:13.678559 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.678834 kubelet[2793]: E0707 00:53:13.678649 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:13.679926 kubelet[2793]: E0707 00:53:13.679276 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.679926 kubelet[2793]: W0707 00:53:13.679292 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.679926 kubelet[2793]: E0707 00:53:13.679389 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:53:13.679926 kubelet[2793]: E0707 00:53:13.679514 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.679926 kubelet[2793]: W0707 00:53:13.679527 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.679926 kubelet[2793]: E0707 00:53:13.679632 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:13.679926 kubelet[2793]: E0707 00:53:13.679750 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.679926 kubelet[2793]: W0707 00:53:13.679760 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.679926 kubelet[2793]: E0707 00:53:13.679785 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:13.683040 kubelet[2793]: E0707 00:53:13.680015 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.683040 kubelet[2793]: W0707 00:53:13.680026 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.683040 kubelet[2793]: E0707 00:53:13.680037 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:13.683040 kubelet[2793]: E0707 00:53:13.680685 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.683040 kubelet[2793]: W0707 00:53:13.680697 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.683040 kubelet[2793]: E0707 00:53:13.680725 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:13.683040 kubelet[2793]: E0707 00:53:13.681173 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.683040 kubelet[2793]: W0707 00:53:13.681184 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.683040 kubelet[2793]: E0707 00:53:13.681294 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:53:13.683040 kubelet[2793]: E0707 00:53:13.682178 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.683400 kubelet[2793]: W0707 00:53:13.682190 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.683400 kubelet[2793]: E0707 00:53:13.682203 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:13.683400 kubelet[2793]: E0707 00:53:13.682475 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.683400 kubelet[2793]: W0707 00:53:13.682486 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.683400 kubelet[2793]: E0707 00:53:13.682497 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:13.683400 kubelet[2793]: E0707 00:53:13.682789 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.683400 kubelet[2793]: W0707 00:53:13.682800 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.683400 kubelet[2793]: E0707 00:53:13.682811 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:53:13.735407 kubelet[2793]: E0707 00:53:13.734278 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:53:13.735407 kubelet[2793]: W0707 00:53:13.734302 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:53:13.735407 kubelet[2793]: E0707 00:53:13.734324 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
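For context on what the probe wanted from /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: per the FlexVolume convention, the kubelet invokes the driver binary with the argument "init" and expects a JSON status object on stdout. The sketch below is a hypothetical stand-in that would satisfy the init call; it is not Calico's actual uds driver (that binary is installed later in this log by the flexvol-driver container):

package main

import (
	"encoding/json"
	"os"
)

// driverStatus mirrors the JSON shape a FlexVolume driver reports.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// "attach": false tells the kubelet this driver needs no
		// controller-side attach/detach step.
		json.NewEncoder(os.Stdout).Encode(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		return
	}
	// Calls the driver does not implement report "Not supported".
	json.NewEncoder(os.Stdout).Encode(driverStatus{Status: "Not supported"})
}

With a binary like this in place, the init call would return parseable JSON and the probe errors above would stop.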
Jul 7 00:53:14.414443 kubelet[2793]: E0707 00:53:14.414029 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zql2q" podUID="c53a8470-3943-407f-8401-5976894cd214"
Jul 7 00:53:14.621220 kubelet[2793]: I0707 00:53:14.621070 2793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 00:53:14.681610 kubelet[2793]: E0707 00:53:14.681059 2793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 00:53:14.681610 kubelet[2793]: W0707 00:53:14.681105 2793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 00:53:14.681610 kubelet[2793]: E0707 00:53:14.681151 2793 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same three-message sequence repeats roughly 30 more times between 00:53:14.682 and 00:53:14.712; verbatim duplicates condensed]
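These probe errors recur because the kubelet re-scans its FlexVolume plugin directory on every probe pass, and each scan keeps finding the nodeagent~uds directory without a usable executable; they stop only once the driver binary lands on disk, which the flexvol-driver container in the lines below provides. Roughly what the scan checks, as a simplified sketch (the vendor~driver directory layout is the FlexVolume convention; the path is taken from this log):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// execDir is the FlexVolume plugin directory seen in the log lines above.
const execDir = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec"

func main() {
	entries, err := os.ReadDir(execDir)
	if err != nil {
		fmt.Println("probe:", err)
		return
	}
	for _, e := range entries {
		if !e.IsDir() {
			continue
		}
		// A directory named "nodeagent~uds" must contain an executable "uds".
		parts := strings.SplitN(e.Name(), "~", 2)
		bin := filepath.Join(execDir, e.Name(), parts[len(parts)-1])
		if info, err := os.Stat(bin); err != nil || info.Mode()&0111 == 0 {
			fmt.Printf("driver %s: missing or not executable (%v)\n", e.Name(), err)
			continue
		}
		fmt.Printf("driver %s: ok\n", e.Name())
	}
}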
Jul 7 00:53:15.424889 kubelet[2793]: E0707 00:53:15.423857 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zql2q" podUID="c53a8470-3943-407f-8401-5976894cd214"
Jul 7 00:53:15.842459 containerd[1579]: time="2025-07-07T00:53:15.841475405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:53:15.844619 containerd[1579]: time="2025-07-07T00:53:15.844495650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956"
Jul 7 00:53:15.847837 containerd[1579]: time="2025-07-07T00:53:15.846441820Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:53:15.851124 containerd[1579]: time="2025-07-07T00:53:15.850060117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:53:15.851124 containerd[1579]: time="2025-07-07T00:53:15.850874034Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 2.745257916s"
Jul 7 00:53:15.851124 containerd[1579]: time="2025-07-07T00:53:15.850926162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\""
Jul 7 00:53:15.857646 containerd[1579]: time="2025-07-07T00:53:15.857604609Z" level=info msg="CreateContainer within sandbox \"bbce5f5df4a05b88097f50a890cee2b27bcb58b96a36a6498632bbed2b129571\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jul 7 00:53:15.892882 containerd[1579]: time="2025-07-07T00:53:15.892836604Z" level=info msg="CreateContainer within sandbox \"bbce5f5df4a05b88097f50a890cee2b27bcb58b96a36a6498632bbed2b129571\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a79346782bb2d493cefc9598fc39ef549c1e2e93f1a70608bcc1f1b486657da5\""
Jul 7 00:53:15.895624 containerd[1579]: time="2025-07-07T00:53:15.895553940Z" level=info msg="StartContainer for \"a79346782bb2d493cefc9598fc39ef549c1e2e93f1a70608bcc1f1b486657da5\""
Jul 7 00:53:16.011761 containerd[1579]: time="2025-07-07T00:53:16.011711618Z" level=info msg="StartContainer for \"a79346782bb2d493cefc9598fc39ef549c1e2e93f1a70608bcc1f1b486657da5\" returns successfully"
Jul 7 00:53:16.698060 kubelet[2793]: I0707 00:53:16.697688 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-58f9d9c67d-cbfbx" podStartSLOduration=4.4812293180000005 podStartE2EDuration="8.697455892s" podCreationTimestamp="2025-07-07 00:53:08 +0000 UTC" firstStartedPulling="2025-07-07 00:53:08.887154531 +0000 UTC m=+22.697833272" lastFinishedPulling="2025-07-07 00:53:13.103381115 +0000 UTC m=+26.914059846" observedRunningTime="2025-07-07 00:53:13.663649277 +0000 UTC m=+27.474328008" watchObservedRunningTime="2025-07-07 00:53:16.697455892 +0000 UTC m=+30.508134673"
Jul 7 00:53:16.812893 containerd[1579]: time="2025-07-07T00:53:16.812074932Z" level=info msg="shim disconnected" id=a79346782bb2d493cefc9598fc39ef549c1e2e93f1a70608bcc1f1b486657da5 namespace=k8s.io
Jul 7 00:53:16.812893 containerd[1579]: time="2025-07-07T00:53:16.812532700Z" level=warning msg="cleaning up after shim disconnected" id=a79346782bb2d493cefc9598fc39ef549c1e2e93f1a70608bcc1f1b486657da5 namespace=k8s.io
Jul 7 00:53:16.812893 containerd[1579]: time="2025-07-07T00:53:16.812575871Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:53:16.883107 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a79346782bb2d493cefc9598fc39ef549c1e2e93f1a70608bcc1f1b486657da5-rootfs.mount: Deactivated successfully.
Jul 7 00:53:17.413919 kubelet[2793]: E0707 00:53:17.413667 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zql2q" podUID="c53a8470-3943-407f-8401-5976894cd214"
Jul 7 00:53:17.650878 containerd[1579]: time="2025-07-07T00:53:17.647867051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Jul 7 00:53:19.414522 kubelet[2793]: E0707 00:53:19.414387 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zql2q" podUID="c53a8470-3943-407f-8401-5976894cd214"
Jul 7 00:53:21.413739 kubelet[2793]: E0707 00:53:21.413630 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zql2q" podUID="c53a8470-3943-407f-8401-5976894cd214"
Jul 7 00:53:23.414657 kubelet[2793]: E0707 00:53:23.414555 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zql2q" podUID="c53a8470-3943-407f-8401-5976894cd214"
Jul 7 00:53:23.761431 containerd[1579]: time="2025-07-07T00:53:23.761202649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:53:23.764520 containerd[1579]: time="2025-07-07T00:53:23.764456953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221"
Jul 7 00:53:23.766141 containerd[1579]: time="2025-07-07T00:53:23.765701288Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 00:53:23.769320 containerd[1579]: time="2025-07-07T00:53:23.769251910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
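The interleaved "cni plugin not initialized" / NetworkReady=false messages for csi-node-driver-zql2q have a parallel explanation: the container runtime reports its network as ready only once a CNI network configuration exists on disk, and that configuration is what the install-cni container in the following lines writes out. An illustrative check, assuming the conventional /etc/cni/net.d configuration directory (the actual directory is runtime configuration and is not stated in this log):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// NetworkReady stays false until a *.conf/*.conflist appears here.
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil {
		fmt.Println("no CNI config directory yet:", err)
		return
	}
	for _, e := range entries {
		if strings.HasSuffix(e.Name(), ".conf") || strings.HasSuffix(e.Name(), ".conflist") {
			fmt.Println("found CNI network config:", e.Name())
		}
	}
}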
Jul 7 00:53:23.770254 containerd[1579]: time="2025-07-07T00:53:23.770198244Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 6.122224873s"
Jul 7 00:53:23.770254 containerd[1579]: time="2025-07-07T00:53:23.770240514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\""
Jul 7 00:53:23.775660 containerd[1579]: time="2025-07-07T00:53:23.775298046Z" level=info msg="CreateContainer within sandbox \"bbce5f5df4a05b88097f50a890cee2b27bcb58b96a36a6498632bbed2b129571\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jul 7 00:53:23.804363 containerd[1579]: time="2025-07-07T00:53:23.804207926Z" level=info msg="CreateContainer within sandbox \"bbce5f5df4a05b88097f50a890cee2b27bcb58b96a36a6498632bbed2b129571\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"02b570e0404bb16019a3a32027dea1b11c8295025a4b8bbf70539c67df44e9c0\""
Jul 7 00:53:23.807045 containerd[1579]: time="2025-07-07T00:53:23.806962458Z" level=info msg="StartContainer for \"02b570e0404bb16019a3a32027dea1b11c8295025a4b8bbf70539c67df44e9c0\""
Jul 7 00:53:23.926991 containerd[1579]: time="2025-07-07T00:53:23.926928014Z" level=info msg="StartContainer for \"02b570e0404bb16019a3a32027dea1b11c8295025a4b8bbf70539c67df44e9c0\" returns successfully"
Jul 7 00:53:25.415500 kubelet[2793]: E0707 00:53:25.414472 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zql2q" podUID="c53a8470-3943-407f-8401-5976894cd214"
Jul 7 00:53:26.295129 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02b570e0404bb16019a3a32027dea1b11c8295025a4b8bbf70539c67df44e9c0-rootfs.mount: Deactivated successfully.
Jul 7 00:53:26.335749 kubelet[2793]: I0707 00:53:26.335695 2793 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 7 00:53:26.763717 kubelet[2793]: I0707 00:53:26.763572 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhznh\" (UniqueName: \"kubernetes.io/projected/a5b425c3-bad4-4558-89be-6136a807f762-kube-api-access-jhznh\") pod \"coredns-7c65d6cfc9-92wpl\" (UID: \"a5b425c3-bad4-4558-89be-6136a807f762\") " pod="kube-system/coredns-7c65d6cfc9-92wpl"
Jul 7 00:53:26.763717 kubelet[2793]: I0707 00:53:26.763636 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42nck\" (UniqueName: \"kubernetes.io/projected/1946b93a-1ccd-4010-b1de-ece39cb252ae-kube-api-access-42nck\") pod \"calico-apiserver-667d8f9c7b-jbw72\" (UID: \"1946b93a-1ccd-4010-b1de-ece39cb252ae\") " pod="calico-apiserver/calico-apiserver-667d8f9c7b-jbw72"
Jul 7 00:53:26.763717 kubelet[2793]: I0707 00:53:26.763662 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5b425c3-bad4-4558-89be-6136a807f762-config-volume\") pod \"coredns-7c65d6cfc9-92wpl\" (UID: \"a5b425c3-bad4-4558-89be-6136a807f762\") " pod="kube-system/coredns-7c65d6cfc9-92wpl"
Jul 7 00:53:26.763717 kubelet[2793]: I0707 00:53:26.763685 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21eff24d-2230-403e-a20d-c63a9466fe87-whisker-ca-bundle\") pod \"whisker-5f6cfcc6f6-xs6qr\" (UID: \"21eff24d-2230-403e-a20d-c63a9466fe87\") " pod="calico-system/whisker-5f6cfcc6f6-xs6qr"
Jul 7 00:53:26.763717 kubelet[2793]: I0707 00:53:26.763708 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg4n6\" (UniqueName: \"kubernetes.io/projected/9df690de-c33d-44aa-bf8e-790d93d78321-kube-api-access-sg4n6\") pod \"calico-kube-controllers-745c5b8f57-jgbmg\" (UID: \"9df690de-c33d-44aa-bf8e-790d93d78321\") " pod="calico-system/calico-kube-controllers-745c5b8f57-jgbmg"
Jul 7 00:53:26.887846 kubelet[2793]: I0707 00:53:26.763755 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl58w\" (UniqueName: \"kubernetes.io/projected/278a2c58-53c9-4e5b-8c5e-0178026a9170-kube-api-access-nl58w\") pod \"goldmane-58fd7646b9-ffzjz\" (UID: \"278a2c58-53c9-4e5b-8c5e-0178026a9170\") " pod="calico-system/goldmane-58fd7646b9-ffzjz"
Jul 7 00:53:26.887846 kubelet[2793]: I0707 00:53:26.763783 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9df690de-c33d-44aa-bf8e-790d93d78321-tigera-ca-bundle\") pod \"calico-kube-controllers-745c5b8f57-jgbmg\" (UID: \"9df690de-c33d-44aa-bf8e-790d93d78321\") " pod="calico-system/calico-kube-controllers-745c5b8f57-jgbmg"
Jul 7 00:53:26.887846 kubelet[2793]: I0707 00:53:26.763814 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/278a2c58-53c9-4e5b-8c5e-0178026a9170-config\") pod \"goldmane-58fd7646b9-ffzjz\" (UID: \"278a2c58-53c9-4e5b-8c5e-0178026a9170\") " pod="calico-system/goldmane-58fd7646b9-ffzjz"
Jul 7 00:53:26.887846 kubelet[2793]: I0707 00:53:26.763866 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/278a2c58-53c9-4e5b-8c5e-0178026a9170-goldmane-key-pair\") pod \"goldmane-58fd7646b9-ffzjz\" (UID: \"278a2c58-53c9-4e5b-8c5e-0178026a9170\") " pod="calico-system/goldmane-58fd7646b9-ffzjz"
Jul 7 00:53:26.887846 kubelet[2793]: I0707 00:53:26.763889 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8a1c7885-17d4-45e1-bbd0-9b5b19862e2d-calico-apiserver-certs\") pod \"calico-apiserver-667d8f9c7b-s8qd4\" (UID: \"8a1c7885-17d4-45e1-bbd0-9b5b19862e2d\") " pod="calico-apiserver/calico-apiserver-667d8f9c7b-s8qd4"
Jul 7 00:53:26.888311 kubelet[2793]: I0707 00:53:26.763921 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/21eff24d-2230-403e-a20d-c63a9466fe87-whisker-backend-key-pair\") pod \"whisker-5f6cfcc6f6-xs6qr\" (UID: \"21eff24d-2230-403e-a20d-c63a9466fe87\") " pod="calico-system/whisker-5f6cfcc6f6-xs6qr"
Jul 7 00:53:26.888311 kubelet[2793]: I0707 00:53:26.763950 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8f8n\" (UniqueName: \"kubernetes.io/projected/36a72e2c-f519-4613-b65a-5c98b45d54b9-kube-api-access-g8f8n\") pod \"coredns-7c65d6cfc9-ncwdh\" (UID: \"36a72e2c-f519-4613-b65a-5c98b45d54b9\") " pod="kube-system/coredns-7c65d6cfc9-ncwdh"
Jul 7 00:53:26.888311 kubelet[2793]: I0707 00:53:26.763978 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1946b93a-1ccd-4010-b1de-ece39cb252ae-calico-apiserver-certs\") pod \"calico-apiserver-667d8f9c7b-jbw72\" (UID: \"1946b93a-1ccd-4010-b1de-ece39cb252ae\") " pod="calico-apiserver/calico-apiserver-667d8f9c7b-jbw72"
Jul 7 00:53:26.888311 kubelet[2793]: I0707 00:53:26.764012 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/278a2c58-53c9-4e5b-8c5e-0178026a9170-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-ffzjz\" (UID: \"278a2c58-53c9-4e5b-8c5e-0178026a9170\") " pod="calico-system/goldmane-58fd7646b9-ffzjz"
Jul 7 00:53:26.888311 kubelet[2793]: I0707 00:53:26.764045 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx4xr\" (UniqueName: \"kubernetes.io/projected/8a1c7885-17d4-45e1-bbd0-9b5b19862e2d-kube-api-access-nx4xr\") pod \"calico-apiserver-667d8f9c7b-s8qd4\" (UID: \"8a1c7885-17d4-45e1-bbd0-9b5b19862e2d\") " pod="calico-apiserver/calico-apiserver-667d8f9c7b-s8qd4"
Jul 7 00:53:26.897120 kubelet[2793]: I0707 00:53:26.764064 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crzb4\" (UniqueName: \"kubernetes.io/projected/21eff24d-2230-403e-a20d-c63a9466fe87-kube-api-access-crzb4\") pod \"whisker-5f6cfcc6f6-xs6qr\" (UID: \"21eff24d-2230-403e-a20d-c63a9466fe87\") " pod="calico-system/whisker-5f6cfcc6f6-xs6qr"
Jul 7 00:53:26.897120 kubelet[2793]: I0707 00:53:26.764093 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36a72e2c-f519-4613-b65a-5c98b45d54b9-config-volume\") pod \"coredns-7c65d6cfc9-ncwdh\" (UID: \"36a72e2c-f519-4613-b65a-5c98b45d54b9\") " pod="kube-system/coredns-7c65d6cfc9-ncwdh"
Jul 7 00:53:26.913092 containerd[1579]: time="2025-07-07T00:53:26.912611709Z" level=info msg="shim disconnected" id=02b570e0404bb16019a3a32027dea1b11c8295025a4b8bbf70539c67df44e9c0 namespace=k8s.io
Jul 7 00:53:26.913092 containerd[1579]: time="2025-07-07T00:53:26.913096572Z" level=warning msg="cleaning up after shim disconnected" id=02b570e0404bb16019a3a32027dea1b11c8295025a4b8bbf70539c67df44e9c0 namespace=k8s.io
Jul 7 00:53:26.920254 containerd[1579]: time="2025-07-07T00:53:26.913178096Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:53:27.207595 containerd[1579]: time="2025-07-07T00:53:27.205059056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-745c5b8f57-jgbmg,Uid:9df690de-c33d-44aa-bf8e-790d93d78321,Namespace:calico-system,Attempt:0,}"
Jul 7 00:53:27.235297 containerd[1579]: time="2025-07-07T00:53:27.235190119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-92wpl,Uid:a5b425c3-bad4-4558-89be-6136a807f762,Namespace:kube-system,Attempt:0,}"
Jul 7 00:53:27.243647 containerd[1579]: time="2025-07-07T00:53:27.243485717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-667d8f9c7b-jbw72,Uid:1946b93a-1ccd-4010-b1de-ece39cb252ae,Namespace:calico-apiserver,Attempt:0,}"
Jul 7 00:53:27.245443 containerd[1579]: time="2025-07-07T00:53:27.244538039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-ffzjz,Uid:278a2c58-53c9-4e5b-8c5e-0178026a9170,Namespace:calico-system,Attempt:0,}"
Jul 7 00:53:27.248634 containerd[1579]: time="2025-07-07T00:53:27.248250272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-667d8f9c7b-s8qd4,Uid:8a1c7885-17d4-45e1-bbd0-9b5b19862e2d,Namespace:calico-apiserver,Attempt:0,}"
Jul 7 00:53:27.250149 containerd[1579]: time="2025-07-07T00:53:27.249954463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f6cfcc6f6-xs6qr,Uid:21eff24d-2230-403e-a20d-c63a9466fe87,Namespace:calico-system,Attempt:0,}"
Jul 7 00:53:27.314388 containerd[1579]: time="2025-07-07T00:53:27.310986267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ncwdh,Uid:36a72e2c-f519-4613-b65a-5c98b45d54b9,Namespace:kube-system,Attempt:0,}"
Jul 7 00:53:27.406751 kubelet[2793]: I0707 00:53:27.406681 2793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 00:53:27.422465 containerd[1579]: time="2025-07-07T00:53:27.422391418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zql2q,Uid:c53a8470-3943-407f-8401-5976894cd214,Namespace:calico-system,Attempt:0,}"
Jul 7 00:53:27.536664 containerd[1579]: time="2025-07-07T00:53:27.536573159Z" level=error msg="Failed to destroy network for sandbox \"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:53:27.537897 containerd[1579]: time="2025-07-07T00:53:27.537862889Z" level=error msg="encountered an error cleaning up failed sandbox \"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.538076 containerd[1579]: time="2025-07-07T00:53:27.538046575Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-745c5b8f57-jgbmg,Uid:9df690de-c33d-44aa-bf8e-790d93d78321,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.539338 kubelet[2793]: E0707 00:53:27.539236 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.539628 kubelet[2793]: E0707 00:53:27.539592 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-745c5b8f57-jgbmg" Jul 7 00:53:27.539921 kubelet[2793]: E0707 00:53:27.539647 2793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-745c5b8f57-jgbmg" Jul 7 00:53:27.539921 kubelet[2793]: E0707 00:53:27.539707 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-745c5b8f57-jgbmg_calico-system(9df690de-c33d-44aa-bf8e-790d93d78321)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-745c5b8f57-jgbmg_calico-system(9df690de-c33d-44aa-bf8e-790d93d78321)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-745c5b8f57-jgbmg" podUID="9df690de-c33d-44aa-bf8e-790d93d78321" Jul 7 00:53:27.673939 containerd[1579]: time="2025-07-07T00:53:27.672502380Z" level=error msg="Failed to destroy network for sandbox \"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.673939 containerd[1579]: time="2025-07-07T00:53:27.673017901Z" level=error msg="encountered an error cleaning up failed sandbox 
\"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.673939 containerd[1579]: time="2025-07-07T00:53:27.673098493Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-92wpl,Uid:a5b425c3-bad4-4558-89be-6136a807f762,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.675043 kubelet[2793]: E0707 00:53:27.673450 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.675043 kubelet[2793]: E0707 00:53:27.673513 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-92wpl" Jul 7 00:53:27.675043 kubelet[2793]: E0707 00:53:27.673536 2793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-92wpl" Jul 7 00:53:27.676228 kubelet[2793]: E0707 00:53:27.673619 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-92wpl_kube-system(a5b425c3-bad4-4558-89be-6136a807f762)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-92wpl_kube-system(a5b425c3-bad4-4558-89be-6136a807f762)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-92wpl" podUID="a5b425c3-bad4-4558-89be-6136a807f762" Jul 7 00:53:27.684113 containerd[1579]: time="2025-07-07T00:53:27.682383874Z" level=error msg="Failed to destroy network for sandbox \"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.685680 containerd[1579]: time="2025-07-07T00:53:27.685643586Z" level=error 
msg="encountered an error cleaning up failed sandbox \"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.685760 containerd[1579]: time="2025-07-07T00:53:27.685702296Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f6cfcc6f6-xs6qr,Uid:21eff24d-2230-403e-a20d-c63a9466fe87,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.688122 kubelet[2793]: E0707 00:53:27.688078 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.688224 kubelet[2793]: E0707 00:53:27.688137 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f6cfcc6f6-xs6qr" Jul 7 00:53:27.688224 kubelet[2793]: E0707 00:53:27.688160 2793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f6cfcc6f6-xs6qr" Jul 7 00:53:27.688224 kubelet[2793]: E0707 00:53:27.688202 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5f6cfcc6f6-xs6qr_calico-system(21eff24d-2230-403e-a20d-c63a9466fe87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5f6cfcc6f6-xs6qr_calico-system(21eff24d-2230-403e-a20d-c63a9466fe87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f6cfcc6f6-xs6qr" podUID="21eff24d-2230-403e-a20d-c63a9466fe87" Jul 7 00:53:27.704691 kubelet[2793]: I0707 00:53:27.704636 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" Jul 7 00:53:27.713706 containerd[1579]: time="2025-07-07T00:53:27.713642091Z" level=info msg="StopPodSandbox for \"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba\"" Jul 7 00:53:27.713945 
containerd[1579]: time="2025-07-07T00:53:27.713911007Z" level=info msg="Ensure that sandbox e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba in task-service has been cleanup successfully" Jul 7 00:53:27.729260 containerd[1579]: time="2025-07-07T00:53:27.729204428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 00:53:27.740012 kubelet[2793]: I0707 00:53:27.739970 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" Jul 7 00:53:27.742501 containerd[1579]: time="2025-07-07T00:53:27.742267717Z" level=info msg="StopPodSandbox for \"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0\"" Jul 7 00:53:27.747687 containerd[1579]: time="2025-07-07T00:53:27.746120014Z" level=info msg="Ensure that sandbox 203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0 in task-service has been cleanup successfully" Jul 7 00:53:27.753261 kubelet[2793]: I0707 00:53:27.753224 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" Jul 7 00:53:27.756980 containerd[1579]: time="2025-07-07T00:53:27.756164716Z" level=error msg="Failed to destroy network for sandbox \"148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.758257 containerd[1579]: time="2025-07-07T00:53:27.758081447Z" level=info msg="StopPodSandbox for \"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210\"" Jul 7 00:53:27.758655 containerd[1579]: time="2025-07-07T00:53:27.758631744Z" level=info msg="Ensure that sandbox be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210 in task-service has been cleanup successfully" Jul 7 00:53:27.763533 containerd[1579]: time="2025-07-07T00:53:27.763483625Z" level=error msg="encountered an error cleaning up failed sandbox \"148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.763763 containerd[1579]: time="2025-07-07T00:53:27.763733215Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-667d8f9c7b-jbw72,Uid:1946b93a-1ccd-4010-b1de-ece39cb252ae,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.766438 kubelet[2793]: E0707 00:53:27.766387 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.766822 kubelet[2793]: E0707 00:53:27.766455 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-667d8f9c7b-jbw72" Jul 7 00:53:27.766822 kubelet[2793]: E0707 00:53:27.766483 2793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-667d8f9c7b-jbw72" Jul 7 00:53:27.766822 kubelet[2793]: E0707 00:53:27.766539 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-667d8f9c7b-jbw72_calico-apiserver(1946b93a-1ccd-4010-b1de-ece39cb252ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-667d8f9c7b-jbw72_calico-apiserver(1946b93a-1ccd-4010-b1de-ece39cb252ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-667d8f9c7b-jbw72" podUID="1946b93a-1ccd-4010-b1de-ece39cb252ae" Jul 7 00:53:27.833486 containerd[1579]: time="2025-07-07T00:53:27.832574832Z" level=error msg="Failed to destroy network for sandbox \"862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.833486 containerd[1579]: time="2025-07-07T00:53:27.833020361Z" level=error msg="Failed to destroy network for sandbox \"0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.834003 containerd[1579]: time="2025-07-07T00:53:27.833979868Z" level=error msg="encountered an error cleaning up failed sandbox \"0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.834376 containerd[1579]: time="2025-07-07T00:53:27.834039070Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-ffzjz,Uid:278a2c58-53c9-4e5b-8c5e-0178026a9170,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.836205 kubelet[2793]: E0707 00:53:27.834550 2793 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.836205 kubelet[2793]: E0707 00:53:27.835210 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-ffzjz" Jul 7 00:53:27.836205 kubelet[2793]: E0707 00:53:27.835269 2793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-ffzjz" Jul 7 00:53:27.837028 containerd[1579]: time="2025-07-07T00:53:27.835235384Z" level=error msg="StopPodSandbox for \"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba\" failed" error="failed to destroy network for sandbox \"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.837076 kubelet[2793]: E0707 00:53:27.835464 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-ffzjz_calico-system(278a2c58-53c9-4e5b-8c5e-0178026a9170)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-ffzjz_calico-system(278a2c58-53c9-4e5b-8c5e-0178026a9170)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-ffzjz" podUID="278a2c58-53c9-4e5b-8c5e-0178026a9170" Jul 7 00:53:27.840026 kubelet[2793]: E0707 00:53:27.839853 2793 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" Jul 7 00:53:27.840026 kubelet[2793]: E0707 00:53:27.839949 2793 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba"} Jul 7 00:53:27.840583 kubelet[2793]: E0707 00:53:27.840060 2793 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to 
\"KillPodSandbox\" for \"21eff24d-2230-403e-a20d-c63a9466fe87\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:53:27.840583 kubelet[2793]: E0707 00:53:27.840100 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"21eff24d-2230-403e-a20d-c63a9466fe87\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f6cfcc6f6-xs6qr" podUID="21eff24d-2230-403e-a20d-c63a9466fe87" Jul 7 00:53:27.840737 containerd[1579]: time="2025-07-07T00:53:27.840270920Z" level=error msg="encountered an error cleaning up failed sandbox \"862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.841570 containerd[1579]: time="2025-07-07T00:53:27.840563041Z" level=error msg="Failed to destroy network for sandbox \"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.841570 containerd[1579]: time="2025-07-07T00:53:27.841486710Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-667d8f9c7b-s8qd4,Uid:8a1c7885-17d4-45e1-bbd0-9b5b19862e2d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.842520 kubelet[2793]: E0707 00:53:27.842479 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.842673 kubelet[2793]: E0707 00:53:27.842633 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-667d8f9c7b-s8qd4" Jul 7 00:53:27.843111 kubelet[2793]: E0707 00:53:27.842676 2793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-667d8f9c7b-s8qd4" Jul 7 00:53:27.843111 kubelet[2793]: E0707 00:53:27.842820 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-667d8f9c7b-s8qd4_calico-apiserver(8a1c7885-17d4-45e1-bbd0-9b5b19862e2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-667d8f9c7b-s8qd4_calico-apiserver(8a1c7885-17d4-45e1-bbd0-9b5b19862e2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-667d8f9c7b-s8qd4" podUID="8a1c7885-17d4-45e1-bbd0-9b5b19862e2d" Jul 7 00:53:27.847166 containerd[1579]: time="2025-07-07T00:53:27.846892895Z" level=error msg="encountered an error cleaning up failed sandbox \"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.847359 containerd[1579]: time="2025-07-07T00:53:27.847192880Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zql2q,Uid:c53a8470-3943-407f-8401-5976894cd214,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.848370 kubelet[2793]: E0707 00:53:27.848122 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.848370 kubelet[2793]: E0707 00:53:27.848194 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zql2q" Jul 7 00:53:27.848370 kubelet[2793]: E0707 00:53:27.848224 2793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-zql2q" Jul 7 00:53:27.848569 kubelet[2793]: E0707 00:53:27.848283 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zql2q_calico-system(c53a8470-3943-407f-8401-5976894cd214)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zql2q_calico-system(c53a8470-3943-407f-8401-5976894cd214)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zql2q" podUID="c53a8470-3943-407f-8401-5976894cd214" Jul 7 00:53:27.857681 containerd[1579]: time="2025-07-07T00:53:27.857241320Z" level=error msg="Failed to destroy network for sandbox \"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.858668 containerd[1579]: time="2025-07-07T00:53:27.858506694Z" level=error msg="encountered an error cleaning up failed sandbox \"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.858841 containerd[1579]: time="2025-07-07T00:53:27.858691241Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ncwdh,Uid:36a72e2c-f519-4613-b65a-5c98b45d54b9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.859462 kubelet[2793]: E0707 00:53:27.859173 2793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.859462 kubelet[2793]: E0707 00:53:27.859259 2793 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-ncwdh" Jul 7 00:53:27.859462 kubelet[2793]: E0707 00:53:27.859283 2793 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-ncwdh" Jul 7 00:53:27.859639 kubelet[2793]: E0707 00:53:27.859339 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-ncwdh_kube-system(36a72e2c-f519-4613-b65a-5c98b45d54b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-ncwdh_kube-system(36a72e2c-f519-4613-b65a-5c98b45d54b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-ncwdh" podUID="36a72e2c-f519-4613-b65a-5c98b45d54b9" Jul 7 00:53:27.873318 containerd[1579]: time="2025-07-07T00:53:27.873174265Z" level=error msg="StopPodSandbox for \"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210\" failed" error="failed to destroy network for sandbox \"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.873572 kubelet[2793]: E0707 00:53:27.873445 2793 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" Jul 7 00:53:27.873572 kubelet[2793]: E0707 00:53:27.873508 2793 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210"} Jul 7 00:53:27.873572 kubelet[2793]: E0707 00:53:27.873550 2793 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9df690de-c33d-44aa-bf8e-790d93d78321\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:53:27.873945 kubelet[2793]: E0707 00:53:27.873577 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9df690de-c33d-44aa-bf8e-790d93d78321\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-745c5b8f57-jgbmg" podUID="9df690de-c33d-44aa-bf8e-790d93d78321" Jul 7 00:53:27.879240 containerd[1579]: time="2025-07-07T00:53:27.879107372Z" level=error msg="StopPodSandbox for \"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0\" failed" error="failed to destroy 
network for sandbox \"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:27.879373 kubelet[2793]: E0707 00:53:27.879331 2793 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" Jul 7 00:53:27.879430 kubelet[2793]: E0707 00:53:27.879388 2793 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0"} Jul 7 00:53:27.879430 kubelet[2793]: E0707 00:53:27.879420 2793 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a5b425c3-bad4-4558-89be-6136a807f762\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:53:27.879547 kubelet[2793]: E0707 00:53:27.879443 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a5b425c3-bad4-4558-89be-6136a807f762\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-92wpl" podUID="a5b425c3-bad4-4558-89be-6136a807f762" Jul 7 00:53:28.301679 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3-shm.mount: Deactivated successfully. Jul 7 00:53:28.302220 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0-shm.mount: Deactivated successfully. Jul 7 00:53:28.302669 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210-shm.mount: Deactivated successfully. 
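By this point several sandboxes (be242c6b…, 203b493b…, e1ad3fa9…, 148b7541…, 862b2255…, 0377049a…, 3bbf82d4…, 6d762938…) are cycling through the same create/teardown failure, and the log repeats an essentially identical error per sandbox ID. A small hypothetical helper, assuming the journal text is available on stdin (the helper is mine, not part of this system), that tallies teardown failures per sandbox to show which pods are stuck in the StopPodSandbox retry loop:

// Hypothetical helper (not part of this system): tally how often each
// sandbox ID fails teardown with the calico/nodename error, to show
// which pods are stuck in the StopPodSandbox retry loop.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches the sandbox ID inside: failed to destroy network for sandbox \"<64 hex chars>\"
	re := regexp.MustCompile(`failed to destroy network for sandbox \\?"([0-9a-f]{64})\\?"`)

	counts := make(map[string]int)
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1<<20), 1<<20) // journal lines here are very long

	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]]++
		}
	}
	for id, n := range counts {
		fmt.Printf("%s: %d failed teardowns\n", id[:12], n)
	}
}

Run as, e.g., "go run tally.go < journal.txt"; the per-ID counts make it easy to see that the retries at 00:53:28 and again at 00:53:40 below hit the same sandboxes rather than new ones.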
Jul 7 00:53:28.773144 kubelet[2793]: I0707 00:53:28.772958 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" Jul 7 00:53:28.777851 kubelet[2793]: I0707 00:53:28.776787 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" Jul 7 00:53:28.780928 containerd[1579]: time="2025-07-07T00:53:28.780818494Z" level=info msg="StopPodSandbox for \"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c\"" Jul 7 00:53:28.784770 containerd[1579]: time="2025-07-07T00:53:28.782025087Z" level=info msg="Ensure that sandbox 6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c in task-service has been cleanup successfully" Jul 7 00:53:28.784770 containerd[1579]: time="2025-07-07T00:53:28.781168053Z" level=info msg="StopPodSandbox for \"148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d\"" Jul 7 00:53:28.787125 containerd[1579]: time="2025-07-07T00:53:28.785843820Z" level=info msg="Ensure that sandbox 148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d in task-service has been cleanup successfully" Jul 7 00:53:28.797544 kubelet[2793]: I0707 00:53:28.797185 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" Jul 7 00:53:28.811009 containerd[1579]: time="2025-07-07T00:53:28.807482680Z" level=info msg="StopPodSandbox for \"862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6\"" Jul 7 00:53:28.811643 kubelet[2793]: I0707 00:53:28.811594 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" Jul 7 00:53:28.812375 containerd[1579]: time="2025-07-07T00:53:28.812131977Z" level=info msg="Ensure that sandbox 862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6 in task-service has been cleanup successfully" Jul 7 00:53:28.820015 containerd[1579]: time="2025-07-07T00:53:28.819869081Z" level=info msg="StopPodSandbox for \"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72\"" Jul 7 00:53:28.823849 containerd[1579]: time="2025-07-07T00:53:28.823775500Z" level=info msg="Ensure that sandbox 3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72 in task-service has been cleanup successfully" Jul 7 00:53:28.829448 kubelet[2793]: I0707 00:53:28.828294 2793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" Jul 7 00:53:28.830951 containerd[1579]: time="2025-07-07T00:53:28.830918795Z" level=info msg="StopPodSandbox for \"0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3\"" Jul 7 00:53:28.835253 containerd[1579]: time="2025-07-07T00:53:28.835151188Z" level=info msg="Ensure that sandbox 0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3 in task-service has been cleanup successfully" Jul 7 00:53:28.905278 containerd[1579]: time="2025-07-07T00:53:28.905208759Z" level=error msg="StopPodSandbox for \"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c\" failed" error="failed to destroy network for sandbox \"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 7 00:53:28.907969 kubelet[2793]: E0707 00:53:28.905796 2793 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" Jul 7 00:53:28.907969 kubelet[2793]: E0707 00:53:28.906219 2793 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c"} Jul 7 00:53:28.907969 kubelet[2793]: E0707 00:53:28.906322 2793 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"36a72e2c-f519-4613-b65a-5c98b45d54b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:53:28.907969 kubelet[2793]: E0707 00:53:28.906400 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"36a72e2c-f519-4613-b65a-5c98b45d54b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-ncwdh" podUID="36a72e2c-f519-4613-b65a-5c98b45d54b9" Jul 7 00:53:28.910405 containerd[1579]: time="2025-07-07T00:53:28.909031109Z" level=error msg="StopPodSandbox for \"862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6\" failed" error="failed to destroy network for sandbox \"862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:28.910647 containerd[1579]: time="2025-07-07T00:53:28.910238934Z" level=error msg="StopPodSandbox for \"148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d\" failed" error="failed to destroy network for sandbox \"148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:28.912611 kubelet[2793]: E0707 00:53:28.912578 2793 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" Jul 7 00:53:28.912839 kubelet[2793]: E0707 
00:53:28.912817 2793 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d"} Jul 7 00:53:28.912993 kubelet[2793]: E0707 00:53:28.912972 2793 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1946b93a-1ccd-4010-b1de-ece39cb252ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:53:28.913280 kubelet[2793]: E0707 00:53:28.913142 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1946b93a-1ccd-4010-b1de-ece39cb252ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-667d8f9c7b-jbw72" podUID="1946b93a-1ccd-4010-b1de-ece39cb252ae" Jul 7 00:53:28.913280 kubelet[2793]: E0707 00:53:28.912743 2793 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" Jul 7 00:53:28.913280 kubelet[2793]: E0707 00:53:28.913206 2793 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6"} Jul 7 00:53:28.913280 kubelet[2793]: E0707 00:53:28.913231 2793 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8a1c7885-17d4-45e1-bbd0-9b5b19862e2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:53:28.913640 kubelet[2793]: E0707 00:53:28.913254 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8a1c7885-17d4-45e1-bbd0-9b5b19862e2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-667d8f9c7b-s8qd4" podUID="8a1c7885-17d4-45e1-bbd0-9b5b19862e2d" Jul 7 00:53:28.921078 containerd[1579]: time="2025-07-07T00:53:28.921011746Z" level=error msg="StopPodSandbox for \"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72\" 
failed" error="failed to destroy network for sandbox \"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:28.921391 kubelet[2793]: E0707 00:53:28.921323 2793 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" Jul 7 00:53:28.921531 kubelet[2793]: E0707 00:53:28.921407 2793 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72"} Jul 7 00:53:28.921531 kubelet[2793]: E0707 00:53:28.921450 2793 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c53a8470-3943-407f-8401-5976894cd214\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:53:28.921531 kubelet[2793]: E0707 00:53:28.921481 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c53a8470-3943-407f-8401-5976894cd214\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zql2q" podUID="c53a8470-3943-407f-8401-5976894cd214" Jul 7 00:53:28.923125 containerd[1579]: time="2025-07-07T00:53:28.922757323Z" level=error msg="StopPodSandbox for \"0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3\" failed" error="failed to destroy network for sandbox \"0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:28.923249 kubelet[2793]: E0707 00:53:28.922957 2793 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" Jul 7 00:53:28.923249 kubelet[2793]: E0707 00:53:28.923015 2793 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3"} Jul 7 00:53:28.923249 kubelet[2793]: E0707 00:53:28.923054 
2793 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"278a2c58-53c9-4e5b-8c5e-0178026a9170\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:53:28.923249 kubelet[2793]: E0707 00:53:28.923082 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"278a2c58-53c9-4e5b-8c5e-0178026a9170\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-ffzjz" podUID="278a2c58-53c9-4e5b-8c5e-0178026a9170" Jul 7 00:53:40.417694 containerd[1579]: time="2025-07-07T00:53:40.416866146Z" level=info msg="StopPodSandbox for \"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0\"" Jul 7 00:53:40.428822 containerd[1579]: time="2025-07-07T00:53:40.427253859Z" level=info msg="StopPodSandbox for \"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210\"" Jul 7 00:53:40.430434 containerd[1579]: time="2025-07-07T00:53:40.429267257Z" level=info msg="StopPodSandbox for \"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c\"" Jul 7 00:53:40.434617 containerd[1579]: time="2025-07-07T00:53:40.430603050Z" level=info msg="StopPodSandbox for \"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba\"" Jul 7 00:53:40.611325 containerd[1579]: time="2025-07-07T00:53:40.610918074Z" level=error msg="StopPodSandbox for \"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0\" failed" error="failed to destroy network for sandbox \"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:40.612415 kubelet[2793]: E0707 00:53:40.612317 2793 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" Jul 7 00:53:40.613099 kubelet[2793]: E0707 00:53:40.612455 2793 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0"} Jul 7 00:53:40.613099 kubelet[2793]: E0707 00:53:40.612521 2793 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a5b425c3-bad4-4558-89be-6136a807f762\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:53:40.613099 kubelet[2793]: E0707 00:53:40.612565 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a5b425c3-bad4-4558-89be-6136a807f762\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-92wpl" podUID="a5b425c3-bad4-4558-89be-6136a807f762" Jul 7 00:53:40.615917 kubelet[2793]: E0707 00:53:40.615495 2793 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" Jul 7 00:53:40.615917 kubelet[2793]: E0707 00:53:40.615530 2793 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba"} Jul 7 00:53:40.615917 kubelet[2793]: E0707 00:53:40.615578 2793 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"21eff24d-2230-403e-a20d-c63a9466fe87\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:53:40.615917 kubelet[2793]: E0707 00:53:40.615608 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"21eff24d-2230-403e-a20d-c63a9466fe87\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f6cfcc6f6-xs6qr" podUID="21eff24d-2230-403e-a20d-c63a9466fe87" Jul 7 00:53:40.616166 containerd[1579]: time="2025-07-07T00:53:40.615130349Z" level=error msg="StopPodSandbox for \"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba\" failed" error="failed to destroy network for sandbox \"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:40.627681 containerd[1579]: time="2025-07-07T00:53:40.626731175Z" level=error msg="StopPodSandbox for \"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c\" failed" error="failed to destroy network for sandbox \"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c\": plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:40.628023 kubelet[2793]: E0707 00:53:40.627509 2793 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" Jul 7 00:53:40.628023 kubelet[2793]: E0707 00:53:40.627594 2793 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c"} Jul 7 00:53:40.628023 kubelet[2793]: E0707 00:53:40.627646 2793 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"36a72e2c-f519-4613-b65a-5c98b45d54b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:53:40.628610 kubelet[2793]: E0707 00:53:40.628233 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"36a72e2c-f519-4613-b65a-5c98b45d54b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-ncwdh" podUID="36a72e2c-f519-4613-b65a-5c98b45d54b9" Jul 7 00:53:40.629280 containerd[1579]: time="2025-07-07T00:53:40.629176215Z" level=error msg="StopPodSandbox for \"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210\" failed" error="failed to destroy network for sandbox \"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:40.629611 kubelet[2793]: E0707 00:53:40.629416 2793 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" Jul 7 00:53:40.629611 kubelet[2793]: E0707 00:53:40.629471 2793 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210"} Jul 7 00:53:40.629611 kubelet[2793]: E0707 00:53:40.629506 2793 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9df690de-c33d-44aa-bf8e-790d93d78321\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:53:40.629611 kubelet[2793]: E0707 00:53:40.629527 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9df690de-c33d-44aa-bf8e-790d93d78321\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-745c5b8f57-jgbmg" podUID="9df690de-c33d-44aa-bf8e-790d93d78321" Jul 7 00:53:40.844043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount508392823.mount: Deactivated successfully. Jul 7 00:53:40.918434 containerd[1579]: time="2025-07-07T00:53:40.918383739Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:53:40.921168 containerd[1579]: time="2025-07-07T00:53:40.921126930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 7 00:53:40.923657 containerd[1579]: time="2025-07-07T00:53:40.923593130Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:53:40.929025 containerd[1579]: time="2025-07-07T00:53:40.928084550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:53:40.929025 containerd[1579]: time="2025-07-07T00:53:40.928858607Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 13.199591761s" Jul 7 00:53:40.929025 containerd[1579]: time="2025-07-07T00:53:40.928898411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 7 00:53:41.092010 containerd[1579]: time="2025-07-07T00:53:41.091805531Z" level=info msg="CreateContainer within sandbox \"bbce5f5df4a05b88097f50a890cee2b27bcb58b96a36a6498632bbed2b129571\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 00:53:41.308833 containerd[1579]: time="2025-07-07T00:53:41.308690756Z" level=info msg="CreateContainer within sandbox \"bbce5f5df4a05b88097f50a890cee2b27bcb58b96a36a6498632bbed2b129571\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fe49e3abea5c9a8ecd3dfea5ea09d90368ddf75700341bad407e06fc5a7a0714\"" Jul 7 00:53:41.310834 containerd[1579]: time="2025-07-07T00:53:41.310740592Z" level=info msg="StartContainer for \"fe49e3abea5c9a8ecd3dfea5ea09d90368ddf75700341bad407e06fc5a7a0714\"" Jul 
7 00:53:41.417960 containerd[1579]: time="2025-07-07T00:53:41.416325113Z" level=info msg="StopPodSandbox for \"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72\"" Jul 7 00:53:41.480404 containerd[1579]: time="2025-07-07T00:53:41.478623001Z" level=error msg="StopPodSandbox for \"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72\" failed" error="failed to destroy network for sandbox \"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:53:41.481437 kubelet[2793]: E0707 00:53:41.480826 2793 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" Jul 7 00:53:41.481437 kubelet[2793]: E0707 00:53:41.480912 2793 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72"} Jul 7 00:53:41.481437 kubelet[2793]: E0707 00:53:41.480962 2793 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c53a8470-3943-407f-8401-5976894cd214\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:53:41.481437 kubelet[2793]: E0707 00:53:41.481006 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c53a8470-3943-407f-8401-5976894cd214\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zql2q" podUID="c53a8470-3943-407f-8401-5976894cd214" Jul 7 00:53:41.518393 containerd[1579]: time="2025-07-07T00:53:41.515291723Z" level=info msg="StartContainer for \"fe49e3abea5c9a8ecd3dfea5ea09d90368ddf75700341bad407e06fc5a7a0714\" returns successfully" Jul 7 00:53:41.698729 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 00:53:41.699201 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
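
The bursts of KillPodSandbox failures above (at 00:53:28, 00:53:40, and 00:53:41) all share one root cause: the Calico CNI plugin cannot stat /var/lib/calico/nodename. That file is written by the calico/node container once it is running and has bind-mounted /var/lib/calico/ from the host; until the node image finished pulling (13.199591761s, per the PullImage record above) and the container started, every CNI delete failed and kubelet kept requeueing the pods. Below is a minimal Go sketch of that gating check, reconstructed only from the error text visible in the log; names such as determineNodename are illustrative assumptions, not the actual Calico plugin source.

    // Hypothetical reconstruction of the check behind the repeated
    // "stat /var/lib/calico/nodename" errors: the CNI binary refuses to
    // proceed until calico/node has written its node name to the host.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // nodenameFile is created by the calico/node container after it starts
    // and mounts /var/lib/calico/ from the host (assumption drawn from the
    // log's own hint: "check that the calico/node container is running").
    const nodenameFile = "/var/lib/calico/nodename"

    func determineNodename() (string, error) {
        data, err := os.ReadFile(nodenameFile)
        if err != nil {
            // Mirrors the log text: while calico/node is still being pulled,
            // every CNI DEL fails here and kubelet retries KillPodSandbox.
            return "", fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        name, err := determineNodename()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("calico nodename:", name)
    }

Consistent with that reading, the 00:53:41.478 retry above still fails (it races the node's startup, StartContainer only returning at 00:53:41.515), while the retried teardowns at 00:53:42 below complete without the nodename error.
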
Jul 7 00:53:41.890374 containerd[1579]: time="2025-07-07T00:53:41.887421081Z" level=info msg="StopPodSandbox for \"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba\"" Jul 7 00:53:41.958997 kubelet[2793]: I0707 00:53:41.958736 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vqpnj" podStartSLOduration=2.262497137 podStartE2EDuration="33.958656001s" podCreationTimestamp="2025-07-07 00:53:08 +0000 UTC" firstStartedPulling="2025-07-07 00:53:09.235256443 +0000 UTC m=+23.045935174" lastFinishedPulling="2025-07-07 00:53:40.931415307 +0000 UTC m=+54.742094038" observedRunningTime="2025-07-07 00:53:41.952265068 +0000 UTC m=+55.762943799" watchObservedRunningTime="2025-07-07 00:53:41.958656001 +0000 UTC m=+55.769334742" Jul 7 00:53:42.273994 containerd[1579]: 2025-07-07 00:53:42.152 [INFO][4102] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" Jul 7 00:53:42.273994 containerd[1579]: 2025-07-07 00:53:42.154 [INFO][4102] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" iface="eth0" netns="/var/run/netns/cni-97fc7e5c-d69d-1c0f-c873-9b7b4ccd1df0" Jul 7 00:53:42.273994 containerd[1579]: 2025-07-07 00:53:42.155 [INFO][4102] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" iface="eth0" netns="/var/run/netns/cni-97fc7e5c-d69d-1c0f-c873-9b7b4ccd1df0" Jul 7 00:53:42.273994 containerd[1579]: 2025-07-07 00:53:42.159 [INFO][4102] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" iface="eth0" netns="/var/run/netns/cni-97fc7e5c-d69d-1c0f-c873-9b7b4ccd1df0" Jul 7 00:53:42.273994 containerd[1579]: 2025-07-07 00:53:42.159 [INFO][4102] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" Jul 7 00:53:42.273994 containerd[1579]: 2025-07-07 00:53:42.159 [INFO][4102] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" Jul 7 00:53:42.273994 containerd[1579]: 2025-07-07 00:53:42.248 [INFO][4127] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" HandleID="k8s-pod-network.e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-whisker--5f6cfcc6f6--xs6qr-eth0" Jul 7 00:53:42.273994 containerd[1579]: 2025-07-07 00:53:42.250 [INFO][4127] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:53:42.273994 containerd[1579]: 2025-07-07 00:53:42.250 [INFO][4127] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:53:42.273994 containerd[1579]: 2025-07-07 00:53:42.263 [WARNING][4127] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" HandleID="k8s-pod-network.e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-whisker--5f6cfcc6f6--xs6qr-eth0" Jul 7 00:53:42.273994 containerd[1579]: 2025-07-07 00:53:42.263 [INFO][4127] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" HandleID="k8s-pod-network.e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-whisker--5f6cfcc6f6--xs6qr-eth0" Jul 7 00:53:42.273994 containerd[1579]: 2025-07-07 00:53:42.267 [INFO][4127] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:53:42.273994 containerd[1579]: 2025-07-07 00:53:42.271 [INFO][4102] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" Jul 7 00:53:42.275938 containerd[1579]: time="2025-07-07T00:53:42.275496336Z" level=info msg="TearDown network for sandbox \"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba\" successfully" Jul 7 00:53:42.275938 containerd[1579]: time="2025-07-07T00:53:42.275548665Z" level=info msg="StopPodSandbox for \"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba\" returns successfully" Jul 7 00:53:42.280971 systemd[1]: run-netns-cni\x2d97fc7e5c\x2dd69d\x2d1c0f\x2dc873\x2d9b7b4ccd1df0.mount: Deactivated successfully. Jul 7 00:53:42.329093 kubelet[2793]: I0707 00:53:42.327838 2793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crzb4\" (UniqueName: \"kubernetes.io/projected/21eff24d-2230-403e-a20d-c63a9466fe87-kube-api-access-crzb4\") pod \"21eff24d-2230-403e-a20d-c63a9466fe87\" (UID: \"21eff24d-2230-403e-a20d-c63a9466fe87\") " Jul 7 00:53:42.329093 kubelet[2793]: I0707 00:53:42.327927 2793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/21eff24d-2230-403e-a20d-c63a9466fe87-whisker-backend-key-pair\") pod \"21eff24d-2230-403e-a20d-c63a9466fe87\" (UID: \"21eff24d-2230-403e-a20d-c63a9466fe87\") " Jul 7 00:53:42.329093 kubelet[2793]: I0707 00:53:42.327964 2793 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21eff24d-2230-403e-a20d-c63a9466fe87-whisker-ca-bundle\") pod \"21eff24d-2230-403e-a20d-c63a9466fe87\" (UID: \"21eff24d-2230-403e-a20d-c63a9466fe87\") " Jul 7 00:53:42.329093 kubelet[2793]: I0707 00:53:42.328706 2793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21eff24d-2230-403e-a20d-c63a9466fe87-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "21eff24d-2230-403e-a20d-c63a9466fe87" (UID: "21eff24d-2230-403e-a20d-c63a9466fe87"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 7 00:53:42.336546 systemd[1]: var-lib-kubelet-pods-21eff24d\x2d2230\x2d403e\x2da20d\x2dc63a9466fe87-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcrzb4.mount: Deactivated successfully. 
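
With calico/node running, the whisker sandbox teardown above also shows the delete path's idempotence: the workload's veth is already gone ("Nothing to do"), and the IPAM plugin downgrades the missing allocation to a warning ("Asked to release address but it doesn't exist. Ignoring") instead of failing, so StopPodSandbox returns successfully and kubelet can go on to unmount the pod's projected token, secret, and configmap volumes. A rough Go sketch of that tolerant release step, inferred from the log rather than taken from the real ipam_plugin.go:

    // Hypothetical sketch of an idempotent CNI DEL: a missing IPAM
    // allocation is treated as a warning so teardown can still succeed.
    package main

    import (
        "errors"
        "fmt"
    )

    var errHandleNotFound = errors.New("ipam handle not found")

    // releaseByHandle stands in for the datastore call keyed by the
    // "k8s-pod-network.<containerID>" handles seen in the log; here it
    // simulates an allocation that was already cleaned up.
    func releaseByHandle(handleID string) error {
        return errHandleNotFound
    }

    func cmdDel(handleID string) error {
        if err := releaseByHandle(handleID); err != nil {
            if errors.Is(err, errHandleNotFound) {
                // Matches the spirit of the WARNING record: ignore and move on.
                fmt.Printf("WARNING: asked to release address but it doesn't exist; ignoring %s\n", handleID)
                return nil
            }
            return err
        }
        return nil
    }

    func main() {
        if err := cmdDel("k8s-pod-network.e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba"); err != nil {
            fmt.Println("teardown failed:", err)
        }
    }

The same pattern repeats for the two calico-apiserver sandboxes below (148b754... and 862b225...), after which the pods are recreated with Attempt:1 and receive fresh addresses from the node's 192.168.116.128/26 IPAM block.
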
Jul 7 00:53:42.341822 kubelet[2793]: I0707 00:53:42.339237 2793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21eff24d-2230-403e-a20d-c63a9466fe87-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "21eff24d-2230-403e-a20d-c63a9466fe87" (UID: "21eff24d-2230-403e-a20d-c63a9466fe87"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 7 00:53:42.341822 kubelet[2793]: I0707 00:53:42.341618 2793 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21eff24d-2230-403e-a20d-c63a9466fe87-kube-api-access-crzb4" (OuterVolumeSpecName: "kube-api-access-crzb4") pod "21eff24d-2230-403e-a20d-c63a9466fe87" (UID: "21eff24d-2230-403e-a20d-c63a9466fe87"). InnerVolumeSpecName "kube-api-access-crzb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 00:53:42.342057 systemd[1]: var-lib-kubelet-pods-21eff24d\x2d2230\x2d403e\x2da20d\x2dc63a9466fe87-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 7 00:53:42.418210 containerd[1579]: time="2025-07-07T00:53:42.415697289Z" level=info msg="StopPodSandbox for \"862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6\"" Jul 7 00:53:42.421409 containerd[1579]: time="2025-07-07T00:53:42.419541831Z" level=info msg="StopPodSandbox for \"148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d\"" Jul 7 00:53:42.432209 kubelet[2793]: I0707 00:53:42.428493 2793 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crzb4\" (UniqueName: \"kubernetes.io/projected/21eff24d-2230-403e-a20d-c63a9466fe87-kube-api-access-crzb4\") on node \"ci-4081-3-4-7-8dfaddf5bb.novalocal\" DevicePath \"\"" Jul 7 00:53:42.432209 kubelet[2793]: I0707 00:53:42.428563 2793 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/21eff24d-2230-403e-a20d-c63a9466fe87-whisker-backend-key-pair\") on node \"ci-4081-3-4-7-8dfaddf5bb.novalocal\" DevicePath \"\"" Jul 7 00:53:42.432209 kubelet[2793]: I0707 00:53:42.428593 2793 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21eff24d-2230-403e-a20d-c63a9466fe87-whisker-ca-bundle\") on node \"ci-4081-3-4-7-8dfaddf5bb.novalocal\" DevicePath \"\"" Jul 7 00:53:42.628499 containerd[1579]: 2025-07-07 00:53:42.569 [INFO][4164] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" Jul 7 00:53:42.628499 containerd[1579]: 2025-07-07 00:53:42.572 [INFO][4164] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" iface="eth0" netns="/var/run/netns/cni-ab51c32a-a260-ea54-5244-2b2fac21c292" Jul 7 00:53:42.628499 containerd[1579]: 2025-07-07 00:53:42.573 [INFO][4164] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" iface="eth0" netns="/var/run/netns/cni-ab51c32a-a260-ea54-5244-2b2fac21c292" Jul 7 00:53:42.628499 containerd[1579]: 2025-07-07 00:53:42.573 [INFO][4164] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" iface="eth0" netns="/var/run/netns/cni-ab51c32a-a260-ea54-5244-2b2fac21c292" Jul 7 00:53:42.628499 containerd[1579]: 2025-07-07 00:53:42.573 [INFO][4164] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" Jul 7 00:53:42.628499 containerd[1579]: 2025-07-07 00:53:42.573 [INFO][4164] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" Jul 7 00:53:42.628499 containerd[1579]: 2025-07-07 00:53:42.611 [INFO][4180] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" HandleID="k8s-pod-network.148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--jbw72-eth0" Jul 7 00:53:42.628499 containerd[1579]: 2025-07-07 00:53:42.612 [INFO][4180] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:53:42.628499 containerd[1579]: 2025-07-07 00:53:42.612 [INFO][4180] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:53:42.628499 containerd[1579]: 2025-07-07 00:53:42.621 [WARNING][4180] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" HandleID="k8s-pod-network.148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--jbw72-eth0" Jul 7 00:53:42.628499 containerd[1579]: 2025-07-07 00:53:42.621 [INFO][4180] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" HandleID="k8s-pod-network.148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--jbw72-eth0" Jul 7 00:53:42.628499 containerd[1579]: 2025-07-07 00:53:42.623 [INFO][4180] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:53:42.628499 containerd[1579]: 2025-07-07 00:53:42.625 [INFO][4164] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" Jul 7 00:53:42.631572 containerd[1579]: time="2025-07-07T00:53:42.631511076Z" level=info msg="TearDown network for sandbox \"148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d\" successfully" Jul 7 00:53:42.631726 containerd[1579]: time="2025-07-07T00:53:42.631657621Z" level=info msg="StopPodSandbox for \"148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d\" returns successfully" Jul 7 00:53:42.634025 containerd[1579]: time="2025-07-07T00:53:42.633663525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-667d8f9c7b-jbw72,Uid:1946b93a-1ccd-4010-b1de-ece39cb252ae,Namespace:calico-apiserver,Attempt:1,}" Jul 7 00:53:42.643852 containerd[1579]: 2025-07-07 00:53:42.560 [INFO][4165] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" Jul 7 00:53:42.643852 containerd[1579]: 2025-07-07 00:53:42.560 [INFO][4165] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" iface="eth0" netns="/var/run/netns/cni-08d06d25-e0a0-9928-dbed-0e60f8d53b5d" Jul 7 00:53:42.643852 containerd[1579]: 2025-07-07 00:53:42.562 [INFO][4165] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" iface="eth0" netns="/var/run/netns/cni-08d06d25-e0a0-9928-dbed-0e60f8d53b5d" Jul 7 00:53:42.643852 containerd[1579]: 2025-07-07 00:53:42.563 [INFO][4165] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" iface="eth0" netns="/var/run/netns/cni-08d06d25-e0a0-9928-dbed-0e60f8d53b5d" Jul 7 00:53:42.643852 containerd[1579]: 2025-07-07 00:53:42.563 [INFO][4165] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" Jul 7 00:53:42.643852 containerd[1579]: 2025-07-07 00:53:42.563 [INFO][4165] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" Jul 7 00:53:42.643852 containerd[1579]: 2025-07-07 00:53:42.614 [INFO][4178] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" HandleID="k8s-pod-network.862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--s8qd4-eth0" Jul 7 00:53:42.643852 containerd[1579]: 2025-07-07 00:53:42.614 [INFO][4178] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:53:42.643852 containerd[1579]: 2025-07-07 00:53:42.623 [INFO][4178] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:53:42.643852 containerd[1579]: 2025-07-07 00:53:42.636 [WARNING][4178] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" HandleID="k8s-pod-network.862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--s8qd4-eth0" Jul 7 00:53:42.643852 containerd[1579]: 2025-07-07 00:53:42.636 [INFO][4178] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" HandleID="k8s-pod-network.862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--s8qd4-eth0" Jul 7 00:53:42.643852 containerd[1579]: 2025-07-07 00:53:42.640 [INFO][4178] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:53:42.643852 containerd[1579]: 2025-07-07 00:53:42.642 [INFO][4165] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" Jul 7 00:53:42.645070 containerd[1579]: time="2025-07-07T00:53:42.644624494Z" level=info msg="TearDown network for sandbox \"862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6\" successfully" Jul 7 00:53:42.645070 containerd[1579]: time="2025-07-07T00:53:42.644693303Z" level=info msg="StopPodSandbox for \"862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6\" returns successfully" Jul 7 00:53:42.647397 containerd[1579]: time="2025-07-07T00:53:42.646100620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-667d8f9c7b-s8qd4,Uid:8a1c7885-17d4-45e1-bbd0-9b5b19862e2d,Namespace:calico-apiserver,Attempt:1,}" Jul 7 00:53:42.845517 systemd[1]: run-netns-cni\x2dab51c32a\x2da260\x2dea54\x2d5244\x2d2b2fac21c292.mount: Deactivated successfully. Jul 7 00:53:42.848092 systemd[1]: run-netns-cni\x2d08d06d25\x2de0a0\x2d9928\x2ddbed\x2d0e60f8d53b5d.mount: Deactivated successfully. Jul 7 00:53:42.877125 systemd-networkd[1200]: cali245dd2323f1: Link UP Jul 7 00:53:42.878330 systemd-networkd[1200]: cali245dd2323f1: Gained carrier Jul 7 00:53:42.925120 containerd[1579]: 2025-07-07 00:53:42.724 [INFO][4192] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:53:42.925120 containerd[1579]: 2025-07-07 00:53:42.743 [INFO][4192] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--jbw72-eth0 calico-apiserver-667d8f9c7b- calico-apiserver 1946b93a-1ccd-4010-b1de-ece39cb252ae 911 0 2025-07-07 00:53:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:667d8f9c7b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-4-7-8dfaddf5bb.novalocal calico-apiserver-667d8f9c7b-jbw72 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali245dd2323f1 [] [] }} ContainerID="a576b4511f326960d5f1cc4727af4a6d325b1edf53bb5821eb1645e3a3794cb2" Namespace="calico-apiserver" Pod="calico-apiserver-667d8f9c7b-jbw72" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--jbw72-" Jul 7 00:53:42.925120 containerd[1579]: 2025-07-07 00:53:42.743 [INFO][4192] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a576b4511f326960d5f1cc4727af4a6d325b1edf53bb5821eb1645e3a3794cb2" Namespace="calico-apiserver" Pod="calico-apiserver-667d8f9c7b-jbw72" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--jbw72-eth0" Jul 7 00:53:42.925120 containerd[1579]: 2025-07-07 00:53:42.788 [INFO][4216] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a576b4511f326960d5f1cc4727af4a6d325b1edf53bb5821eb1645e3a3794cb2" HandleID="k8s-pod-network.a576b4511f326960d5f1cc4727af4a6d325b1edf53bb5821eb1645e3a3794cb2" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--jbw72-eth0" Jul 7 00:53:42.925120 containerd[1579]: 2025-07-07 00:53:42.788 [INFO][4216] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a576b4511f326960d5f1cc4727af4a6d325b1edf53bb5821eb1645e3a3794cb2" HandleID="k8s-pod-network.a576b4511f326960d5f1cc4727af4a6d325b1edf53bb5821eb1645e3a3794cb2" 
Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--jbw72-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5cb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-4-7-8dfaddf5bb.novalocal", "pod":"calico-apiserver-667d8f9c7b-jbw72", "timestamp":"2025-07-07 00:53:42.788225432 +0000 UTC"}, Hostname:"ci-4081-3-4-7-8dfaddf5bb.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:53:42.925120 containerd[1579]: 2025-07-07 00:53:42.788 [INFO][4216] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:53:42.925120 containerd[1579]: 2025-07-07 00:53:42.788 [INFO][4216] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:53:42.925120 containerd[1579]: 2025-07-07 00:53:42.788 [INFO][4216] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-7-8dfaddf5bb.novalocal' Jul 7 00:53:42.925120 containerd[1579]: 2025-07-07 00:53:42.798 [INFO][4216] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a576b4511f326960d5f1cc4727af4a6d325b1edf53bb5821eb1645e3a3794cb2" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:42.925120 containerd[1579]: 2025-07-07 00:53:42.808 [INFO][4216] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:42.925120 containerd[1579]: 2025-07-07 00:53:42.816 [INFO][4216] ipam/ipam.go 511: Trying affinity for 192.168.116.128/26 host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:42.925120 containerd[1579]: 2025-07-07 00:53:42.819 [INFO][4216] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.128/26 host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:42.925120 containerd[1579]: 2025-07-07 00:53:42.822 [INFO][4216] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.128/26 host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:42.925120 containerd[1579]: 2025-07-07 00:53:42.822 [INFO][4216] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.116.128/26 handle="k8s-pod-network.a576b4511f326960d5f1cc4727af4a6d325b1edf53bb5821eb1645e3a3794cb2" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:42.925120 containerd[1579]: 2025-07-07 00:53:42.824 [INFO][4216] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a576b4511f326960d5f1cc4727af4a6d325b1edf53bb5821eb1645e3a3794cb2 Jul 7 00:53:42.925120 containerd[1579]: 2025-07-07 00:53:42.829 [INFO][4216] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.116.128/26 handle="k8s-pod-network.a576b4511f326960d5f1cc4727af4a6d325b1edf53bb5821eb1645e3a3794cb2" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:42.925120 containerd[1579]: 2025-07-07 00:53:42.847 [INFO][4216] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.116.129/26] block=192.168.116.128/26 handle="k8s-pod-network.a576b4511f326960d5f1cc4727af4a6d325b1edf53bb5821eb1645e3a3794cb2" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:42.925120 containerd[1579]: 2025-07-07 00:53:42.847 [INFO][4216] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.129/26] handle="k8s-pod-network.a576b4511f326960d5f1cc4727af4a6d325b1edf53bb5821eb1645e3a3794cb2" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:42.925120 containerd[1579]: 2025-07-07 00:53:42.853 [INFO][4216] ipam/ipam_plugin.go 374: 
Released host-wide IPAM lock. Jul 7 00:53:42.925120 containerd[1579]: 2025-07-07 00:53:42.854 [INFO][4216] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.129/26] IPv6=[] ContainerID="a576b4511f326960d5f1cc4727af4a6d325b1edf53bb5821eb1645e3a3794cb2" HandleID="k8s-pod-network.a576b4511f326960d5f1cc4727af4a6d325b1edf53bb5821eb1645e3a3794cb2" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--jbw72-eth0" Jul 7 00:53:42.932984 containerd[1579]: 2025-07-07 00:53:42.858 [INFO][4192] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a576b4511f326960d5f1cc4727af4a6d325b1edf53bb5821eb1645e3a3794cb2" Namespace="calico-apiserver" Pod="calico-apiserver-667d8f9c7b-jbw72" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--jbw72-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--jbw72-eth0", GenerateName:"calico-apiserver-667d8f9c7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"1946b93a-1ccd-4010-b1de-ece39cb252ae", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 53, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"667d8f9c7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-8dfaddf5bb.novalocal", ContainerID:"", Pod:"calico-apiserver-667d8f9c7b-jbw72", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali245dd2323f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:53:42.932984 containerd[1579]: 2025-07-07 00:53:42.859 [INFO][4192] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.129/32] ContainerID="a576b4511f326960d5f1cc4727af4a6d325b1edf53bb5821eb1645e3a3794cb2" Namespace="calico-apiserver" Pod="calico-apiserver-667d8f9c7b-jbw72" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--jbw72-eth0" Jul 7 00:53:42.932984 containerd[1579]: 2025-07-07 00:53:42.859 [INFO][4192] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali245dd2323f1 ContainerID="a576b4511f326960d5f1cc4727af4a6d325b1edf53bb5821eb1645e3a3794cb2" Namespace="calico-apiserver" Pod="calico-apiserver-667d8f9c7b-jbw72" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--jbw72-eth0" Jul 7 00:53:42.932984 containerd[1579]: 2025-07-07 00:53:42.878 [INFO][4192] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a576b4511f326960d5f1cc4727af4a6d325b1edf53bb5821eb1645e3a3794cb2" Namespace="calico-apiserver" Pod="calico-apiserver-667d8f9c7b-jbw72"
WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--jbw72-eth0" Jul 7 00:53:42.932984 containerd[1579]: 2025-07-07 00:53:42.879 [INFO][4192] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a576b4511f326960d5f1cc4727af4a6d325b1edf53bb5821eb1645e3a3794cb2" Namespace="calico-apiserver" Pod="calico-apiserver-667d8f9c7b-jbw72" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--jbw72-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--jbw72-eth0", GenerateName:"calico-apiserver-667d8f9c7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"1946b93a-1ccd-4010-b1de-ece39cb252ae", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 53, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"667d8f9c7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-8dfaddf5bb.novalocal", ContainerID:"a576b4511f326960d5f1cc4727af4a6d325b1edf53bb5821eb1645e3a3794cb2", Pod:"calico-apiserver-667d8f9c7b-jbw72", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali245dd2323f1", MAC:"e2:dc:9f:92:5d:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:53:42.932984 containerd[1579]: 2025-07-07 00:53:42.899 [INFO][4192] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a576b4511f326960d5f1cc4727af4a6d325b1edf53bb5821eb1645e3a3794cb2" Namespace="calico-apiserver" Pod="calico-apiserver-667d8f9c7b-jbw72" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--jbw72-eth0" Jul 7 00:53:43.039235 kubelet[2793]: I0707 00:53:43.037666 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/55a1c0bf-8c72-4b6f-8b01-69f2e245ca44-whisker-backend-key-pair\") pod \"whisker-7496f4f948-dn6td\" (UID: \"55a1c0bf-8c72-4b6f-8b01-69f2e245ca44\") " pod="calico-system/whisker-7496f4f948-dn6td" Jul 7 00:53:43.039235 kubelet[2793]: I0707 00:53:43.037737 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55a1c0bf-8c72-4b6f-8b01-69f2e245ca44-whisker-ca-bundle\") pod \"whisker-7496f4f948-dn6td\" (UID: \"55a1c0bf-8c72-4b6f-8b01-69f2e245ca44\") " pod="calico-system/whisker-7496f4f948-dn6td" Jul 7 00:53:43.039235 kubelet[2793]: I0707 00:53:43.037774 2793 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5qlt\"
(UniqueName: \"kubernetes.io/projected/55a1c0bf-8c72-4b6f-8b01-69f2e245ca44-kube-api-access-z5qlt\") pod \"whisker-7496f4f948-dn6td\" (UID: \"55a1c0bf-8c72-4b6f-8b01-69f2e245ca44\") " pod="calico-system/whisker-7496f4f948-dn6td" Jul 7 00:53:43.052312 systemd-networkd[1200]: cali5b24baed2c6: Link UP Jul 7 00:53:43.056175 systemd-networkd[1200]: cali5b24baed2c6: Gained carrier Jul 7 00:53:43.076230 containerd[1579]: time="2025-07-07T00:53:43.076008551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:53:43.082491 containerd[1579]: time="2025-07-07T00:53:43.080997234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:53:43.082491 containerd[1579]: time="2025-07-07T00:53:43.081090629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:53:43.082491 containerd[1579]: time="2025-07-07T00:53:43.081262483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:53:43.092274 containerd[1579]: 2025-07-07 00:53:42.746 [INFO][4202] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:53:43.092274 containerd[1579]: 2025-07-07 00:53:42.765 [INFO][4202] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--s8qd4-eth0 calico-apiserver-667d8f9c7b- calico-apiserver 8a1c7885-17d4-45e1-bbd0-9b5b19862e2d 910 0 2025-07-07 00:53:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:667d8f9c7b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-4-7-8dfaddf5bb.novalocal calico-apiserver-667d8f9c7b-s8qd4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5b24baed2c6 [] [] }} ContainerID="f32315af2cfe780e8b4ec41ecc6a275d00c80732cd34482d9b9a1b013d633b1d" Namespace="calico-apiserver" Pod="calico-apiserver-667d8f9c7b-s8qd4" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--s8qd4-" Jul 7 00:53:43.092274 containerd[1579]: 2025-07-07 00:53:42.765 [INFO][4202] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f32315af2cfe780e8b4ec41ecc6a275d00c80732cd34482d9b9a1b013d633b1d" Namespace="calico-apiserver" Pod="calico-apiserver-667d8f9c7b-s8qd4" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--s8qd4-eth0" Jul 7 00:53:43.092274 containerd[1579]: 2025-07-07 00:53:42.805 [INFO][4221] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f32315af2cfe780e8b4ec41ecc6a275d00c80732cd34482d9b9a1b013d633b1d" HandleID="k8s-pod-network.f32315af2cfe780e8b4ec41ecc6a275d00c80732cd34482d9b9a1b013d633b1d" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--s8qd4-eth0" Jul 7 00:53:43.092274 containerd[1579]: 2025-07-07 00:53:42.805 [INFO][4221] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f32315af2cfe780e8b4ec41ecc6a275d00c80732cd34482d9b9a1b013d633b1d" HandleID="k8s-pod-network.f32315af2cfe780e8b4ec41ecc6a275d00c80732cd34482d9b9a1b013d633b1d" 
Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--s8qd4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5820), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-4-7-8dfaddf5bb.novalocal", "pod":"calico-apiserver-667d8f9c7b-s8qd4", "timestamp":"2025-07-07 00:53:42.805481312 +0000 UTC"}, Hostname:"ci-4081-3-4-7-8dfaddf5bb.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:53:43.092274 containerd[1579]: 2025-07-07 00:53:42.805 [INFO][4221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:53:43.092274 containerd[1579]: 2025-07-07 00:53:42.852 [INFO][4221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:53:43.092274 containerd[1579]: 2025-07-07 00:53:42.852 [INFO][4221] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-7-8dfaddf5bb.novalocal' Jul 7 00:53:43.092274 containerd[1579]: 2025-07-07 00:53:42.901 [INFO][4221] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f32315af2cfe780e8b4ec41ecc6a275d00c80732cd34482d9b9a1b013d633b1d" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:43.092274 containerd[1579]: 2025-07-07 00:53:42.943 [INFO][4221] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:43.092274 containerd[1579]: 2025-07-07 00:53:42.957 [INFO][4221] ipam/ipam.go 511: Trying affinity for 192.168.116.128/26 host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:43.092274 containerd[1579]: 2025-07-07 00:53:42.975 [INFO][4221] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.128/26 host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:43.092274 containerd[1579]: 2025-07-07 00:53:42.989 [INFO][4221] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.128/26 host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:43.092274 containerd[1579]: 2025-07-07 00:53:42.989 [INFO][4221] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.116.128/26 handle="k8s-pod-network.f32315af2cfe780e8b4ec41ecc6a275d00c80732cd34482d9b9a1b013d633b1d" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:43.092274 containerd[1579]: 2025-07-07 00:53:43.000 [INFO][4221] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f32315af2cfe780e8b4ec41ecc6a275d00c80732cd34482d9b9a1b013d633b1d Jul 7 00:53:43.092274 containerd[1579]: 2025-07-07 00:53:43.010 [INFO][4221] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.116.128/26 handle="k8s-pod-network.f32315af2cfe780e8b4ec41ecc6a275d00c80732cd34482d9b9a1b013d633b1d" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:43.092274 containerd[1579]: 2025-07-07 00:53:43.032 [INFO][4221] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.116.130/26] block=192.168.116.128/26 handle="k8s-pod-network.f32315af2cfe780e8b4ec41ecc6a275d00c80732cd34482d9b9a1b013d633b1d" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:43.092274 containerd[1579]: 2025-07-07 00:53:43.032 [INFO][4221] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.130/26] handle="k8s-pod-network.f32315af2cfe780e8b4ec41ecc6a275d00c80732cd34482d9b9a1b013d633b1d" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:43.092274 containerd[1579]: 2025-07-07 00:53:43.033 [INFO][4221] ipam/ipam_plugin.go 374: 
Released host-wide IPAM lock. Jul 7 00:53:43.092274 containerd[1579]: 2025-07-07 00:53:43.033 [INFO][4221] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.130/26] IPv6=[] ContainerID="f32315af2cfe780e8b4ec41ecc6a275d00c80732cd34482d9b9a1b013d633b1d" HandleID="k8s-pod-network.f32315af2cfe780e8b4ec41ecc6a275d00c80732cd34482d9b9a1b013d633b1d" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--s8qd4-eth0" Jul 7 00:53:43.096158 containerd[1579]: 2025-07-07 00:53:43.043 [INFO][4202] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f32315af2cfe780e8b4ec41ecc6a275d00c80732cd34482d9b9a1b013d633b1d" Namespace="calico-apiserver" Pod="calico-apiserver-667d8f9c7b-s8qd4" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--s8qd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--s8qd4-eth0", GenerateName:"calico-apiserver-667d8f9c7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a1c7885-17d4-45e1-bbd0-9b5b19862e2d", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 53, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"667d8f9c7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-8dfaddf5bb.novalocal", ContainerID:"", Pod:"calico-apiserver-667d8f9c7b-s8qd4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5b24baed2c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:53:43.096158 containerd[1579]: 2025-07-07 00:53:43.045 [INFO][4202] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.130/32] ContainerID="f32315af2cfe780e8b4ec41ecc6a275d00c80732cd34482d9b9a1b013d633b1d" Namespace="calico-apiserver" Pod="calico-apiserver-667d8f9c7b-s8qd4" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--s8qd4-eth0" Jul 7 00:53:43.096158 containerd[1579]: 2025-07-07 00:53:43.046 [INFO][4202] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b24baed2c6 ContainerID="f32315af2cfe780e8b4ec41ecc6a275d00c80732cd34482d9b9a1b013d633b1d" Namespace="calico-apiserver" Pod="calico-apiserver-667d8f9c7b-s8qd4" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--s8qd4-eth0" Jul 7 00:53:43.096158 containerd[1579]: 2025-07-07 00:53:43.058 [INFO][4202] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f32315af2cfe780e8b4ec41ecc6a275d00c80732cd34482d9b9a1b013d633b1d" Namespace="calico-apiserver" Pod="calico-apiserver-667d8f9c7b-s8qd4"
WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--s8qd4-eth0" Jul 7 00:53:43.096158 containerd[1579]: 2025-07-07 00:53:43.058 [INFO][4202] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f32315af2cfe780e8b4ec41ecc6a275d00c80732cd34482d9b9a1b013d633b1d" Namespace="calico-apiserver" Pod="calico-apiserver-667d8f9c7b-s8qd4" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--s8qd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--s8qd4-eth0", GenerateName:"calico-apiserver-667d8f9c7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a1c7885-17d4-45e1-bbd0-9b5b19862e2d", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 53, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"667d8f9c7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-8dfaddf5bb.novalocal", ContainerID:"f32315af2cfe780e8b4ec41ecc6a275d00c80732cd34482d9b9a1b013d633b1d", Pod:"calico-apiserver-667d8f9c7b-s8qd4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5b24baed2c6", MAC:"6e:75:55:37:23:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:53:43.096158 containerd[1579]: 2025-07-07 00:53:43.081 [INFO][4202] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f32315af2cfe780e8b4ec41ecc6a275d00c80732cd34482d9b9a1b013d633b1d" Namespace="calico-apiserver" Pod="calico-apiserver-667d8f9c7b-s8qd4" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--s8qd4-eth0" Jul 7 00:53:43.185334 containerd[1579]: time="2025-07-07T00:53:43.185063749Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:53:43.187170 containerd[1579]: time="2025-07-07T00:53:43.185141806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:53:43.187240 containerd[1579]: time="2025-07-07T00:53:43.187161365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:53:43.187317 containerd[1579]: time="2025-07-07T00:53:43.187264258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:53:43.202467 containerd[1579]: time="2025-07-07T00:53:43.200920515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-667d8f9c7b-jbw72,Uid:1946b93a-1ccd-4010-b1de-ece39cb252ae,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a576b4511f326960d5f1cc4727af4a6d325b1edf53bb5821eb1645e3a3794cb2\"" Jul 7 00:53:43.206116 containerd[1579]: time="2025-07-07T00:53:43.205907535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 00:53:43.274881 containerd[1579]: time="2025-07-07T00:53:43.274767729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-667d8f9c7b-s8qd4,Uid:8a1c7885-17d4-45e1-bbd0-9b5b19862e2d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f32315af2cfe780e8b4ec41ecc6a275d00c80732cd34482d9b9a1b013d633b1d\"" Jul 7 00:53:43.343404 containerd[1579]: time="2025-07-07T00:53:43.339981133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7496f4f948-dn6td,Uid:55a1c0bf-8c72-4b6f-8b01-69f2e245ca44,Namespace:calico-system,Attempt:0,}" Jul 7 00:53:44.071381 kernel: bpftool[4478]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 7 00:53:44.292062 systemd-networkd[1200]: cali5b24baed2c6: Gained IPv6LL Jul 7 00:53:44.416830 containerd[1579]: time="2025-07-07T00:53:44.415843520Z" level=info msg="StopPodSandbox for \"0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3\"" Jul 7 00:53:44.547503 systemd-networkd[1200]: cali245dd2323f1: Gained IPv6LL Jul 7 00:53:44.566074 systemd-networkd[1200]: calicd3adefd756: Link UP Jul 7 00:53:44.566369 systemd-networkd[1200]: calicd3adefd756: Gained carrier Jul 7 00:53:44.592550 kubelet[2793]: I0707 00:53:44.590845 2793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21eff24d-2230-403e-a20d-c63a9466fe87" path="/var/lib/kubelet/pods/21eff24d-2230-403e-a20d-c63a9466fe87/volumes" Jul 7 00:53:44.628280 containerd[1579]: 2025-07-07 00:53:44.153 [INFO][4472] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-whisker--7496f4f948--dn6td-eth0 whisker-7496f4f948- calico-system 55a1c0bf-8c72-4b6f-8b01-69f2e245ca44 932 0 2025-07-07 00:53:42 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7496f4f948 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-4-7-8dfaddf5bb.novalocal whisker-7496f4f948-dn6td eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calicd3adefd756 [] [] }} ContainerID="a95f0364683b83002c6b6ff91cfa16d8fa920b1ddb667ffda142a5723a5b10f5" Namespace="calico-system" Pod="whisker-7496f4f948-dn6td" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-whisker--7496f4f948--dn6td-" Jul 7 00:53:44.628280 containerd[1579]: 2025-07-07 00:53:44.154 [INFO][4472] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a95f0364683b83002c6b6ff91cfa16d8fa920b1ddb667ffda142a5723a5b10f5" Namespace="calico-system" Pod="whisker-7496f4f948-dn6td" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-whisker--7496f4f948--dn6td-eth0" Jul 7 00:53:44.628280 containerd[1579]: 2025-07-07 00:53:44.326 [INFO][4486] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a95f0364683b83002c6b6ff91cfa16d8fa920b1ddb667ffda142a5723a5b10f5" 
HandleID="k8s-pod-network.a95f0364683b83002c6b6ff91cfa16d8fa920b1ddb667ffda142a5723a5b10f5" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-whisker--7496f4f948--dn6td-eth0" Jul 7 00:53:44.628280 containerd[1579]: 2025-07-07 00:53:44.326 [INFO][4486] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a95f0364683b83002c6b6ff91cfa16d8fa920b1ddb667ffda142a5723a5b10f5" HandleID="k8s-pod-network.a95f0364683b83002c6b6ff91cfa16d8fa920b1ddb667ffda142a5723a5b10f5" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-whisker--7496f4f948--dn6td-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5020), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-7-8dfaddf5bb.novalocal", "pod":"whisker-7496f4f948-dn6td", "timestamp":"2025-07-07 00:53:44.326214564 +0000 UTC"}, Hostname:"ci-4081-3-4-7-8dfaddf5bb.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:53:44.628280 containerd[1579]: 2025-07-07 00:53:44.326 [INFO][4486] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:53:44.628280 containerd[1579]: 2025-07-07 00:53:44.326 [INFO][4486] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:53:44.628280 containerd[1579]: 2025-07-07 00:53:44.327 [INFO][4486] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-7-8dfaddf5bb.novalocal' Jul 7 00:53:44.628280 containerd[1579]: 2025-07-07 00:53:44.335 [INFO][4486] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a95f0364683b83002c6b6ff91cfa16d8fa920b1ddb667ffda142a5723a5b10f5" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:44.628280 containerd[1579]: 2025-07-07 00:53:44.341 [INFO][4486] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:44.628280 containerd[1579]: 2025-07-07 00:53:44.346 [INFO][4486] ipam/ipam.go 511: Trying affinity for 192.168.116.128/26 host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:44.628280 containerd[1579]: 2025-07-07 00:53:44.348 [INFO][4486] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.128/26 host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:44.628280 containerd[1579]: 2025-07-07 00:53:44.351 [INFO][4486] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.128/26 host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:44.628280 containerd[1579]: 2025-07-07 00:53:44.351 [INFO][4486] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.116.128/26 handle="k8s-pod-network.a95f0364683b83002c6b6ff91cfa16d8fa920b1ddb667ffda142a5723a5b10f5" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:44.628280 containerd[1579]: 2025-07-07 00:53:44.353 [INFO][4486] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a95f0364683b83002c6b6ff91cfa16d8fa920b1ddb667ffda142a5723a5b10f5 Jul 7 00:53:44.628280 containerd[1579]: 2025-07-07 00:53:44.478 [INFO][4486] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.116.128/26 handle="k8s-pod-network.a95f0364683b83002c6b6ff91cfa16d8fa920b1ddb667ffda142a5723a5b10f5" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:44.628280 containerd[1579]: 2025-07-07 00:53:44.544 [INFO][4486] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.116.131/26] block=192.168.116.128/26 
handle="k8s-pod-network.a95f0364683b83002c6b6ff91cfa16d8fa920b1ddb667ffda142a5723a5b10f5" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:44.628280 containerd[1579]: 2025-07-07 00:53:44.546 [INFO][4486] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.131/26] handle="k8s-pod-network.a95f0364683b83002c6b6ff91cfa16d8fa920b1ddb667ffda142a5723a5b10f5" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:44.628280 containerd[1579]: 2025-07-07 00:53:44.547 [INFO][4486] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:53:44.628280 containerd[1579]: 2025-07-07 00:53:44.547 [INFO][4486] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.131/26] IPv6=[] ContainerID="a95f0364683b83002c6b6ff91cfa16d8fa920b1ddb667ffda142a5723a5b10f5" HandleID="k8s-pod-network.a95f0364683b83002c6b6ff91cfa16d8fa920b1ddb667ffda142a5723a5b10f5" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-whisker--7496f4f948--dn6td-eth0" Jul 7 00:53:44.629371 containerd[1579]: 2025-07-07 00:53:44.555 [INFO][4472] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a95f0364683b83002c6b6ff91cfa16d8fa920b1ddb667ffda142a5723a5b10f5" Namespace="calico-system" Pod="whisker-7496f4f948-dn6td" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-whisker--7496f4f948--dn6td-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-whisker--7496f4f948--dn6td-eth0", GenerateName:"whisker-7496f4f948-", Namespace:"calico-system", SelfLink:"", UID:"55a1c0bf-8c72-4b6f-8b01-69f2e245ca44", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 53, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7496f4f948", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-8dfaddf5bb.novalocal", ContainerID:"", Pod:"whisker-7496f4f948-dn6td", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.116.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicd3adefd756", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:53:44.629371 containerd[1579]: 2025-07-07 00:53:44.555 [INFO][4472] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.131/32] ContainerID="a95f0364683b83002c6b6ff91cfa16d8fa920b1ddb667ffda142a5723a5b10f5" Namespace="calico-system" Pod="whisker-7496f4f948-dn6td" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-whisker--7496f4f948--dn6td-eth0" Jul 7 00:53:44.629371 containerd[1579]: 2025-07-07 00:53:44.555 [INFO][4472] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicd3adefd756 ContainerID="a95f0364683b83002c6b6ff91cfa16d8fa920b1ddb667ffda142a5723a5b10f5" Namespace="calico-system" Pod="whisker-7496f4f948-dn6td" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-whisker--7496f4f948--dn6td-eth0" Jul 7 00:53:44.629371 
containerd[1579]: 2025-07-07 00:53:44.572 [INFO][4472] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a95f0364683b83002c6b6ff91cfa16d8fa920b1ddb667ffda142a5723a5b10f5" Namespace="calico-system" Pod="whisker-7496f4f948-dn6td" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-whisker--7496f4f948--dn6td-eth0" Jul 7 00:53:44.629371 containerd[1579]: 2025-07-07 00:53:44.576 [INFO][4472] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a95f0364683b83002c6b6ff91cfa16d8fa920b1ddb667ffda142a5723a5b10f5" Namespace="calico-system" Pod="whisker-7496f4f948-dn6td" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-whisker--7496f4f948--dn6td-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-whisker--7496f4f948--dn6td-eth0", GenerateName:"whisker-7496f4f948-", Namespace:"calico-system", SelfLink:"", UID:"55a1c0bf-8c72-4b6f-8b01-69f2e245ca44", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 53, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7496f4f948", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-8dfaddf5bb.novalocal", ContainerID:"a95f0364683b83002c6b6ff91cfa16d8fa920b1ddb667ffda142a5723a5b10f5", Pod:"whisker-7496f4f948-dn6td", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.116.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicd3adefd756", MAC:"c2:51:45:59:21:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:53:44.629371 containerd[1579]: 2025-07-07 00:53:44.622 [INFO][4472] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a95f0364683b83002c6b6ff91cfa16d8fa920b1ddb667ffda142a5723a5b10f5" Namespace="calico-system" Pod="whisker-7496f4f948-dn6td" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-whisker--7496f4f948--dn6td-eth0" Jul 7 00:53:44.660271 systemd-networkd[1200]: vxlan.calico: Link UP Jul 7 00:53:44.660281 systemd-networkd[1200]: vxlan.calico: Gained carrier Jul 7 00:53:44.769875 containerd[1579]: 2025-07-07 00:53:44.598 [INFO][4517] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" Jul 7 00:53:44.769875 containerd[1579]: 2025-07-07 00:53:44.598 [INFO][4517] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" iface="eth0" netns="/var/run/netns/cni-a1b781a0-4017-3327-bcca-fffa0d7b4473" Jul 7 00:53:44.769875 containerd[1579]: 2025-07-07 00:53:44.599 [INFO][4517] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" iface="eth0" netns="/var/run/netns/cni-a1b781a0-4017-3327-bcca-fffa0d7b4473" Jul 7 00:53:44.769875 containerd[1579]: 2025-07-07 00:53:44.601 [INFO][4517] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" iface="eth0" netns="/var/run/netns/cni-a1b781a0-4017-3327-bcca-fffa0d7b4473" Jul 7 00:53:44.769875 containerd[1579]: 2025-07-07 00:53:44.601 [INFO][4517] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" Jul 7 00:53:44.769875 containerd[1579]: 2025-07-07 00:53:44.601 [INFO][4517] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" Jul 7 00:53:44.769875 containerd[1579]: 2025-07-07 00:53:44.671 [INFO][4528] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" HandleID="k8s-pod-network.0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-goldmane--58fd7646b9--ffzjz-eth0" Jul 7 00:53:44.769875 containerd[1579]: 2025-07-07 00:53:44.691 [INFO][4528] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:53:44.769875 containerd[1579]: 2025-07-07 00:53:44.692 [INFO][4528] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:53:44.769875 containerd[1579]: 2025-07-07 00:53:44.704 [WARNING][4528] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" HandleID="k8s-pod-network.0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-goldmane--58fd7646b9--ffzjz-eth0" Jul 7 00:53:44.769875 containerd[1579]: 2025-07-07 00:53:44.704 [INFO][4528] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" HandleID="k8s-pod-network.0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-goldmane--58fd7646b9--ffzjz-eth0" Jul 7 00:53:44.769875 containerd[1579]: 2025-07-07 00:53:44.707 [INFO][4528] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:53:44.769875 containerd[1579]: 2025-07-07 00:53:44.718 [INFO][4517] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" Jul 7 00:53:44.772941 containerd[1579]: time="2025-07-07T00:53:44.772623632Z" level=info msg="TearDown network for sandbox \"0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3\" successfully" Jul 7 00:53:44.772941 containerd[1579]: time="2025-07-07T00:53:44.772658497Z" level=info msg="StopPodSandbox for \"0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3\" returns successfully" Jul 7 00:53:44.775791 systemd[1]: run-netns-cni\x2da1b781a0\x2d4017\x2d3327\x2dbcca\x2dfffa0d7b4473.mount: Deactivated successfully. 
Jul 7 00:53:44.790062 containerd[1579]: time="2025-07-07T00:53:44.790015513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-ffzjz,Uid:278a2c58-53c9-4e5b-8c5e-0178026a9170,Namespace:calico-system,Attempt:1,}" Jul 7 00:53:44.826572 containerd[1579]: time="2025-07-07T00:53:44.825425512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:53:44.826572 containerd[1579]: time="2025-07-07T00:53:44.825542082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:53:44.826572 containerd[1579]: time="2025-07-07T00:53:44.825563231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:53:44.826572 containerd[1579]: time="2025-07-07T00:53:44.825694539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:53:45.080274 containerd[1579]: time="2025-07-07T00:53:45.079837326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7496f4f948-dn6td,Uid:55a1c0bf-8c72-4b6f-8b01-69f2e245ca44,Namespace:calico-system,Attempt:0,} returns sandbox id \"a95f0364683b83002c6b6ff91cfa16d8fa920b1ddb667ffda142a5723a5b10f5\"" Jul 7 00:53:45.211122 systemd-networkd[1200]: calib85898d538f: Link UP Jul 7 00:53:45.211680 systemd-networkd[1200]: calib85898d538f: Gained carrier Jul 7 00:53:45.232503 containerd[1579]: 2025-07-07 00:53:45.107 [INFO][4587] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-goldmane--58fd7646b9--ffzjz-eth0 goldmane-58fd7646b9- calico-system 278a2c58-53c9-4e5b-8c5e-0178026a9170 944 0 2025-07-07 00:53:08 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-4-7-8dfaddf5bb.novalocal goldmane-58fd7646b9-ffzjz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib85898d538f [] [] }} ContainerID="ecb41215854bc88ec125264286406da8f225feab15bb6a7bafc33ab8ae076130" Namespace="calico-system" Pod="goldmane-58fd7646b9-ffzjz" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-goldmane--58fd7646b9--ffzjz-" Jul 7 00:53:45.232503 containerd[1579]: 2025-07-07 00:53:45.107 [INFO][4587] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ecb41215854bc88ec125264286406da8f225feab15bb6a7bafc33ab8ae076130" Namespace="calico-system" Pod="goldmane-58fd7646b9-ffzjz" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-goldmane--58fd7646b9--ffzjz-eth0" Jul 7 00:53:45.232503 containerd[1579]: 2025-07-07 00:53:45.146 [INFO][4644] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ecb41215854bc88ec125264286406da8f225feab15bb6a7bafc33ab8ae076130" HandleID="k8s-pod-network.ecb41215854bc88ec125264286406da8f225feab15bb6a7bafc33ab8ae076130" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-goldmane--58fd7646b9--ffzjz-eth0" Jul 7 00:53:45.232503 containerd[1579]: 2025-07-07 00:53:45.147 [INFO][4644] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ecb41215854bc88ec125264286406da8f225feab15bb6a7bafc33ab8ae076130" 
HandleID="k8s-pod-network.ecb41215854bc88ec125264286406da8f225feab15bb6a7bafc33ab8ae076130" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-goldmane--58fd7646b9--ffzjz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f920), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-7-8dfaddf5bb.novalocal", "pod":"goldmane-58fd7646b9-ffzjz", "timestamp":"2025-07-07 00:53:45.146914277 +0000 UTC"}, Hostname:"ci-4081-3-4-7-8dfaddf5bb.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:53:45.232503 containerd[1579]: 2025-07-07 00:53:45.147 [INFO][4644] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:53:45.232503 containerd[1579]: 2025-07-07 00:53:45.147 [INFO][4644] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:53:45.232503 containerd[1579]: 2025-07-07 00:53:45.147 [INFO][4644] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-7-8dfaddf5bb.novalocal' Jul 7 00:53:45.232503 containerd[1579]: 2025-07-07 00:53:45.157 [INFO][4644] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ecb41215854bc88ec125264286406da8f225feab15bb6a7bafc33ab8ae076130" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:45.232503 containerd[1579]: 2025-07-07 00:53:45.166 [INFO][4644] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:45.232503 containerd[1579]: 2025-07-07 00:53:45.176 [INFO][4644] ipam/ipam.go 511: Trying affinity for 192.168.116.128/26 host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:45.232503 containerd[1579]: 2025-07-07 00:53:45.181 [INFO][4644] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.128/26 host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:45.232503 containerd[1579]: 2025-07-07 00:53:45.187 [INFO][4644] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.128/26 host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:45.232503 containerd[1579]: 2025-07-07 00:53:45.187 [INFO][4644] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.116.128/26 handle="k8s-pod-network.ecb41215854bc88ec125264286406da8f225feab15bb6a7bafc33ab8ae076130" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:45.232503 containerd[1579]: 2025-07-07 00:53:45.189 [INFO][4644] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ecb41215854bc88ec125264286406da8f225feab15bb6a7bafc33ab8ae076130 Jul 7 00:53:45.232503 containerd[1579]: 2025-07-07 00:53:45.196 [INFO][4644] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.116.128/26 handle="k8s-pod-network.ecb41215854bc88ec125264286406da8f225feab15bb6a7bafc33ab8ae076130" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:45.232503 containerd[1579]: 2025-07-07 00:53:45.204 [INFO][4644] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.116.132/26] block=192.168.116.128/26 handle="k8s-pod-network.ecb41215854bc88ec125264286406da8f225feab15bb6a7bafc33ab8ae076130" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:45.232503 containerd[1579]: 2025-07-07 00:53:45.204 [INFO][4644] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.132/26] handle="k8s-pod-network.ecb41215854bc88ec125264286406da8f225feab15bb6a7bafc33ab8ae076130" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:45.232503 
containerd[1579]: 2025-07-07 00:53:45.204 [INFO][4644] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:53:45.232503 containerd[1579]: 2025-07-07 00:53:45.204 [INFO][4644] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.132/26] IPv6=[] ContainerID="ecb41215854bc88ec125264286406da8f225feab15bb6a7bafc33ab8ae076130" HandleID="k8s-pod-network.ecb41215854bc88ec125264286406da8f225feab15bb6a7bafc33ab8ae076130" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-goldmane--58fd7646b9--ffzjz-eth0" Jul 7 00:53:45.233614 containerd[1579]: 2025-07-07 00:53:45.205 [INFO][4587] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ecb41215854bc88ec125264286406da8f225feab15bb6a7bafc33ab8ae076130" Namespace="calico-system" Pod="goldmane-58fd7646b9-ffzjz" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-goldmane--58fd7646b9--ffzjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-goldmane--58fd7646b9--ffzjz-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"278a2c58-53c9-4e5b-8c5e-0178026a9170", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-8dfaddf5bb.novalocal", ContainerID:"", Pod:"goldmane-58fd7646b9-ffzjz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.116.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib85898d538f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:53:45.233614 containerd[1579]: 2025-07-07 00:53:45.206 [INFO][4587] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.132/32] ContainerID="ecb41215854bc88ec125264286406da8f225feab15bb6a7bafc33ab8ae076130" Namespace="calico-system" Pod="goldmane-58fd7646b9-ffzjz" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-goldmane--58fd7646b9--ffzjz-eth0" Jul 7 00:53:45.233614 containerd[1579]: 2025-07-07 00:53:45.206 [INFO][4587] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib85898d538f ContainerID="ecb41215854bc88ec125264286406da8f225feab15bb6a7bafc33ab8ae076130" Namespace="calico-system" Pod="goldmane-58fd7646b9-ffzjz" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-goldmane--58fd7646b9--ffzjz-eth0" Jul 7 00:53:45.233614 containerd[1579]: 2025-07-07 00:53:45.212 [INFO][4587] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ecb41215854bc88ec125264286406da8f225feab15bb6a7bafc33ab8ae076130" Namespace="calico-system" Pod="goldmane-58fd7646b9-ffzjz" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-goldmane--58fd7646b9--ffzjz-eth0" Jul 7 00:53:45.233614 containerd[1579]: 2025-07-07 00:53:45.212 
[INFO][4587] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ecb41215854bc88ec125264286406da8f225feab15bb6a7bafc33ab8ae076130" Namespace="calico-system" Pod="goldmane-58fd7646b9-ffzjz" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-goldmane--58fd7646b9--ffzjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-goldmane--58fd7646b9--ffzjz-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"278a2c58-53c9-4e5b-8c5e-0178026a9170", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-8dfaddf5bb.novalocal", ContainerID:"ecb41215854bc88ec125264286406da8f225feab15bb6a7bafc33ab8ae076130", Pod:"goldmane-58fd7646b9-ffzjz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.116.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib85898d538f", MAC:"da:96:3e:a9:30:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:53:45.233614 containerd[1579]: 2025-07-07 00:53:45.229 [INFO][4587] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ecb41215854bc88ec125264286406da8f225feab15bb6a7bafc33ab8ae076130" Namespace="calico-system" Pod="goldmane-58fd7646b9-ffzjz" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-goldmane--58fd7646b9--ffzjz-eth0" Jul 7 00:53:45.259426 containerd[1579]: time="2025-07-07T00:53:45.258644838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:53:45.259737 containerd[1579]: time="2025-07-07T00:53:45.259678833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:53:45.260064 containerd[1579]: time="2025-07-07T00:53:45.259704551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:53:45.260488 containerd[1579]: time="2025-07-07T00:53:45.260437149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:53:45.320806 containerd[1579]: time="2025-07-07T00:53:45.320741383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-ffzjz,Uid:278a2c58-53c9-4e5b-8c5e-0178026a9170,Namespace:calico-system,Attempt:1,} returns sandbox id \"ecb41215854bc88ec125264286406da8f225feab15bb6a7bafc33ab8ae076130\"" Jul 7 00:53:45.827751 systemd-networkd[1200]: calicd3adefd756: Gained IPv6LL Jul 7 00:53:45.891814 systemd-networkd[1200]: vxlan.calico: Gained IPv6LL Jul 7 00:53:46.460116 containerd[1579]: time="2025-07-07T00:53:46.459487866Z" level=info msg="StopPodSandbox for \"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba\"" Jul 7 00:53:46.616859 containerd[1579]: 2025-07-07 00:53:46.552 [WARNING][4725] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-whisker--5f6cfcc6f6--xs6qr-eth0" Jul 7 00:53:46.616859 containerd[1579]: 2025-07-07 00:53:46.552 [INFO][4725] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" Jul 7 00:53:46.616859 containerd[1579]: 2025-07-07 00:53:46.552 [INFO][4725] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" iface="eth0" netns="" Jul 7 00:53:46.616859 containerd[1579]: 2025-07-07 00:53:46.552 [INFO][4725] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" Jul 7 00:53:46.616859 containerd[1579]: 2025-07-07 00:53:46.552 [INFO][4725] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" Jul 7 00:53:46.616859 containerd[1579]: 2025-07-07 00:53:46.601 [INFO][4733] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" HandleID="k8s-pod-network.e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-whisker--5f6cfcc6f6--xs6qr-eth0" Jul 7 00:53:46.616859 containerd[1579]: 2025-07-07 00:53:46.601 [INFO][4733] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:53:46.616859 containerd[1579]: 2025-07-07 00:53:46.601 [INFO][4733] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:53:46.616859 containerd[1579]: 2025-07-07 00:53:46.611 [WARNING][4733] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" HandleID="k8s-pod-network.e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-whisker--5f6cfcc6f6--xs6qr-eth0" Jul 7 00:53:46.616859 containerd[1579]: 2025-07-07 00:53:46.611 [INFO][4733] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" HandleID="k8s-pod-network.e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-whisker--5f6cfcc6f6--xs6qr-eth0" Jul 7 00:53:46.616859 containerd[1579]: 2025-07-07 00:53:46.613 [INFO][4733] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:53:46.616859 containerd[1579]: 2025-07-07 00:53:46.615 [INFO][4725] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" Jul 7 00:53:46.616859 containerd[1579]: time="2025-07-07T00:53:46.616718168Z" level=info msg="TearDown network for sandbox \"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba\" successfully" Jul 7 00:53:46.616859 containerd[1579]: time="2025-07-07T00:53:46.616748214Z" level=info msg="StopPodSandbox for \"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba\" returns successfully" Jul 7 00:53:46.626411 containerd[1579]: time="2025-07-07T00:53:46.626042172Z" level=info msg="RemovePodSandbox for \"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba\"" Jul 7 00:53:46.626411 containerd[1579]: time="2025-07-07T00:53:46.626095523Z" level=info msg="Forcibly stopping sandbox \"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba\"" Jul 7 00:53:46.793420 containerd[1579]: 2025-07-07 00:53:46.745 [WARNING][4747] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-whisker--5f6cfcc6f6--xs6qr-eth0" Jul 7 00:53:46.793420 containerd[1579]: 2025-07-07 00:53:46.745 [INFO][4747] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" Jul 7 00:53:46.793420 containerd[1579]: 2025-07-07 00:53:46.745 [INFO][4747] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" iface="eth0" netns="" Jul 7 00:53:46.793420 containerd[1579]: 2025-07-07 00:53:46.745 [INFO][4747] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" Jul 7 00:53:46.793420 containerd[1579]: 2025-07-07 00:53:46.745 [INFO][4747] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" Jul 7 00:53:46.793420 containerd[1579]: 2025-07-07 00:53:46.778 [INFO][4754] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" HandleID="k8s-pod-network.e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-whisker--5f6cfcc6f6--xs6qr-eth0" Jul 7 00:53:46.793420 containerd[1579]: 2025-07-07 00:53:46.778 [INFO][4754] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:53:46.793420 containerd[1579]: 2025-07-07 00:53:46.778 [INFO][4754] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:53:46.793420 containerd[1579]: 2025-07-07 00:53:46.786 [WARNING][4754] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" HandleID="k8s-pod-network.e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-whisker--5f6cfcc6f6--xs6qr-eth0" Jul 7 00:53:46.793420 containerd[1579]: 2025-07-07 00:53:46.787 [INFO][4754] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" HandleID="k8s-pod-network.e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-whisker--5f6cfcc6f6--xs6qr-eth0" Jul 7 00:53:46.793420 containerd[1579]: 2025-07-07 00:53:46.789 [INFO][4754] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:53:46.793420 containerd[1579]: 2025-07-07 00:53:46.791 [INFO][4747] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba" Jul 7 00:53:46.793420 containerd[1579]: time="2025-07-07T00:53:46.792518211Z" level=info msg="TearDown network for sandbox \"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba\" successfully" Jul 7 00:53:46.893972 containerd[1579]: time="2025-07-07T00:53:46.893917749Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:53:46.894250 containerd[1579]: time="2025-07-07T00:53:46.894204028Z" level=info msg="RemovePodSandbox \"e1ad3fa94fa70fc15d7cc56ab69b7e55aca8f503d2ab0af8d1f5b3e76e3933ba\" returns successfully" Jul 7 00:53:46.896135 containerd[1579]: time="2025-07-07T00:53:46.896102929Z" level=info msg="StopPodSandbox for \"862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6\"" Jul 7 00:53:47.041458 containerd[1579]: 2025-07-07 00:53:46.977 [WARNING][4772] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--s8qd4-eth0", GenerateName:"calico-apiserver-667d8f9c7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a1c7885-17d4-45e1-bbd0-9b5b19862e2d", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 53, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"667d8f9c7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-8dfaddf5bb.novalocal", ContainerID:"f32315af2cfe780e8b4ec41ecc6a275d00c80732cd34482d9b9a1b013d633b1d", Pod:"calico-apiserver-667d8f9c7b-s8qd4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5b24baed2c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:53:47.041458 containerd[1579]: 2025-07-07 00:53:46.979 [INFO][4772] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" Jul 7 00:53:47.041458 containerd[1579]: 2025-07-07 00:53:46.979 [INFO][4772] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" iface="eth0" netns="" Jul 7 00:53:47.041458 containerd[1579]: 2025-07-07 00:53:46.979 [INFO][4772] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" Jul 7 00:53:47.041458 containerd[1579]: 2025-07-07 00:53:46.979 [INFO][4772] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" Jul 7 00:53:47.041458 containerd[1579]: 2025-07-07 00:53:47.023 [INFO][4779] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" HandleID="k8s-pod-network.862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--s8qd4-eth0" Jul 7 00:53:47.041458 containerd[1579]: 2025-07-07 00:53:47.023 [INFO][4779] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:53:47.041458 containerd[1579]: 2025-07-07 00:53:47.023 [INFO][4779] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:53:47.041458 containerd[1579]: 2025-07-07 00:53:47.034 [WARNING][4779] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" HandleID="k8s-pod-network.862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--s8qd4-eth0" Jul 7 00:53:47.041458 containerd[1579]: 2025-07-07 00:53:47.034 [INFO][4779] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" HandleID="k8s-pod-network.862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--s8qd4-eth0" Jul 7 00:53:47.041458 containerd[1579]: 2025-07-07 00:53:47.037 [INFO][4779] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:53:47.041458 containerd[1579]: 2025-07-07 00:53:47.040 [INFO][4772] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" Jul 7 00:53:47.042242 containerd[1579]: time="2025-07-07T00:53:47.041522043Z" level=info msg="TearDown network for sandbox \"862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6\" successfully" Jul 7 00:53:47.042242 containerd[1579]: time="2025-07-07T00:53:47.041565987Z" level=info msg="StopPodSandbox for \"862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6\" returns successfully" Jul 7 00:53:47.042242 containerd[1579]: time="2025-07-07T00:53:47.042061508Z" level=info msg="RemovePodSandbox for \"862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6\"" Jul 7 00:53:47.042242 containerd[1579]: time="2025-07-07T00:53:47.042089541Z" level=info msg="Forcibly stopping sandbox \"862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6\"" Jul 7 00:53:47.219228 containerd[1579]: 2025-07-07 00:53:47.151 [WARNING][4793] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--s8qd4-eth0", GenerateName:"calico-apiserver-667d8f9c7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a1c7885-17d4-45e1-bbd0-9b5b19862e2d", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 53, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"667d8f9c7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-8dfaddf5bb.novalocal", ContainerID:"f32315af2cfe780e8b4ec41ecc6a275d00c80732cd34482d9b9a1b013d633b1d", Pod:"calico-apiserver-667d8f9c7b-s8qd4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5b24baed2c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:53:47.219228 containerd[1579]: 2025-07-07 00:53:47.151 [INFO][4793] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" Jul 7 00:53:47.219228 containerd[1579]: 2025-07-07 00:53:47.151 [INFO][4793] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" iface="eth0" netns="" Jul 7 00:53:47.219228 containerd[1579]: 2025-07-07 00:53:47.151 [INFO][4793] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" Jul 7 00:53:47.219228 containerd[1579]: 2025-07-07 00:53:47.151 [INFO][4793] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" Jul 7 00:53:47.219228 containerd[1579]: 2025-07-07 00:53:47.192 [INFO][4800] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" HandleID="k8s-pod-network.862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--s8qd4-eth0" Jul 7 00:53:47.219228 containerd[1579]: 2025-07-07 00:53:47.193 [INFO][4800] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:53:47.219228 containerd[1579]: 2025-07-07 00:53:47.194 [INFO][4800] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:53:47.219228 containerd[1579]: 2025-07-07 00:53:47.206 [WARNING][4800] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" HandleID="k8s-pod-network.862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--s8qd4-eth0" Jul 7 00:53:47.219228 containerd[1579]: 2025-07-07 00:53:47.206 [INFO][4800] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" HandleID="k8s-pod-network.862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--s8qd4-eth0" Jul 7 00:53:47.219228 containerd[1579]: 2025-07-07 00:53:47.209 [INFO][4800] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:53:47.219228 containerd[1579]: 2025-07-07 00:53:47.212 [INFO][4793] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6" Jul 7 00:53:47.219818 containerd[1579]: time="2025-07-07T00:53:47.219695277Z" level=info msg="TearDown network for sandbox \"862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6\" successfully" Jul 7 00:53:47.233779 containerd[1579]: time="2025-07-07T00:53:47.233260938Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:53:47.234654 containerd[1579]: time="2025-07-07T00:53:47.234434584Z" level=info msg="RemovePodSandbox \"862b225515bb015cacec2c3ecca286b9a5f7b8470dc03b1b3dceccb8d0441bd6\" returns successfully" Jul 7 00:53:47.235309 containerd[1579]: time="2025-07-07T00:53:47.235278321Z" level=info msg="StopPodSandbox for \"0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3\"" Jul 7 00:53:47.236228 systemd-networkd[1200]: calib85898d538f: Gained IPv6LL Jul 7 00:53:47.306074 systemd-journald[1116]: Under memory pressure, flushing caches. Jul 7 00:53:47.301145 systemd-resolved[1470]: Under memory pressure, flushing caches. Jul 7 00:53:47.301251 systemd-resolved[1470]: Flushed all caches. Jul 7 00:53:47.395176 containerd[1579]: 2025-07-07 00:53:47.332 [WARNING][4815] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-goldmane--58fd7646b9--ffzjz-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"278a2c58-53c9-4e5b-8c5e-0178026a9170", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-8dfaddf5bb.novalocal", ContainerID:"ecb41215854bc88ec125264286406da8f225feab15bb6a7bafc33ab8ae076130", Pod:"goldmane-58fd7646b9-ffzjz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.116.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib85898d538f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:53:47.395176 containerd[1579]: 2025-07-07 00:53:47.333 [INFO][4815] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" Jul 7 00:53:47.395176 containerd[1579]: 2025-07-07 00:53:47.333 [INFO][4815] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" iface="eth0" netns="" Jul 7 00:53:47.395176 containerd[1579]: 2025-07-07 00:53:47.333 [INFO][4815] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" Jul 7 00:53:47.395176 containerd[1579]: 2025-07-07 00:53:47.333 [INFO][4815] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" Jul 7 00:53:47.395176 containerd[1579]: 2025-07-07 00:53:47.375 [INFO][4822] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" HandleID="k8s-pod-network.0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-goldmane--58fd7646b9--ffzjz-eth0" Jul 7 00:53:47.395176 containerd[1579]: 2025-07-07 00:53:47.375 [INFO][4822] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:53:47.395176 containerd[1579]: 2025-07-07 00:53:47.376 [INFO][4822] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:53:47.395176 containerd[1579]: 2025-07-07 00:53:47.387 [WARNING][4822] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" HandleID="k8s-pod-network.0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-goldmane--58fd7646b9--ffzjz-eth0" Jul 7 00:53:47.395176 containerd[1579]: 2025-07-07 00:53:47.387 [INFO][4822] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" HandleID="k8s-pod-network.0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-goldmane--58fd7646b9--ffzjz-eth0" Jul 7 00:53:47.395176 containerd[1579]: 2025-07-07 00:53:47.391 [INFO][4822] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:53:47.395176 containerd[1579]: 2025-07-07 00:53:47.393 [INFO][4815] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" Jul 7 00:53:47.395803 containerd[1579]: time="2025-07-07T00:53:47.395231733Z" level=info msg="TearDown network for sandbox \"0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3\" successfully" Jul 7 00:53:47.395803 containerd[1579]: time="2025-07-07T00:53:47.395466144Z" level=info msg="StopPodSandbox for \"0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3\" returns successfully" Jul 7 00:53:47.396212 containerd[1579]: time="2025-07-07T00:53:47.396176360Z" level=info msg="RemovePodSandbox for \"0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3\"" Jul 7 00:53:47.396212 containerd[1579]: time="2025-07-07T00:53:47.396210785Z" level=info msg="Forcibly stopping sandbox \"0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3\"" Jul 7 00:53:47.525406 containerd[1579]: 2025-07-07 00:53:47.464 [WARNING][4836] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-goldmane--58fd7646b9--ffzjz-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"278a2c58-53c9-4e5b-8c5e-0178026a9170", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-8dfaddf5bb.novalocal", ContainerID:"ecb41215854bc88ec125264286406da8f225feab15bb6a7bafc33ab8ae076130", Pod:"goldmane-58fd7646b9-ffzjz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.116.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib85898d538f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:53:47.525406 containerd[1579]: 2025-07-07 00:53:47.464 [INFO][4836] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" Jul 7 00:53:47.525406 containerd[1579]: 2025-07-07 00:53:47.464 [INFO][4836] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" iface="eth0" netns="" Jul 7 00:53:47.525406 containerd[1579]: 2025-07-07 00:53:47.464 [INFO][4836] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" Jul 7 00:53:47.525406 containerd[1579]: 2025-07-07 00:53:47.464 [INFO][4836] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" Jul 7 00:53:47.525406 containerd[1579]: 2025-07-07 00:53:47.502 [INFO][4843] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" HandleID="k8s-pod-network.0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-goldmane--58fd7646b9--ffzjz-eth0" Jul 7 00:53:47.525406 containerd[1579]: 2025-07-07 00:53:47.503 [INFO][4843] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:53:47.525406 containerd[1579]: 2025-07-07 00:53:47.503 [INFO][4843] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:53:47.525406 containerd[1579]: 2025-07-07 00:53:47.517 [WARNING][4843] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" HandleID="k8s-pod-network.0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-goldmane--58fd7646b9--ffzjz-eth0" Jul 7 00:53:47.525406 containerd[1579]: 2025-07-07 00:53:47.517 [INFO][4843] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" HandleID="k8s-pod-network.0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-goldmane--58fd7646b9--ffzjz-eth0" Jul 7 00:53:47.525406 containerd[1579]: 2025-07-07 00:53:47.518 [INFO][4843] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:53:47.525406 containerd[1579]: 2025-07-07 00:53:47.521 [INFO][4836] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3" Jul 7 00:53:47.525406 containerd[1579]: time="2025-07-07T00:53:47.524559217Z" level=info msg="TearDown network for sandbox \"0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3\" successfully" Jul 7 00:53:47.531407 containerd[1579]: time="2025-07-07T00:53:47.529728757Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:53:47.531573 containerd[1579]: time="2025-07-07T00:53:47.531546726Z" level=info msg="RemovePodSandbox \"0377049a63861a89988263fd181b6706c3112eec900bfc19fa664fff1eff58a3\" returns successfully" Jul 7 00:53:47.532426 containerd[1579]: time="2025-07-07T00:53:47.532389911Z" level=info msg="StopPodSandbox for \"148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d\"" Jul 7 00:53:47.769520 containerd[1579]: 2025-07-07 00:53:47.616 [WARNING][4857] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--jbw72-eth0", GenerateName:"calico-apiserver-667d8f9c7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"1946b93a-1ccd-4010-b1de-ece39cb252ae", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 53, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"667d8f9c7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-8dfaddf5bb.novalocal", ContainerID:"a576b4511f326960d5f1cc4727af4a6d325b1edf53bb5821eb1645e3a3794cb2", Pod:"calico-apiserver-667d8f9c7b-jbw72", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali245dd2323f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:53:47.769520 containerd[1579]: 2025-07-07 00:53:47.620 [INFO][4857] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" Jul 7 00:53:47.769520 containerd[1579]: 2025-07-07 00:53:47.620 [INFO][4857] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" iface="eth0" netns="" Jul 7 00:53:47.769520 containerd[1579]: 2025-07-07 00:53:47.620 [INFO][4857] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" Jul 7 00:53:47.769520 containerd[1579]: 2025-07-07 00:53:47.620 [INFO][4857] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" Jul 7 00:53:47.769520 containerd[1579]: 2025-07-07 00:53:47.741 [INFO][4864] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" HandleID="k8s-pod-network.148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--jbw72-eth0" Jul 7 00:53:47.769520 containerd[1579]: 2025-07-07 00:53:47.741 [INFO][4864] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:53:47.769520 containerd[1579]: 2025-07-07 00:53:47.741 [INFO][4864] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:53:47.769520 containerd[1579]: 2025-07-07 00:53:47.758 [WARNING][4864] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" HandleID="k8s-pod-network.148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--jbw72-eth0" Jul 7 00:53:47.769520 containerd[1579]: 2025-07-07 00:53:47.758 [INFO][4864] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" HandleID="k8s-pod-network.148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--jbw72-eth0" Jul 7 00:53:47.769520 containerd[1579]: 2025-07-07 00:53:47.760 [INFO][4864] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:53:47.769520 containerd[1579]: 2025-07-07 00:53:47.766 [INFO][4857] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" Jul 7 00:53:47.770075 containerd[1579]: time="2025-07-07T00:53:47.769585994Z" level=info msg="TearDown network for sandbox \"148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d\" successfully" Jul 7 00:53:47.770075 containerd[1579]: time="2025-07-07T00:53:47.769620260Z" level=info msg="StopPodSandbox for \"148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d\" returns successfully" Jul 7 00:53:47.771125 containerd[1579]: time="2025-07-07T00:53:47.771085454Z" level=info msg="RemovePodSandbox for \"148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d\"" Jul 7 00:53:47.771195 containerd[1579]: time="2025-07-07T00:53:47.771180383Z" level=info msg="Forcibly stopping sandbox \"148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d\"" Jul 7 00:53:47.886433 containerd[1579]: 2025-07-07 00:53:47.831 [WARNING][4879] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--jbw72-eth0", GenerateName:"calico-apiserver-667d8f9c7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"1946b93a-1ccd-4010-b1de-ece39cb252ae", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 53, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"667d8f9c7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-8dfaddf5bb.novalocal", ContainerID:"a576b4511f326960d5f1cc4727af4a6d325b1edf53bb5821eb1645e3a3794cb2", Pod:"calico-apiserver-667d8f9c7b-jbw72", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali245dd2323f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:53:47.886433 containerd[1579]: 2025-07-07 00:53:47.831 [INFO][4879] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" Jul 7 00:53:47.886433 containerd[1579]: 2025-07-07 00:53:47.831 [INFO][4879] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" iface="eth0" netns="" Jul 7 00:53:47.886433 containerd[1579]: 2025-07-07 00:53:47.831 [INFO][4879] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" Jul 7 00:53:47.886433 containerd[1579]: 2025-07-07 00:53:47.831 [INFO][4879] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" Jul 7 00:53:47.886433 containerd[1579]: 2025-07-07 00:53:47.867 [INFO][4886] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" HandleID="k8s-pod-network.148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--jbw72-eth0" Jul 7 00:53:47.886433 containerd[1579]: 2025-07-07 00:53:47.867 [INFO][4886] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:53:47.886433 containerd[1579]: 2025-07-07 00:53:47.867 [INFO][4886] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:53:47.886433 containerd[1579]: 2025-07-07 00:53:47.877 [WARNING][4886] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" HandleID="k8s-pod-network.148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--jbw72-eth0" Jul 7 00:53:47.886433 containerd[1579]: 2025-07-07 00:53:47.877 [INFO][4886] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" HandleID="k8s-pod-network.148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--apiserver--667d8f9c7b--jbw72-eth0" Jul 7 00:53:47.886433 containerd[1579]: 2025-07-07 00:53:47.879 [INFO][4886] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:53:47.886433 containerd[1579]: 2025-07-07 00:53:47.881 [INFO][4879] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d" Jul 7 00:53:47.886433 containerd[1579]: time="2025-07-07T00:53:47.884898433Z" level=info msg="TearDown network for sandbox \"148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d\" successfully" Jul 7 00:53:47.891166 containerd[1579]: time="2025-07-07T00:53:47.891133127Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:53:47.891327 containerd[1579]: time="2025-07-07T00:53:47.891304579Z" level=info msg="RemovePodSandbox \"148b75418188e02e5bb3fb6b651373dba42a7e15449aea16a73bff5e1e7cbf3d\" returns successfully" Jul 7 00:53:48.530528 containerd[1579]: time="2025-07-07T00:53:48.529880850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:53:48.533234 containerd[1579]: time="2025-07-07T00:53:48.533140490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 7 00:53:48.536091 containerd[1579]: time="2025-07-07T00:53:48.534977233Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:53:48.541225 containerd[1579]: time="2025-07-07T00:53:48.541182401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:53:48.542725 containerd[1579]: time="2025-07-07T00:53:48.542667904Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 5.336681261s" Jul 7 00:53:48.542860 containerd[1579]: time="2025-07-07T00:53:48.542725983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 00:53:48.546520 containerd[1579]: time="2025-07-07T00:53:48.546463130Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 00:53:48.551079 containerd[1579]: time="2025-07-07T00:53:48.550927745Z" level=info msg="CreateContainer within sandbox \"a576b4511f326960d5f1cc4727af4a6d325b1edf53bb5821eb1645e3a3794cb2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 00:53:48.581603 containerd[1579]: time="2025-07-07T00:53:48.581492867Z" level=info msg="CreateContainer within sandbox \"a576b4511f326960d5f1cc4727af4a6d325b1edf53bb5821eb1645e3a3794cb2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"edf46155152b63c298760a0bccc7042f72201fd6114c10c53674fb0a5438892d\"" Jul 7 00:53:48.585752 containerd[1579]: time="2025-07-07T00:53:48.585707292Z" level=info msg="StartContainer for \"edf46155152b63c298760a0bccc7042f72201fd6114c10c53674fb0a5438892d\"" Jul 7 00:53:48.585760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4205745984.mount: Deactivated successfully. Jul 7 00:53:48.655311 systemd[1]: run-containerd-runc-k8s.io-edf46155152b63c298760a0bccc7042f72201fd6114c10c53674fb0a5438892d-runc.ZOIVv0.mount: Deactivated successfully. Jul 7 00:53:49.104484 containerd[1579]: time="2025-07-07T00:53:49.104421028Z" level=info msg="StartContainer for \"edf46155152b63c298760a0bccc7042f72201fd6114c10c53674fb0a5438892d\" returns successfully" Jul 7 00:53:49.161413 containerd[1579]: time="2025-07-07T00:53:49.159936909Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:53:49.163598 containerd[1579]: time="2025-07-07T00:53:49.163248455Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 7 00:53:49.170949 containerd[1579]: time="2025-07-07T00:53:49.170871158Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 623.838607ms" Jul 7 00:53:49.171207 containerd[1579]: time="2025-07-07T00:53:49.171160813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 00:53:49.175115 containerd[1579]: time="2025-07-07T00:53:49.174157367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 00:53:49.178732 containerd[1579]: time="2025-07-07T00:53:49.178660513Z" level=info msg="CreateContainer within sandbox \"f32315af2cfe780e8b4ec41ecc6a275d00c80732cd34482d9b9a1b013d633b1d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 00:53:49.207838 containerd[1579]: time="2025-07-07T00:53:49.207772560Z" level=info msg="CreateContainer within sandbox \"f32315af2cfe780e8b4ec41ecc6a275d00c80732cd34482d9b9a1b013d633b1d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"40da4be2257ee1de9b09de9fbffbd6b3ae68804b2cb7c12ecf4eb668cca842b3\"" Jul 7 00:53:49.212217 containerd[1579]: time="2025-07-07T00:53:49.211043350Z" level=info msg="StartContainer for \"40da4be2257ee1de9b09de9fbffbd6b3ae68804b2cb7c12ecf4eb668cca842b3\"" Jul 7 00:53:49.313228 containerd[1579]: time="2025-07-07T00:53:49.313142617Z" level=info msg="StartContainer for 
\"40da4be2257ee1de9b09de9fbffbd6b3ae68804b2cb7c12ecf4eb668cca842b3\" returns successfully" Jul 7 00:53:50.151280 kubelet[2793]: I0707 00:53:50.151201 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-667d8f9c7b-jbw72" podStartSLOduration=40.81023298 podStartE2EDuration="46.151177457s" podCreationTimestamp="2025-07-07 00:53:04 +0000 UTC" firstStartedPulling="2025-07-07 00:53:43.204737154 +0000 UTC m=+57.015415885" lastFinishedPulling="2025-07-07 00:53:48.545681631 +0000 UTC m=+62.356360362" observedRunningTime="2025-07-07 00:53:50.144789727 +0000 UTC m=+63.955468468" watchObservedRunningTime="2025-07-07 00:53:50.151177457 +0000 UTC m=+63.961856188" Jul 7 00:53:50.841949 kubelet[2793]: I0707 00:53:50.841859 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-667d8f9c7b-s8qd4" podStartSLOduration=40.945521283 podStartE2EDuration="46.841836389s" podCreationTimestamp="2025-07-07 00:53:04 +0000 UTC" firstStartedPulling="2025-07-07 00:53:43.276617107 +0000 UTC m=+57.087295848" lastFinishedPulling="2025-07-07 00:53:49.172932223 +0000 UTC m=+62.983610954" observedRunningTime="2025-07-07 00:53:50.170541413 +0000 UTC m=+63.981220174" watchObservedRunningTime="2025-07-07 00:53:50.841836389 +0000 UTC m=+64.652515120" Jul 7 00:53:51.989429 containerd[1579]: time="2025-07-07T00:53:51.988395526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:53:52.011657 containerd[1579]: time="2025-07-07T00:53:52.011413238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 7 00:53:52.024223 containerd[1579]: time="2025-07-07T00:53:52.023802508Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:53:52.046624 containerd[1579]: time="2025-07-07T00:53:52.045615654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:53:52.048927 containerd[1579]: time="2025-07-07T00:53:52.046310500Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 2.872079325s" Jul 7 00:53:52.049165 containerd[1579]: time="2025-07-07T00:53:52.049121174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 7 00:53:52.056195 containerd[1579]: time="2025-07-07T00:53:52.056124299Z" level=info msg="CreateContainer within sandbox \"a95f0364683b83002c6b6ff91cfa16d8fa920b1ddb667ffda142a5723a5b10f5\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 00:53:52.061165 containerd[1579]: time="2025-07-07T00:53:52.056298397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 00:53:52.294098 containerd[1579]: time="2025-07-07T00:53:52.293975911Z" level=info msg="CreateContainer within sandbox 
\"a95f0364683b83002c6b6ff91cfa16d8fa920b1ddb667ffda142a5723a5b10f5\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"46ea5dacb5fb692bcbe7f12a470dc241ec58d529535f77b2c2166ec743ca7f06\"" Jul 7 00:53:52.295896 containerd[1579]: time="2025-07-07T00:53:52.295866525Z" level=info msg="StartContainer for \"46ea5dacb5fb692bcbe7f12a470dc241ec58d529535f77b2c2166ec743ca7f06\"" Jul 7 00:53:52.423326 containerd[1579]: time="2025-07-07T00:53:52.423048171Z" level=info msg="StopPodSandbox for \"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72\"" Jul 7 00:53:52.450549 containerd[1579]: time="2025-07-07T00:53:52.450459100Z" level=info msg="StopPodSandbox for \"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210\"" Jul 7 00:53:52.525021 containerd[1579]: time="2025-07-07T00:53:52.524055392Z" level=info msg="StartContainer for \"46ea5dacb5fb692bcbe7f12a470dc241ec58d529535f77b2c2166ec743ca7f06\" returns successfully" Jul 7 00:53:52.687117 containerd[1579]: 2025-07-07 00:53:52.611 [INFO][5049] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" Jul 7 00:53:52.687117 containerd[1579]: 2025-07-07 00:53:52.611 [INFO][5049] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" iface="eth0" netns="/var/run/netns/cni-b68d5b66-cd39-8216-62ac-bca27ba18c30" Jul 7 00:53:52.687117 containerd[1579]: 2025-07-07 00:53:52.613 [INFO][5049] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" iface="eth0" netns="/var/run/netns/cni-b68d5b66-cd39-8216-62ac-bca27ba18c30" Jul 7 00:53:52.687117 containerd[1579]: 2025-07-07 00:53:52.614 [INFO][5049] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" iface="eth0" netns="/var/run/netns/cni-b68d5b66-cd39-8216-62ac-bca27ba18c30" Jul 7 00:53:52.687117 containerd[1579]: 2025-07-07 00:53:52.614 [INFO][5049] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" Jul 7 00:53:52.687117 containerd[1579]: 2025-07-07 00:53:52.614 [INFO][5049] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" Jul 7 00:53:52.687117 containerd[1579]: 2025-07-07 00:53:52.669 [INFO][5063] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" HandleID="k8s-pod-network.3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-csi--node--driver--zql2q-eth0" Jul 7 00:53:52.687117 containerd[1579]: 2025-07-07 00:53:52.670 [INFO][5063] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:53:52.687117 containerd[1579]: 2025-07-07 00:53:52.670 [INFO][5063] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:53:52.687117 containerd[1579]: 2025-07-07 00:53:52.679 [WARNING][5063] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" HandleID="k8s-pod-network.3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-csi--node--driver--zql2q-eth0" Jul 7 00:53:52.687117 containerd[1579]: 2025-07-07 00:53:52.679 [INFO][5063] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" HandleID="k8s-pod-network.3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-csi--node--driver--zql2q-eth0" Jul 7 00:53:52.687117 containerd[1579]: 2025-07-07 00:53:52.682 [INFO][5063] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:53:52.687117 containerd[1579]: 2025-07-07 00:53:52.684 [INFO][5049] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" Jul 7 00:53:52.688783 containerd[1579]: time="2025-07-07T00:53:52.687963498Z" level=info msg="TearDown network for sandbox \"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72\" successfully" Jul 7 00:53:52.688783 containerd[1579]: time="2025-07-07T00:53:52.688022138Z" level=info msg="StopPodSandbox for \"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72\" returns successfully" Jul 7 00:53:52.690445 containerd[1579]: time="2025-07-07T00:53:52.689440846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zql2q,Uid:c53a8470-3943-407f-8401-5976894cd214,Namespace:calico-system,Attempt:1,}" Jul 7 00:53:52.696085 systemd[1]: run-netns-cni\x2db68d5b66\x2dcd39\x2d8216\x2d62ac\x2dbca27ba18c30.mount: Deactivated successfully. Jul 7 00:53:52.744776 containerd[1579]: 2025-07-07 00:53:52.617 [INFO][5045] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" Jul 7 00:53:52.744776 containerd[1579]: 2025-07-07 00:53:52.617 [INFO][5045] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" iface="eth0" netns="/var/run/netns/cni-fb455bf8-1c2c-5fac-257d-820024bf0bd5" Jul 7 00:53:52.744776 containerd[1579]: 2025-07-07 00:53:52.618 [INFO][5045] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" iface="eth0" netns="/var/run/netns/cni-fb455bf8-1c2c-5fac-257d-820024bf0bd5" Jul 7 00:53:52.744776 containerd[1579]: 2025-07-07 00:53:52.618 [INFO][5045] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" iface="eth0" netns="/var/run/netns/cni-fb455bf8-1c2c-5fac-257d-820024bf0bd5" Jul 7 00:53:52.744776 containerd[1579]: 2025-07-07 00:53:52.618 [INFO][5045] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" Jul 7 00:53:52.744776 containerd[1579]: 2025-07-07 00:53:52.618 [INFO][5045] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" Jul 7 00:53:52.744776 containerd[1579]: 2025-07-07 00:53:52.676 [INFO][5065] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" HandleID="k8s-pod-network.be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--kube--controllers--745c5b8f57--jgbmg-eth0" Jul 7 00:53:52.744776 containerd[1579]: 2025-07-07 00:53:52.678 [INFO][5065] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:53:52.744776 containerd[1579]: 2025-07-07 00:53:52.683 [INFO][5065] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:53:52.744776 containerd[1579]: 2025-07-07 00:53:52.735 [WARNING][5065] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" HandleID="k8s-pod-network.be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--kube--controllers--745c5b8f57--jgbmg-eth0" Jul 7 00:53:52.744776 containerd[1579]: 2025-07-07 00:53:52.735 [INFO][5065] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" HandleID="k8s-pod-network.be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--kube--controllers--745c5b8f57--jgbmg-eth0" Jul 7 00:53:52.744776 containerd[1579]: 2025-07-07 00:53:52.738 [INFO][5065] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:53:52.744776 containerd[1579]: 2025-07-07 00:53:52.740 [INFO][5045] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" Jul 7 00:53:52.747897 containerd[1579]: time="2025-07-07T00:53:52.747841853Z" level=info msg="TearDown network for sandbox \"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210\" successfully" Jul 7 00:53:52.747897 containerd[1579]: time="2025-07-07T00:53:52.747898539Z" level=info msg="StopPodSandbox for \"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210\" returns successfully" Jul 7 00:53:52.750600 containerd[1579]: time="2025-07-07T00:53:52.750539594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-745c5b8f57-jgbmg,Uid:9df690de-c33d-44aa-bf8e-790d93d78321,Namespace:calico-system,Attempt:1,}" Jul 7 00:53:53.014004 systemd-networkd[1200]: calieba9439eff3: Link UP Jul 7 00:53:53.014222 systemd-networkd[1200]: calieba9439eff3: Gained carrier Jul 7 00:53:53.039644 containerd[1579]: 2025-07-07 00:53:52.908 [INFO][5077] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-csi--node--driver--zql2q-eth0 csi-node-driver- calico-system c53a8470-3943-407f-8401-5976894cd214 1004 0 2025-07-07 00:53:08 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-4-7-8dfaddf5bb.novalocal csi-node-driver-zql2q eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calieba9439eff3 [] [] }} ContainerID="8cdd9eda42f4de598affdd62ad8d7f550100d469b954474d8f6916e1db33391a" Namespace="calico-system" Pod="csi-node-driver-zql2q" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-csi--node--driver--zql2q-" Jul 7 00:53:53.039644 containerd[1579]: 2025-07-07 00:53:52.908 [INFO][5077] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8cdd9eda42f4de598affdd62ad8d7f550100d469b954474d8f6916e1db33391a" Namespace="calico-system" Pod="csi-node-driver-zql2q" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-csi--node--driver--zql2q-eth0" Jul 7 00:53:53.039644 containerd[1579]: 2025-07-07 00:53:52.952 [INFO][5102] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8cdd9eda42f4de598affdd62ad8d7f550100d469b954474d8f6916e1db33391a" HandleID="k8s-pod-network.8cdd9eda42f4de598affdd62ad8d7f550100d469b954474d8f6916e1db33391a" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-csi--node--driver--zql2q-eth0" Jul 7 00:53:53.039644 containerd[1579]: 2025-07-07 00:53:52.952 [INFO][5102] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8cdd9eda42f4de598affdd62ad8d7f550100d469b954474d8f6916e1db33391a" HandleID="k8s-pod-network.8cdd9eda42f4de598affdd62ad8d7f550100d469b954474d8f6916e1db33391a" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-csi--node--driver--zql2q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-7-8dfaddf5bb.novalocal", "pod":"csi-node-driver-zql2q", "timestamp":"2025-07-07 00:53:52.952378486 +0000 UTC"}, Hostname:"ci-4081-3-4-7-8dfaddf5bb.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:53:53.039644 containerd[1579]: 2025-07-07 00:53:52.953 [INFO][5102] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:53:53.039644 containerd[1579]: 2025-07-07 00:53:52.953 [INFO][5102] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:53:53.039644 containerd[1579]: 2025-07-07 00:53:52.953 [INFO][5102] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-7-8dfaddf5bb.novalocal' Jul 7 00:53:53.039644 containerd[1579]: 2025-07-07 00:53:52.966 [INFO][5102] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8cdd9eda42f4de598affdd62ad8d7f550100d469b954474d8f6916e1db33391a" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.039644 containerd[1579]: 2025-07-07 00:53:52.973 [INFO][5102] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.039644 containerd[1579]: 2025-07-07 00:53:52.979 [INFO][5102] ipam/ipam.go 511: Trying affinity for 192.168.116.128/26 host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.039644 containerd[1579]: 2025-07-07 00:53:52.982 [INFO][5102] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.128/26 host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.039644 containerd[1579]: 2025-07-07 00:53:52.986 [INFO][5102] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.128/26 host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.039644 containerd[1579]: 2025-07-07 00:53:52.986 [INFO][5102] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.116.128/26 handle="k8s-pod-network.8cdd9eda42f4de598affdd62ad8d7f550100d469b954474d8f6916e1db33391a" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.039644 containerd[1579]: 2025-07-07 00:53:52.988 [INFO][5102] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8cdd9eda42f4de598affdd62ad8d7f550100d469b954474d8f6916e1db33391a Jul 7 00:53:53.039644 containerd[1579]: 2025-07-07 00:53:52.997 [INFO][5102] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.116.128/26 handle="k8s-pod-network.8cdd9eda42f4de598affdd62ad8d7f550100d469b954474d8f6916e1db33391a" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.039644 containerd[1579]: 2025-07-07 00:53:53.006 [INFO][5102] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.116.133/26] block=192.168.116.128/26 handle="k8s-pod-network.8cdd9eda42f4de598affdd62ad8d7f550100d469b954474d8f6916e1db33391a" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.039644 containerd[1579]: 2025-07-07 00:53:53.007 [INFO][5102] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.133/26] handle="k8s-pod-network.8cdd9eda42f4de598affdd62ad8d7f550100d469b954474d8f6916e1db33391a" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.039644 containerd[1579]: 2025-07-07 00:53:53.007 [INFO][5102] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
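Note: the [5102] ipam entries above walk Calico's block-affinity assignment end to end: acquire the host-wide IPAM lock, look up this node's affine block (192.168.116.128/26), load it, claim the next free address (192.168.116.133/26 here), write the block back, and release the lock. Below is a minimal Go sketch of just the claim step, assuming a simplified in-memory block; the real ipam.go works against the shared datastore and handles contention, retries, and handle bookkeeping.

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    // block models one /26 affinity block: a CIDR plus which addresses
    // are already claimed, keyed by the IPAM handle that claimed them.
    type block struct {
    	cidr      netip.Prefix
    	allocated map[netip.Addr]string // addr -> handleID
    }

    // assign claims the first free address in the block for handleID,
    // mirroring the "Attempting to assign 1 addresses from block" step.
    func (b *block) assign(handleID string) (netip.Addr, error) {
    	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
    		if _, taken := b.allocated[a]; !taken {
    			b.allocated[a] = handleID
    			return a, nil
    		}
    	}
    	return netip.Addr{}, fmt.Errorf("block %s is full", b.cidr)
    }

    func main() {
    	b := &block{
    		cidr:      netip.MustParsePrefix("192.168.116.128/26"),
    		allocated: map[netip.Addr]string{},
    	}
    	// Pretend .128 through .132 were claimed earlier, as in this log.
    	for a, n := netip.MustParseAddr("192.168.116.128"), 0; n < 5; a, n = a.Next(), n+1 {
    		b.allocated[a] = fmt.Sprintf("earlier-handle-%d", n)
    	}
    	ip, _ := b.assign("k8s-pod-network.8cdd9eda42f4...")
    	fmt.Println(ip) // 192.168.116.133, matching the claim logged above
    }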
Jul 7 00:53:53.039644 containerd[1579]: 2025-07-07 00:53:53.007 [INFO][5102] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.133/26] IPv6=[] ContainerID="8cdd9eda42f4de598affdd62ad8d7f550100d469b954474d8f6916e1db33391a" HandleID="k8s-pod-network.8cdd9eda42f4de598affdd62ad8d7f550100d469b954474d8f6916e1db33391a" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-csi--node--driver--zql2q-eth0" Jul 7 00:53:53.040968 containerd[1579]: 2025-07-07 00:53:53.011 [INFO][5077] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8cdd9eda42f4de598affdd62ad8d7f550100d469b954474d8f6916e1db33391a" Namespace="calico-system" Pod="csi-node-driver-zql2q" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-csi--node--driver--zql2q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-csi--node--driver--zql2q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c53a8470-3943-407f-8401-5976894cd214", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-8dfaddf5bb.novalocal", ContainerID:"", Pod:"csi-node-driver-zql2q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.116.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calieba9439eff3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:53:53.040968 containerd[1579]: 2025-07-07 00:53:53.011 [INFO][5077] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.133/32] ContainerID="8cdd9eda42f4de598affdd62ad8d7f550100d469b954474d8f6916e1db33391a" Namespace="calico-system" Pod="csi-node-driver-zql2q" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-csi--node--driver--zql2q-eth0" Jul 7 00:53:53.040968 containerd[1579]: 2025-07-07 00:53:53.011 [INFO][5077] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieba9439eff3 ContainerID="8cdd9eda42f4de598affdd62ad8d7f550100d469b954474d8f6916e1db33391a" Namespace="calico-system" Pod="csi-node-driver-zql2q" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-csi--node--driver--zql2q-eth0" Jul 7 00:53:53.040968 containerd[1579]: 2025-07-07 00:53:53.015 [INFO][5077] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8cdd9eda42f4de598affdd62ad8d7f550100d469b954474d8f6916e1db33391a" Namespace="calico-system" Pod="csi-node-driver-zql2q" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-csi--node--driver--zql2q-eth0" Jul 7 00:53:53.040968 containerd[1579]: 2025-07-07 00:53:53.017 [INFO][5077] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="8cdd9eda42f4de598affdd62ad8d7f550100d469b954474d8f6916e1db33391a" Namespace="calico-system" Pod="csi-node-driver-zql2q" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-csi--node--driver--zql2q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-csi--node--driver--zql2q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c53a8470-3943-407f-8401-5976894cd214", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-8dfaddf5bb.novalocal", ContainerID:"8cdd9eda42f4de598affdd62ad8d7f550100d469b954474d8f6916e1db33391a", Pod:"csi-node-driver-zql2q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.116.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calieba9439eff3", MAC:"6e:15:37:7c:c9:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:53:53.040968 containerd[1579]: 2025-07-07 00:53:53.035 [INFO][5077] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8cdd9eda42f4de598affdd62ad8d7f550100d469b954474d8f6916e1db33391a" Namespace="calico-system" Pod="csi-node-driver-zql2q" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-csi--node--driver--zql2q-eth0" Jul 7 00:53:53.077473 containerd[1579]: time="2025-07-07T00:53:53.076235204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:53:53.077473 containerd[1579]: time="2025-07-07T00:53:53.076497416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:53:53.077473 containerd[1579]: time="2025-07-07T00:53:53.077253598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:53:53.079064 containerd[1579]: time="2025-07-07T00:53:53.078822257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:53:53.150479 systemd-networkd[1200]: cali5c59b96cf56: Link UP Jul 7 00:53:53.152829 systemd-networkd[1200]: cali5c59b96cf56: Gained carrier Jul 7 00:53:53.201778 containerd[1579]: 2025-07-07 00:53:52.910 [INFO][5086] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--kube--controllers--745c5b8f57--jgbmg-eth0 calico-kube-controllers-745c5b8f57- calico-system 9df690de-c33d-44aa-bf8e-790d93d78321 1005 0 2025-07-07 00:53:09 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:745c5b8f57 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-4-7-8dfaddf5bb.novalocal calico-kube-controllers-745c5b8f57-jgbmg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5c59b96cf56 [] [] }} ContainerID="30835ab0e4213ff64c1334809864f6af44e8af18d51ae8e9cfe2e77055fa3564" Namespace="calico-system" Pod="calico-kube-controllers-745c5b8f57-jgbmg" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--kube--controllers--745c5b8f57--jgbmg-" Jul 7 00:53:53.201778 containerd[1579]: 2025-07-07 00:53:52.911 [INFO][5086] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="30835ab0e4213ff64c1334809864f6af44e8af18d51ae8e9cfe2e77055fa3564" Namespace="calico-system" Pod="calico-kube-controllers-745c5b8f57-jgbmg" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--kube--controllers--745c5b8f57--jgbmg-eth0" Jul 7 00:53:53.201778 containerd[1579]: 2025-07-07 00:53:52.966 [INFO][5107] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="30835ab0e4213ff64c1334809864f6af44e8af18d51ae8e9cfe2e77055fa3564" HandleID="k8s-pod-network.30835ab0e4213ff64c1334809864f6af44e8af18d51ae8e9cfe2e77055fa3564" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--kube--controllers--745c5b8f57--jgbmg-eth0" Jul 7 00:53:53.201778 containerd[1579]: 2025-07-07 00:53:52.966 [INFO][5107] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="30835ab0e4213ff64c1334809864f6af44e8af18d51ae8e9cfe2e77055fa3564" HandleID="k8s-pod-network.30835ab0e4213ff64c1334809864f6af44e8af18d51ae8e9cfe2e77055fa3564" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--kube--controllers--745c5b8f57--jgbmg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f610), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-7-8dfaddf5bb.novalocal", "pod":"calico-kube-controllers-745c5b8f57-jgbmg", "timestamp":"2025-07-07 00:53:52.966238992 +0000 UTC"}, Hostname:"ci-4081-3-4-7-8dfaddf5bb.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:53:53.201778 containerd[1579]: 2025-07-07 00:53:52.966 [INFO][5107] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:53:53.201778 containerd[1579]: 2025-07-07 00:53:53.007 [INFO][5107] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
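Note: every assignment and release in this log is bracketed by "About to acquire host-wide IPAM lock" / "Acquired host-wide IPAM lock" / "Released host-wide IPAM lock", so concurrent CNI invocations on the node (here [5102] and [5107]) serialize their block updates. A plausible, Linux-only shape for that bracket is sketched below, assuming a flock-style lock file; the path and helper name are illustrative, not Calico's actual implementation.

    package main

    import (
    	"fmt"
    	"os"
    	"syscall"
    )

    // withHostWideLock serializes IPAM mutations across all CNI plugin
    // invocations on a node by holding an exclusive flock on a shared
    // file while fn runs. The lock file path here is an assumption.
    func withHostWideLock(path string, fn func() error) error {
    	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	fmt.Println("About to acquire host-wide IPAM lock.")
    	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
    		return err
    	}
    	fmt.Println("Acquired host-wide IPAM lock.")
    	defer func() {
    		syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
    		fmt.Println("Released host-wide IPAM lock.")
    	}()
    	return fn()
    }

    func main() {
    	_ = withHostWideLock("/tmp/ipam.lock", func() error {
    		// assign or release addresses while holding the lock
    		return nil
    	})
    }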
Jul 7 00:53:53.201778 containerd[1579]: 2025-07-07 00:53:53.007 [INFO][5107] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-7-8dfaddf5bb.novalocal' Jul 7 00:53:53.201778 containerd[1579]: 2025-07-07 00:53:53.067 [INFO][5107] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.30835ab0e4213ff64c1334809864f6af44e8af18d51ae8e9cfe2e77055fa3564" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.201778 containerd[1579]: 2025-07-07 00:53:53.075 [INFO][5107] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.201778 containerd[1579]: 2025-07-07 00:53:53.084 [INFO][5107] ipam/ipam.go 511: Trying affinity for 192.168.116.128/26 host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.201778 containerd[1579]: 2025-07-07 00:53:53.089 [INFO][5107] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.128/26 host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.201778 containerd[1579]: 2025-07-07 00:53:53.099 [INFO][5107] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.128/26 host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.201778 containerd[1579]: 2025-07-07 00:53:53.101 [INFO][5107] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.116.128/26 handle="k8s-pod-network.30835ab0e4213ff64c1334809864f6af44e8af18d51ae8e9cfe2e77055fa3564" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.201778 containerd[1579]: 2025-07-07 00:53:53.106 [INFO][5107] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.30835ab0e4213ff64c1334809864f6af44e8af18d51ae8e9cfe2e77055fa3564 Jul 7 00:53:53.201778 containerd[1579]: 2025-07-07 00:53:53.115 [INFO][5107] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.116.128/26 handle="k8s-pod-network.30835ab0e4213ff64c1334809864f6af44e8af18d51ae8e9cfe2e77055fa3564" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.201778 containerd[1579]: 2025-07-07 00:53:53.131 [INFO][5107] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.116.134/26] block=192.168.116.128/26 handle="k8s-pod-network.30835ab0e4213ff64c1334809864f6af44e8af18d51ae8e9cfe2e77055fa3564" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.201778 containerd[1579]: 2025-07-07 00:53:53.131 [INFO][5107] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.134/26] handle="k8s-pod-network.30835ab0e4213ff64c1334809864f6af44e8af18d51ae8e9cfe2e77055fa3564" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.201778 containerd[1579]: 2025-07-07 00:53:53.131 [INFO][5107] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
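Note: both pods land in the same affine /26, which bounds how many endpoints this block can serve. A quick stdlib check confirms the block holds 64 addresses and that the two addresses just claimed (.133 and .134) fall inside it:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	block := netip.MustParsePrefix("192.168.116.128/26")
    	// A /26 spans 2^(32-26) = 64 addresses, so one affine block
    	// can cover at most 64 workload endpoints on this node.
    	fmt.Println(1 << (32 - block.Bits())) // 64

    	// Both addresses claimed in the entries above are in the block.
    	for _, s := range []string{"192.168.116.133", "192.168.116.134"} {
    		fmt.Println(s, block.Contains(netip.MustParseAddr(s))) // true
    	}
    }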
Jul 7 00:53:53.201778 containerd[1579]: 2025-07-07 00:53:53.131 [INFO][5107] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.134/26] IPv6=[] ContainerID="30835ab0e4213ff64c1334809864f6af44e8af18d51ae8e9cfe2e77055fa3564" HandleID="k8s-pod-network.30835ab0e4213ff64c1334809864f6af44e8af18d51ae8e9cfe2e77055fa3564" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--kube--controllers--745c5b8f57--jgbmg-eth0" Jul 7 00:53:53.206668 containerd[1579]: 2025-07-07 00:53:53.136 [INFO][5086] cni-plugin/k8s.go 418: Populated endpoint ContainerID="30835ab0e4213ff64c1334809864f6af44e8af18d51ae8e9cfe2e77055fa3564" Namespace="calico-system" Pod="calico-kube-controllers-745c5b8f57-jgbmg" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--kube--controllers--745c5b8f57--jgbmg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--kube--controllers--745c5b8f57--jgbmg-eth0", GenerateName:"calico-kube-controllers-745c5b8f57-", Namespace:"calico-system", SelfLink:"", UID:"9df690de-c33d-44aa-bf8e-790d93d78321", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 53, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"745c5b8f57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-8dfaddf5bb.novalocal", ContainerID:"", Pod:"calico-kube-controllers-745c5b8f57-jgbmg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.116.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5c59b96cf56", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:53:53.206668 containerd[1579]: 2025-07-07 00:53:53.141 [INFO][5086] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.134/32] ContainerID="30835ab0e4213ff64c1334809864f6af44e8af18d51ae8e9cfe2e77055fa3564" Namespace="calico-system" Pod="calico-kube-controllers-745c5b8f57-jgbmg" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--kube--controllers--745c5b8f57--jgbmg-eth0" Jul 7 00:53:53.206668 containerd[1579]: 2025-07-07 00:53:53.142 [INFO][5086] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c59b96cf56 ContainerID="30835ab0e4213ff64c1334809864f6af44e8af18d51ae8e9cfe2e77055fa3564" Namespace="calico-system" Pod="calico-kube-controllers-745c5b8f57-jgbmg" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--kube--controllers--745c5b8f57--jgbmg-eth0" Jul 7 00:53:53.206668 containerd[1579]: 2025-07-07 00:53:53.147 [INFO][5086] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="30835ab0e4213ff64c1334809864f6af44e8af18d51ae8e9cfe2e77055fa3564" Namespace="calico-system" Pod="calico-kube-controllers-745c5b8f57-jgbmg" 
WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--kube--controllers--745c5b8f57--jgbmg-eth0" Jul 7 00:53:53.206668 containerd[1579]: 2025-07-07 00:53:53.149 [INFO][5086] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="30835ab0e4213ff64c1334809864f6af44e8af18d51ae8e9cfe2e77055fa3564" Namespace="calico-system" Pod="calico-kube-controllers-745c5b8f57-jgbmg" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--kube--controllers--745c5b8f57--jgbmg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--kube--controllers--745c5b8f57--jgbmg-eth0", GenerateName:"calico-kube-controllers-745c5b8f57-", Namespace:"calico-system", SelfLink:"", UID:"9df690de-c33d-44aa-bf8e-790d93d78321", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 53, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"745c5b8f57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-8dfaddf5bb.novalocal", ContainerID:"30835ab0e4213ff64c1334809864f6af44e8af18d51ae8e9cfe2e77055fa3564", Pod:"calico-kube-controllers-745c5b8f57-jgbmg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.116.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5c59b96cf56", MAC:"aa:14:d5:e8:7d:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:53:53.206668 containerd[1579]: 2025-07-07 00:53:53.190 [INFO][5086] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="30835ab0e4213ff64c1334809864f6af44e8af18d51ae8e9cfe2e77055fa3564" Namespace="calico-system" Pod="calico-kube-controllers-745c5b8f57-jgbmg" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--kube--controllers--745c5b8f57--jgbmg-eth0" Jul 7 00:53:53.204158 systemd[1]: run-netns-cni\x2dfb455bf8\x2d1c2c\x2d5fac\x2d257d\x2d820024bf0bd5.mount: Deactivated successfully. Jul 7 00:53:53.256593 containerd[1579]: time="2025-07-07T00:53:53.253622983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zql2q,Uid:c53a8470-3943-407f-8401-5976894cd214,Namespace:calico-system,Attempt:1,} returns sandbox id \"8cdd9eda42f4de598affdd62ad8d7f550100d469b954474d8f6916e1db33391a\"" Jul 7 00:53:53.300472 containerd[1579]: time="2025-07-07T00:53:53.299056467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:53:53.300472 containerd[1579]: time="2025-07-07T00:53:53.299138982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:53:53.301381 containerd[1579]: time="2025-07-07T00:53:53.300672114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:53:53.307402 containerd[1579]: time="2025-07-07T00:53:53.302839027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:53:53.418814 containerd[1579]: time="2025-07-07T00:53:53.417792144Z" level=info msg="StopPodSandbox for \"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c\"" Jul 7 00:53:53.418814 containerd[1579]: time="2025-07-07T00:53:53.418390738Z" level=info msg="StopPodSandbox for \"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0\"" Jul 7 00:53:53.454444 containerd[1579]: time="2025-07-07T00:53:53.454386402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-745c5b8f57-jgbmg,Uid:9df690de-c33d-44aa-bf8e-790d93d78321,Namespace:calico-system,Attempt:1,} returns sandbox id \"30835ab0e4213ff64c1334809864f6af44e8af18d51ae8e9cfe2e77055fa3564\"" Jul 7 00:53:53.564783 containerd[1579]: 2025-07-07 00:53:53.508 [INFO][5236] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" Jul 7 00:53:53.564783 containerd[1579]: 2025-07-07 00:53:53.509 [INFO][5236] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" iface="eth0" netns="/var/run/netns/cni-b55f0cdc-cc94-c8cf-a011-af3d9338d4d5" Jul 7 00:53:53.564783 containerd[1579]: 2025-07-07 00:53:53.509 [INFO][5236] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" iface="eth0" netns="/var/run/netns/cni-b55f0cdc-cc94-c8cf-a011-af3d9338d4d5" Jul 7 00:53:53.564783 containerd[1579]: 2025-07-07 00:53:53.510 [INFO][5236] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" iface="eth0" netns="/var/run/netns/cni-b55f0cdc-cc94-c8cf-a011-af3d9338d4d5" Jul 7 00:53:53.564783 containerd[1579]: 2025-07-07 00:53:53.510 [INFO][5236] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" Jul 7 00:53:53.564783 containerd[1579]: 2025-07-07 00:53:53.510 [INFO][5236] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" Jul 7 00:53:53.564783 containerd[1579]: 2025-07-07 00:53:53.547 [INFO][5247] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" HandleID="k8s-pod-network.6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--ncwdh-eth0" Jul 7 00:53:53.564783 containerd[1579]: 2025-07-07 00:53:53.547 [INFO][5247] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:53:53.564783 containerd[1579]: 2025-07-07 00:53:53.547 [INFO][5247] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:53:53.564783 containerd[1579]: 2025-07-07 00:53:53.558 [WARNING][5247] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" HandleID="k8s-pod-network.6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--ncwdh-eth0" Jul 7 00:53:53.564783 containerd[1579]: 2025-07-07 00:53:53.558 [INFO][5247] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" HandleID="k8s-pod-network.6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--ncwdh-eth0" Jul 7 00:53:53.564783 containerd[1579]: 2025-07-07 00:53:53.561 [INFO][5247] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:53:53.564783 containerd[1579]: 2025-07-07 00:53:53.562 [INFO][5236] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" Jul 7 00:53:53.567644 containerd[1579]: time="2025-07-07T00:53:53.567595850Z" level=info msg="TearDown network for sandbox \"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c\" successfully" Jul 7 00:53:53.567742 containerd[1579]: time="2025-07-07T00:53:53.567719664Z" level=info msg="StopPodSandbox for \"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c\" returns successfully" Jul 7 00:53:53.569589 systemd[1]: run-netns-cni\x2db55f0cdc\x2dcc94\x2dc8cf\x2da011\x2daf3d9338d4d5.mount: Deactivated successfully. Jul 7 00:53:53.573813 containerd[1579]: time="2025-07-07T00:53:53.573412514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ncwdh,Uid:36a72e2c-f519-4613-b65a-5c98b45d54b9,Namespace:kube-system,Attempt:1,}" Jul 7 00:53:53.601640 containerd[1579]: 2025-07-07 00:53:53.521 [INFO][5226] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" Jul 7 00:53:53.601640 containerd[1579]: 2025-07-07 00:53:53.522 [INFO][5226] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" iface="eth0" netns="/var/run/netns/cni-26064a2e-90d2-0973-200e-bdeefae2a92d" Jul 7 00:53:53.601640 containerd[1579]: 2025-07-07 00:53:53.522 [INFO][5226] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" iface="eth0" netns="/var/run/netns/cni-26064a2e-90d2-0973-200e-bdeefae2a92d" Jul 7 00:53:53.601640 containerd[1579]: 2025-07-07 00:53:53.522 [INFO][5226] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" iface="eth0" netns="/var/run/netns/cni-26064a2e-90d2-0973-200e-bdeefae2a92d" Jul 7 00:53:53.601640 containerd[1579]: 2025-07-07 00:53:53.522 [INFO][5226] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" Jul 7 00:53:53.601640 containerd[1579]: 2025-07-07 00:53:53.522 [INFO][5226] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" Jul 7 00:53:53.601640 containerd[1579]: 2025-07-07 00:53:53.569 [INFO][5252] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" HandleID="k8s-pod-network.203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--92wpl-eth0" Jul 7 00:53:53.601640 containerd[1579]: 2025-07-07 00:53:53.571 [INFO][5252] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:53:53.601640 containerd[1579]: 2025-07-07 00:53:53.571 [INFO][5252] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:53:53.601640 containerd[1579]: 2025-07-07 00:53:53.584 [WARNING][5252] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" HandleID="k8s-pod-network.203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--92wpl-eth0" Jul 7 00:53:53.601640 containerd[1579]: 2025-07-07 00:53:53.585 [INFO][5252] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" HandleID="k8s-pod-network.203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--92wpl-eth0" Jul 7 00:53:53.601640 containerd[1579]: 2025-07-07 00:53:53.588 [INFO][5252] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:53:53.601640 containerd[1579]: 2025-07-07 00:53:53.594 [INFO][5226] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" Jul 7 00:53:53.602177 containerd[1579]: time="2025-07-07T00:53:53.601812399Z" level=info msg="TearDown network for sandbox \"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0\" successfully" Jul 7 00:53:53.602177 containerd[1579]: time="2025-07-07T00:53:53.601859228Z" level=info msg="StopPodSandbox for \"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0\" returns successfully" Jul 7 00:53:53.603480 containerd[1579]: time="2025-07-07T00:53:53.603263067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-92wpl,Uid:a5b425c3-bad4-4558-89be-6136a807f762,Namespace:kube-system,Attempt:1,}" Jul 7 00:53:53.762757 systemd-networkd[1200]: calib8716f21191: Link UP Jul 7 00:53:53.763418 systemd-networkd[1200]: calib8716f21191: Gained carrier Jul 7 00:53:53.789875 containerd[1579]: 2025-07-07 00:53:53.645 [INFO][5262] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--ncwdh-eth0 coredns-7c65d6cfc9- kube-system 36a72e2c-f519-4613-b65a-5c98b45d54b9 1018 0 2025-07-07 00:52:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-4-7-8dfaddf5bb.novalocal coredns-7c65d6cfc9-ncwdh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib8716f21191 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d85f4ec72ba26ec3674dc719efa426bbd4d1b1cd17a9efd137963e526559413b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ncwdh" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--ncwdh-" Jul 7 00:53:53.789875 containerd[1579]: 2025-07-07 00:53:53.646 [INFO][5262] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d85f4ec72ba26ec3674dc719efa426bbd4d1b1cd17a9efd137963e526559413b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ncwdh" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--ncwdh-eth0" Jul 7 00:53:53.789875 containerd[1579]: 2025-07-07 00:53:53.697 [INFO][5284] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d85f4ec72ba26ec3674dc719efa426bbd4d1b1cd17a9efd137963e526559413b" HandleID="k8s-pod-network.d85f4ec72ba26ec3674dc719efa426bbd4d1b1cd17a9efd137963e526559413b" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--ncwdh-eth0" Jul 7 00:53:53.789875 containerd[1579]: 2025-07-07 00:53:53.697 [INFO][5284] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d85f4ec72ba26ec3674dc719efa426bbd4d1b1cd17a9efd137963e526559413b" HandleID="k8s-pod-network.d85f4ec72ba26ec3674dc719efa426bbd4d1b1cd17a9efd137963e526559413b" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--ncwdh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5820), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-4-7-8dfaddf5bb.novalocal", "pod":"coredns-7c65d6cfc9-ncwdh", "timestamp":"2025-07-07 00:53:53.697310374 +0000 UTC"}, Hostname:"ci-4081-3-4-7-8dfaddf5bb.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:53:53.789875 
containerd[1579]: 2025-07-07 00:53:53.697 [INFO][5284] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:53:53.789875 containerd[1579]: 2025-07-07 00:53:53.697 [INFO][5284] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:53:53.789875 containerd[1579]: 2025-07-07 00:53:53.698 [INFO][5284] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-7-8dfaddf5bb.novalocal' Jul 7 00:53:53.789875 containerd[1579]: 2025-07-07 00:53:53.710 [INFO][5284] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d85f4ec72ba26ec3674dc719efa426bbd4d1b1cd17a9efd137963e526559413b" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.789875 containerd[1579]: 2025-07-07 00:53:53.717 [INFO][5284] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.789875 containerd[1579]: 2025-07-07 00:53:53.722 [INFO][5284] ipam/ipam.go 511: Trying affinity for 192.168.116.128/26 host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.789875 containerd[1579]: 2025-07-07 00:53:53.724 [INFO][5284] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.128/26 host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.789875 containerd[1579]: 2025-07-07 00:53:53.728 [INFO][5284] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.128/26 host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.789875 containerd[1579]: 2025-07-07 00:53:53.729 [INFO][5284] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.116.128/26 handle="k8s-pod-network.d85f4ec72ba26ec3674dc719efa426bbd4d1b1cd17a9efd137963e526559413b" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.789875 containerd[1579]: 2025-07-07 00:53:53.732 [INFO][5284] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d85f4ec72ba26ec3674dc719efa426bbd4d1b1cd17a9efd137963e526559413b Jul 7 00:53:53.789875 containerd[1579]: 2025-07-07 00:53:53.739 [INFO][5284] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.116.128/26 handle="k8s-pod-network.d85f4ec72ba26ec3674dc719efa426bbd4d1b1cd17a9efd137963e526559413b" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.789875 containerd[1579]: 2025-07-07 00:53:53.752 [INFO][5284] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.116.135/26] block=192.168.116.128/26 handle="k8s-pod-network.d85f4ec72ba26ec3674dc719efa426bbd4d1b1cd17a9efd137963e526559413b" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.789875 containerd[1579]: 2025-07-07 00:53:53.752 [INFO][5284] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.135/26] handle="k8s-pod-network.d85f4ec72ba26ec3674dc719efa426bbd4d1b1cd17a9efd137963e526559413b" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.789875 containerd[1579]: 2025-07-07 00:53:53.752 [INFO][5284] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
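Every address handed out on this node in this log — .134 for calico-kube-controllers, then .135 and .136 for the two coredns pods — comes out of the same affine block 192.168.116.128/26, which spans 192.168.116.128 through .191 (64 addresses). A quick containment check with Go's net/netip:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The block this node holds an affinity for, per the ipam.go lines.
	block := netip.MustParsePrefix("192.168.116.128/26")

	// Addresses assigned in this log: kube-controllers, then the two
	// coredns pods.
	for _, s := range []string{
		"192.168.116.134", "192.168.116.135", "192.168.116.136",
	} {
		ip := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(ip))
	}
	// A /26 holds 2^(32-26) = 64 addresses, here .128 through .191.
}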
Jul 7 00:53:53.789875 containerd[1579]: 2025-07-07 00:53:53.752 [INFO][5284] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.135/26] IPv6=[] ContainerID="d85f4ec72ba26ec3674dc719efa426bbd4d1b1cd17a9efd137963e526559413b" HandleID="k8s-pod-network.d85f4ec72ba26ec3674dc719efa426bbd4d1b1cd17a9efd137963e526559413b" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--ncwdh-eth0" Jul 7 00:53:53.790688 containerd[1579]: 2025-07-07 00:53:53.755 [INFO][5262] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d85f4ec72ba26ec3674dc719efa426bbd4d1b1cd17a9efd137963e526559413b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ncwdh" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--ncwdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--ncwdh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"36a72e2c-f519-4613-b65a-5c98b45d54b9", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-8dfaddf5bb.novalocal", ContainerID:"", Pod:"coredns-7c65d6cfc9-ncwdh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib8716f21191", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:53:53.790688 containerd[1579]: 2025-07-07 00:53:53.755 [INFO][5262] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.135/32] ContainerID="d85f4ec72ba26ec3674dc719efa426bbd4d1b1cd17a9efd137963e526559413b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ncwdh" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--ncwdh-eth0" Jul 7 00:53:53.790688 containerd[1579]: 2025-07-07 00:53:53.755 [INFO][5262] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib8716f21191 ContainerID="d85f4ec72ba26ec3674dc719efa426bbd4d1b1cd17a9efd137963e526559413b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ncwdh" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--ncwdh-eth0" Jul 7 00:53:53.790688 containerd[1579]: 2025-07-07 00:53:53.762 [INFO][5262] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="d85f4ec72ba26ec3674dc719efa426bbd4d1b1cd17a9efd137963e526559413b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ncwdh" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--ncwdh-eth0" Jul 7 00:53:53.790688 containerd[1579]: 2025-07-07 00:53:53.770 [INFO][5262] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d85f4ec72ba26ec3674dc719efa426bbd4d1b1cd17a9efd137963e526559413b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ncwdh" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--ncwdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--ncwdh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"36a72e2c-f519-4613-b65a-5c98b45d54b9", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-8dfaddf5bb.novalocal", ContainerID:"d85f4ec72ba26ec3674dc719efa426bbd4d1b1cd17a9efd137963e526559413b", Pod:"coredns-7c65d6cfc9-ncwdh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib8716f21191", MAC:"6a:9d:d0:13:65:bf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:53:53.790688 containerd[1579]: 2025-07-07 00:53:53.784 [INFO][5262] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d85f4ec72ba26ec3674dc719efa426bbd4d1b1cd17a9efd137963e526559413b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ncwdh" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--ncwdh-eth0" Jul 7 00:53:53.832952 containerd[1579]: time="2025-07-07T00:53:53.831046318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:53:53.832952 containerd[1579]: time="2025-07-07T00:53:53.831129463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:53:53.832952 containerd[1579]: time="2025-07-07T00:53:53.831322186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:53:53.833469 containerd[1579]: time="2025-07-07T00:53:53.833171172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:53:53.866531 systemd-networkd[1200]: cali5e1385c034e: Link UP Jul 7 00:53:53.869414 systemd-networkd[1200]: cali5e1385c034e: Gained carrier Jul 7 00:53:53.901425 containerd[1579]: 2025-07-07 00:53:53.676 [INFO][5271] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--92wpl-eth0 coredns-7c65d6cfc9- kube-system a5b425c3-bad4-4558-89be-6136a807f762 1019 0 2025-07-07 00:52:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-4-7-8dfaddf5bb.novalocal coredns-7c65d6cfc9-92wpl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5e1385c034e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4a544d8e7d6c4924ce94c34bb2bd871ae1e32303db0fcb4930a3e9d5f1ae5f5d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-92wpl" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--92wpl-" Jul 7 00:53:53.901425 containerd[1579]: 2025-07-07 00:53:53.679 [INFO][5271] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4a544d8e7d6c4924ce94c34bb2bd871ae1e32303db0fcb4930a3e9d5f1ae5f5d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-92wpl" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--92wpl-eth0" Jul 7 00:53:53.901425 containerd[1579]: 2025-07-07 00:53:53.748 [INFO][5293] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a544d8e7d6c4924ce94c34bb2bd871ae1e32303db0fcb4930a3e9d5f1ae5f5d" HandleID="k8s-pod-network.4a544d8e7d6c4924ce94c34bb2bd871ae1e32303db0fcb4930a3e9d5f1ae5f5d" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--92wpl-eth0" Jul 7 00:53:53.901425 containerd[1579]: 2025-07-07 00:53:53.748 [INFO][5293] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4a544d8e7d6c4924ce94c34bb2bd871ae1e32303db0fcb4930a3e9d5f1ae5f5d" HandleID="k8s-pod-network.4a544d8e7d6c4924ce94c34bb2bd871ae1e32303db0fcb4930a3e9d5f1ae5f5d" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--92wpl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003327b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-4-7-8dfaddf5bb.novalocal", "pod":"coredns-7c65d6cfc9-92wpl", "timestamp":"2025-07-07 00:53:53.748691901 +0000 UTC"}, Hostname:"ci-4081-3-4-7-8dfaddf5bb.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:53:53.901425 containerd[1579]: 2025-07-07 00:53:53.748 [INFO][5293] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:53:53.901425 containerd[1579]: 2025-07-07 00:53:53.752 [INFO][5293] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
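A side note on reading the v3.WorkloadEndpoint dumps above: the port numbers are printed as Go hex literals. 0x35 is 53 for the dns and dns-tcp ports, and 0x23c1 is 9153, the standard coredns Prometheus metrics port:

package main

import "fmt"

func main() {
	// Ports as printed in the v3.WorkloadEndpoint dumps (Go hex literals).
	fmt.Println("dns / dns-tcp:", 0x35) // 53
	fmt.Println("metrics:", 0x23c1)     // 9153
}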
Jul 7 00:53:53.901425 containerd[1579]: 2025-07-07 00:53:53.754 [INFO][5293] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-7-8dfaddf5bb.novalocal' Jul 7 00:53:53.901425 containerd[1579]: 2025-07-07 00:53:53.811 [INFO][5293] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4a544d8e7d6c4924ce94c34bb2bd871ae1e32303db0fcb4930a3e9d5f1ae5f5d" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.901425 containerd[1579]: 2025-07-07 00:53:53.821 [INFO][5293] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.901425 containerd[1579]: 2025-07-07 00:53:53.827 [INFO][5293] ipam/ipam.go 511: Trying affinity for 192.168.116.128/26 host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.901425 containerd[1579]: 2025-07-07 00:53:53.829 [INFO][5293] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.128/26 host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.901425 containerd[1579]: 2025-07-07 00:53:53.833 [INFO][5293] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.128/26 host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.901425 containerd[1579]: 2025-07-07 00:53:53.833 [INFO][5293] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.116.128/26 handle="k8s-pod-network.4a544d8e7d6c4924ce94c34bb2bd871ae1e32303db0fcb4930a3e9d5f1ae5f5d" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.901425 containerd[1579]: 2025-07-07 00:53:53.836 [INFO][5293] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4a544d8e7d6c4924ce94c34bb2bd871ae1e32303db0fcb4930a3e9d5f1ae5f5d Jul 7 00:53:53.901425 containerd[1579]: 2025-07-07 00:53:53.842 [INFO][5293] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.116.128/26 handle="k8s-pod-network.4a544d8e7d6c4924ce94c34bb2bd871ae1e32303db0fcb4930a3e9d5f1ae5f5d" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.901425 containerd[1579]: 2025-07-07 00:53:53.853 [INFO][5293] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.116.136/26] block=192.168.116.128/26 handle="k8s-pod-network.4a544d8e7d6c4924ce94c34bb2bd871ae1e32303db0fcb4930a3e9d5f1ae5f5d" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.901425 containerd[1579]: 2025-07-07 00:53:53.853 [INFO][5293] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.136/26] handle="k8s-pod-network.4a544d8e7d6c4924ce94c34bb2bd871ae1e32303db0fcb4930a3e9d5f1ae5f5d" host="ci-4081-3-4-7-8dfaddf5bb.novalocal" Jul 7 00:53:53.901425 containerd[1579]: 2025-07-07 00:53:53.853 [INFO][5293] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:53:53.901425 containerd[1579]: 2025-07-07 00:53:53.854 [INFO][5293] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.116.136/26] IPv6=[] ContainerID="4a544d8e7d6c4924ce94c34bb2bd871ae1e32303db0fcb4930a3e9d5f1ae5f5d" HandleID="k8s-pod-network.4a544d8e7d6c4924ce94c34bb2bd871ae1e32303db0fcb4930a3e9d5f1ae5f5d" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--92wpl-eth0" Jul 7 00:53:53.902178 containerd[1579]: 2025-07-07 00:53:53.859 [INFO][5271] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a544d8e7d6c4924ce94c34bb2bd871ae1e32303db0fcb4930a3e9d5f1ae5f5d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-92wpl" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--92wpl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--92wpl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a5b425c3-bad4-4558-89be-6136a807f762", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-8dfaddf5bb.novalocal", ContainerID:"", Pod:"coredns-7c65d6cfc9-92wpl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5e1385c034e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:53:53.902178 containerd[1579]: 2025-07-07 00:53:53.860 [INFO][5271] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.136/32] ContainerID="4a544d8e7d6c4924ce94c34bb2bd871ae1e32303db0fcb4930a3e9d5f1ae5f5d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-92wpl" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--92wpl-eth0" Jul 7 00:53:53.902178 containerd[1579]: 2025-07-07 00:53:53.861 [INFO][5271] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5e1385c034e ContainerID="4a544d8e7d6c4924ce94c34bb2bd871ae1e32303db0fcb4930a3e9d5f1ae5f5d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-92wpl" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--92wpl-eth0" Jul 7 00:53:53.902178 containerd[1579]: 2025-07-07 00:53:53.868 [INFO][5271] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="4a544d8e7d6c4924ce94c34bb2bd871ae1e32303db0fcb4930a3e9d5f1ae5f5d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-92wpl" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--92wpl-eth0" Jul 7 00:53:53.902178 containerd[1579]: 2025-07-07 00:53:53.873 [INFO][5271] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4a544d8e7d6c4924ce94c34bb2bd871ae1e32303db0fcb4930a3e9d5f1ae5f5d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-92wpl" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--92wpl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--92wpl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a5b425c3-bad4-4558-89be-6136a807f762", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-8dfaddf5bb.novalocal", ContainerID:"4a544d8e7d6c4924ce94c34bb2bd871ae1e32303db0fcb4930a3e9d5f1ae5f5d", Pod:"coredns-7c65d6cfc9-92wpl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5e1385c034e", MAC:"aa:39:ac:d7:30:03", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:53:53.902178 containerd[1579]: 2025-07-07 00:53:53.896 [INFO][5271] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a544d8e7d6c4924ce94c34bb2bd871ae1e32303db0fcb4930a3e9d5f1ae5f5d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-92wpl" WorkloadEndpoint="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--92wpl-eth0" Jul 7 00:53:53.939486 containerd[1579]: time="2025-07-07T00:53:53.938317753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ncwdh,Uid:36a72e2c-f519-4613-b65a-5c98b45d54b9,Namespace:kube-system,Attempt:1,} returns sandbox id \"d85f4ec72ba26ec3674dc719efa426bbd4d1b1cd17a9efd137963e526559413b\"" Jul 7 00:53:53.943410 containerd[1579]: time="2025-07-07T00:53:53.943169223Z" level=info msg="CreateContainer within sandbox \"d85f4ec72ba26ec3674dc719efa426bbd4d1b1cd17a9efd137963e526559413b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:53:53.967255 containerd[1579]: time="2025-07-07T00:53:53.967120446Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:53:53.967626 containerd[1579]: time="2025-07-07T00:53:53.967400202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:53:53.967828 containerd[1579]: time="2025-07-07T00:53:53.967611349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:53:53.967968 containerd[1579]: time="2025-07-07T00:53:53.967788462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:53:53.977249 containerd[1579]: time="2025-07-07T00:53:53.977057125Z" level=info msg="CreateContainer within sandbox \"d85f4ec72ba26ec3674dc719efa426bbd4d1b1cd17a9efd137963e526559413b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c37875a8491b1da52b189711ac2829643ebdb633852712018782c4539ddb58a6\"" Jul 7 00:53:53.980991 containerd[1579]: time="2025-07-07T00:53:53.978906852Z" level=info msg="StartContainer for \"c37875a8491b1da52b189711ac2829643ebdb633852712018782c4539ddb58a6\"" Jul 7 00:53:54.068291 containerd[1579]: time="2025-07-07T00:53:54.068224933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-92wpl,Uid:a5b425c3-bad4-4558-89be-6136a807f762,Namespace:kube-system,Attempt:1,} returns sandbox id \"4a544d8e7d6c4924ce94c34bb2bd871ae1e32303db0fcb4930a3e9d5f1ae5f5d\"" Jul 7 00:53:54.075168 containerd[1579]: time="2025-07-07T00:53:54.075128179Z" level=info msg="CreateContainer within sandbox \"4a544d8e7d6c4924ce94c34bb2bd871ae1e32303db0fcb4930a3e9d5f1ae5f5d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:53:54.100479 containerd[1579]: time="2025-07-07T00:53:54.100301577Z" level=info msg="CreateContainer within sandbox \"4a544d8e7d6c4924ce94c34bb2bd871ae1e32303db0fcb4930a3e9d5f1ae5f5d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7614be319985ffc215366d3e79cec672a6efd7cbdadca528aeee9e10789a07e5\"" Jul 7 00:53:54.104638 containerd[1579]: time="2025-07-07T00:53:54.104492234Z" level=info msg="StartContainer for \"7614be319985ffc215366d3e79cec672a6efd7cbdadca528aeee9e10789a07e5\"" Jul 7 00:53:54.121740 containerd[1579]: time="2025-07-07T00:53:54.121688397Z" level=info msg="StartContainer for \"c37875a8491b1da52b189711ac2829643ebdb633852712018782c4539ddb58a6\" returns successfully" Jul 7 00:53:54.199822 systemd[1]: run-netns-cni\x2d26064a2e\x2d90d2\x2d0973\x2d200e\x2dbdeefae2a92d.mount: Deactivated successfully. 
Jul 7 00:53:54.299490 containerd[1579]: time="2025-07-07T00:53:54.297707606Z" level=info msg="StartContainer for \"7614be319985ffc215366d3e79cec672a6efd7cbdadca528aeee9e10789a07e5\" returns successfully" Jul 7 00:53:54.404428 systemd-networkd[1200]: cali5c59b96cf56: Gained IPv6LL Jul 7 00:53:54.468510 systemd-networkd[1200]: calieba9439eff3: Gained IPv6LL Jul 7 00:53:55.431870 kubelet[2793]: I0707 00:53:55.430798 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-92wpl" podStartSLOduration=64.430766847 podStartE2EDuration="1m4.430766847s" podCreationTimestamp="2025-07-07 00:52:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:53:55.428162903 +0000 UTC m=+69.238841664" watchObservedRunningTime="2025-07-07 00:53:55.430766847 +0000 UTC m=+69.241445589" Jul 7 00:53:55.431870 kubelet[2793]: I0707 00:53:55.430976 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-ncwdh" podStartSLOduration=64.430969268 podStartE2EDuration="1m4.430969268s" podCreationTimestamp="2025-07-07 00:52:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:53:54.210712644 +0000 UTC m=+68.021391385" watchObservedRunningTime="2025-07-07 00:53:55.430969268 +0000 UTC m=+69.241648009" Jul 7 00:53:55.685178 systemd-networkd[1200]: cali5e1385c034e: Gained IPv6LL Jul 7 00:53:55.813018 systemd-networkd[1200]: calib8716f21191: Gained IPv6LL Jul 7 00:53:56.307267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2154087137.mount: Deactivated successfully. Jul 7 00:53:57.079528 containerd[1579]: time="2025-07-07T00:53:57.079431344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:53:57.081765 containerd[1579]: time="2025-07-07T00:53:57.081475756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 7 00:53:57.083790 containerd[1579]: time="2025-07-07T00:53:57.083394141Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:53:57.086637 containerd[1579]: time="2025-07-07T00:53:57.086602882Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:53:57.087592 containerd[1579]: time="2025-07-07T00:53:57.087546805Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 5.031223142s" Jul 7 00:53:57.087688 containerd[1579]: time="2025-07-07T00:53:57.087669406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 7 00:53:57.090205 containerd[1579]: time="2025-07-07T00:53:57.090161620Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 00:53:57.091417 containerd[1579]: time="2025-07-07T00:53:57.091381703Z" level=info msg="CreateContainer within sandbox \"ecb41215854bc88ec125264286406da8f225feab15bb6a7bafc33ab8ae076130\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 00:53:57.115774 containerd[1579]: time="2025-07-07T00:53:57.115653028Z" level=info msg="CreateContainer within sandbox \"ecb41215854bc88ec125264286406da8f225feab15bb6a7bafc33ab8ae076130\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"9f8fdc04b2289e7a4c43773b71be993b00bc68dc0dd9514a35844d50a5928060\"" Jul 7 00:53:57.117403 containerd[1579]: time="2025-07-07T00:53:57.116488328Z" level=info msg="StartContainer for \"9f8fdc04b2289e7a4c43773b71be993b00bc68dc0dd9514a35844d50a5928060\"" Jul 7 00:53:57.238663 containerd[1579]: time="2025-07-07T00:53:57.237896963Z" level=info msg="StartContainer for \"9f8fdc04b2289e7a4c43773b71be993b00bc68dc0dd9514a35844d50a5928060\" returns successfully" Jul 7 00:54:02.225515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2013929476.mount: Deactivated successfully. Jul 7 00:54:02.265405 containerd[1579]: time="2025-07-07T00:54:02.265230091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:54:02.268228 containerd[1579]: time="2025-07-07T00:54:02.268123247Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 7 00:54:02.271404 containerd[1579]: time="2025-07-07T00:54:02.270306359Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:54:02.274417 containerd[1579]: time="2025-07-07T00:54:02.274376507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:54:02.277765 containerd[1579]: time="2025-07-07T00:54:02.277693329Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 5.187279945s" Jul 7 00:54:02.277846 containerd[1579]: time="2025-07-07T00:54:02.277790312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 7 00:54:02.282073 containerd[1579]: time="2025-07-07T00:54:02.281022685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 00:54:02.288376 containerd[1579]: time="2025-07-07T00:54:02.288310469Z" level=info msg="CreateContainer within sandbox \"a95f0364683b83002c6b6ff91cfa16d8fa920b1ddb667ffda142a5723a5b10f5\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 00:54:02.322462 containerd[1579]: time="2025-07-07T00:54:02.322397809Z" level=info msg="CreateContainer within sandbox \"a95f0364683b83002c6b6ff91cfa16d8fa920b1ddb667ffda142a5723a5b10f5\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id 
\"33a973b453e1d0519ae1353dae2bdc0116eb2212f66dcf26aa771676cbec2abd\"" Jul 7 00:54:02.325390 containerd[1579]: time="2025-07-07T00:54:02.324511299Z" level=info msg="StartContainer for \"33a973b453e1d0519ae1353dae2bdc0116eb2212f66dcf26aa771676cbec2abd\"" Jul 7 00:54:02.547042 containerd[1579]: time="2025-07-07T00:54:02.546144705Z" level=info msg="StartContainer for \"33a973b453e1d0519ae1353dae2bdc0116eb2212f66dcf26aa771676cbec2abd\" returns successfully" Jul 7 00:54:03.473966 kubelet[2793]: I0707 00:54:03.473575 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-ffzjz" podStartSLOduration=43.709490342 podStartE2EDuration="55.473509193s" podCreationTimestamp="2025-07-07 00:53:08 +0000 UTC" firstStartedPulling="2025-07-07 00:53:45.324900925 +0000 UTC m=+59.135579656" lastFinishedPulling="2025-07-07 00:53:57.088919766 +0000 UTC m=+70.899598507" observedRunningTime="2025-07-07 00:53:57.455680945 +0000 UTC m=+71.266359676" watchObservedRunningTime="2025-07-07 00:54:03.473509193 +0000 UTC m=+77.284187924" Jul 7 00:54:03.473966 kubelet[2793]: I0707 00:54:03.473709 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7496f4f948-dn6td" podStartSLOduration=4.275574897 podStartE2EDuration="21.473702275s" podCreationTimestamp="2025-07-07 00:53:42 +0000 UTC" firstStartedPulling="2025-07-07 00:53:45.082592348 +0000 UTC m=+58.893271079" lastFinishedPulling="2025-07-07 00:54:02.280719726 +0000 UTC m=+76.091398457" observedRunningTime="2025-07-07 00:54:03.471142687 +0000 UTC m=+77.281821428" watchObservedRunningTime="2025-07-07 00:54:03.473702275 +0000 UTC m=+77.284381006" Jul 7 00:54:05.547526 containerd[1579]: time="2025-07-07T00:54:05.547417369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:54:05.549590 containerd[1579]: time="2025-07-07T00:54:05.549526581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 7 00:54:05.552515 containerd[1579]: time="2025-07-07T00:54:05.552464231Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:54:05.557046 containerd[1579]: time="2025-07-07T00:54:05.556974865Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:54:05.559586 containerd[1579]: time="2025-07-07T00:54:05.559397336Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 3.27833143s" Jul 7 00:54:05.559586 containerd[1579]: time="2025-07-07T00:54:05.559464743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 7 00:54:05.564131 containerd[1579]: time="2025-07-07T00:54:05.563711742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 00:54:05.568138 containerd[1579]: time="2025-07-07T00:54:05.567544884Z" level=info 
msg="CreateContainer within sandbox \"8cdd9eda42f4de598affdd62ad8d7f550100d469b954474d8f6916e1db33391a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 00:54:05.618429 containerd[1579]: time="2025-07-07T00:54:05.617986491Z" level=info msg="CreateContainer within sandbox \"8cdd9eda42f4de598affdd62ad8d7f550100d469b954474d8f6916e1db33391a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f7907156d2e38ead5e11fef752ce29be0e949ce270d9e0c322d66d4cb3ee966a\"" Jul 7 00:54:05.620562 containerd[1579]: time="2025-07-07T00:54:05.619612317Z" level=info msg="StartContainer for \"f7907156d2e38ead5e11fef752ce29be0e949ce270d9e0c322d66d4cb3ee966a\"" Jul 7 00:54:05.777379 containerd[1579]: time="2025-07-07T00:54:05.775946912Z" level=info msg="StartContainer for \"f7907156d2e38ead5e11fef752ce29be0e949ce270d9e0c322d66d4cb3ee966a\" returns successfully" Jul 7 00:54:11.553029 containerd[1579]: time="2025-07-07T00:54:11.551987324Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:54:11.553781 containerd[1579]: time="2025-07-07T00:54:11.553715651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 7 00:54:11.556043 containerd[1579]: time="2025-07-07T00:54:11.556016433Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:54:11.562173 containerd[1579]: time="2025-07-07T00:54:11.562126279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:54:11.564831 containerd[1579]: time="2025-07-07T00:54:11.564790724Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 6.001032484s" Jul 7 00:54:11.564953 containerd[1579]: time="2025-07-07T00:54:11.564931709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 7 00:54:11.569622 containerd[1579]: time="2025-07-07T00:54:11.569597713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 00:54:11.604502 containerd[1579]: time="2025-07-07T00:54:11.604438037Z" level=info msg="CreateContainer within sandbox \"30835ab0e4213ff64c1334809864f6af44e8af18d51ae8e9cfe2e77055fa3564\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 00:54:11.664327 containerd[1579]: time="2025-07-07T00:54:11.664249066Z" level=info msg="CreateContainer within sandbox \"30835ab0e4213ff64c1334809864f6af44e8af18d51ae8e9cfe2e77055fa3564\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b512b139420bd54ca164be970607de3beb493a7548b2225444ae7b46daecf6ba\"" Jul 7 00:54:11.667745 containerd[1579]: time="2025-07-07T00:54:11.667695559Z" level=info msg="StartContainer for 
\"b512b139420bd54ca164be970607de3beb493a7548b2225444ae7b46daecf6ba\"" Jul 7 00:54:11.852781 containerd[1579]: time="2025-07-07T00:54:11.852613638Z" level=info msg="StartContainer for \"b512b139420bd54ca164be970607de3beb493a7548b2225444ae7b46daecf6ba\" returns successfully" Jul 7 00:54:12.633496 kubelet[2793]: I0707 00:54:12.632719 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-745c5b8f57-jgbmg" podStartSLOduration=45.522240569 podStartE2EDuration="1m3.632691784s" podCreationTimestamp="2025-07-07 00:53:09 +0000 UTC" firstStartedPulling="2025-07-07 00:53:53.457826509 +0000 UTC m=+67.268505240" lastFinishedPulling="2025-07-07 00:54:11.568277714 +0000 UTC m=+85.378956455" observedRunningTime="2025-07-07 00:54:12.555203956 +0000 UTC m=+86.365882688" watchObservedRunningTime="2025-07-07 00:54:12.632691784 +0000 UTC m=+86.443370515" Jul 7 00:54:14.849444 containerd[1579]: time="2025-07-07T00:54:14.848994056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:54:14.855385 containerd[1579]: time="2025-07-07T00:54:14.854323937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 7 00:54:14.856228 containerd[1579]: time="2025-07-07T00:54:14.856188439Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:54:14.866145 containerd[1579]: time="2025-07-07T00:54:14.866077062Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 3.295617479s" Jul 7 00:54:14.867480 containerd[1579]: time="2025-07-07T00:54:14.867425325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:54:14.870611 containerd[1579]: time="2025-07-07T00:54:14.870383601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 7 00:54:14.876670 containerd[1579]: time="2025-07-07T00:54:14.876612200Z" level=info msg="CreateContainer within sandbox \"8cdd9eda42f4de598affdd62ad8d7f550100d469b954474d8f6916e1db33391a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 00:54:14.925819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3445768870.mount: Deactivated successfully. 
Jul 7 00:54:14.928081 containerd[1579]: time="2025-07-07T00:54:14.928024554Z" level=info msg="CreateContainer within sandbox \"8cdd9eda42f4de598affdd62ad8d7f550100d469b954474d8f6916e1db33391a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d6f644e36fbec24ba3c3ba930dce4fa950292448fbf7a9c47f377135db0bb724\"" Jul 7 00:54:14.930769 containerd[1579]: time="2025-07-07T00:54:14.930731358Z" level=info msg="StartContainer for \"d6f644e36fbec24ba3c3ba930dce4fa950292448fbf7a9c47f377135db0bb724\"" Jul 7 00:54:14.991471 systemd[1]: run-containerd-runc-k8s.io-d6f644e36fbec24ba3c3ba930dce4fa950292448fbf7a9c47f377135db0bb724-runc.6PWpYL.mount: Deactivated successfully. Jul 7 00:54:15.091178 containerd[1579]: time="2025-07-07T00:54:15.091106464Z" level=info msg="StartContainer for \"d6f644e36fbec24ba3c3ba930dce4fa950292448fbf7a9c47f377135db0bb724\" returns successfully" Jul 7 00:54:15.592420 kubelet[2793]: I0707 00:54:15.590853 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zql2q" podStartSLOduration=45.980095814 podStartE2EDuration="1m7.590830955s" podCreationTimestamp="2025-07-07 00:53:08 +0000 UTC" firstStartedPulling="2025-07-07 00:53:53.260529625 +0000 UTC m=+67.071208356" lastFinishedPulling="2025-07-07 00:54:14.871264766 +0000 UTC m=+88.681943497" observedRunningTime="2025-07-07 00:54:15.589808194 +0000 UTC m=+89.400486935" watchObservedRunningTime="2025-07-07 00:54:15.590830955 +0000 UTC m=+89.401509696" Jul 7 00:54:15.856781 kubelet[2793]: I0707 00:54:15.856627 2793 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 00:54:15.856781 kubelet[2793]: I0707 00:54:15.856720 2793 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 00:54:25.874876 systemd[1]: run-containerd-runc-k8s.io-fe49e3abea5c9a8ecd3dfea5ea09d90368ddf75700341bad407e06fc5a7a0714-runc.XRpzIN.mount: Deactivated successfully. Jul 7 00:54:47.900576 containerd[1579]: time="2025-07-07T00:54:47.898427200Z" level=info msg="StopPodSandbox for \"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c\"" Jul 7 00:54:48.165742 containerd[1579]: 2025-07-07 00:54:48.027 [WARNING][5917] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--ncwdh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"36a72e2c-f519-4613-b65a-5c98b45d54b9", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-8dfaddf5bb.novalocal", ContainerID:"d85f4ec72ba26ec3674dc719efa426bbd4d1b1cd17a9efd137963e526559413b", Pod:"coredns-7c65d6cfc9-ncwdh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib8716f21191", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:54:48.165742 containerd[1579]: 2025-07-07 00:54:48.032 [INFO][5917] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" Jul 7 00:54:48.165742 containerd[1579]: 2025-07-07 00:54:48.032 [INFO][5917] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" iface="eth0" netns="" Jul 7 00:54:48.165742 containerd[1579]: 2025-07-07 00:54:48.032 [INFO][5917] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" Jul 7 00:54:48.165742 containerd[1579]: 2025-07-07 00:54:48.032 [INFO][5917] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" Jul 7 00:54:48.165742 containerd[1579]: 2025-07-07 00:54:48.131 [INFO][5925] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" HandleID="k8s-pod-network.6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--ncwdh-eth0" Jul 7 00:54:48.165742 containerd[1579]: 2025-07-07 00:54:48.131 [INFO][5925] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:54:48.165742 containerd[1579]: 2025-07-07 00:54:48.131 [INFO][5925] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:54:48.165742 containerd[1579]: 2025-07-07 00:54:48.147 [WARNING][5925] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" HandleID="k8s-pod-network.6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--ncwdh-eth0" Jul 7 00:54:48.165742 containerd[1579]: 2025-07-07 00:54:48.149 [INFO][5925] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" HandleID="k8s-pod-network.6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--ncwdh-eth0" Jul 7 00:54:48.165742 containerd[1579]: 2025-07-07 00:54:48.155 [INFO][5925] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:54:48.165742 containerd[1579]: 2025-07-07 00:54:48.161 [INFO][5917] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" Jul 7 00:54:48.166500 containerd[1579]: time="2025-07-07T00:54:48.166402050Z" level=info msg="TearDown network for sandbox \"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c\" successfully" Jul 7 00:54:48.166500 containerd[1579]: time="2025-07-07T00:54:48.166461762Z" level=info msg="StopPodSandbox for \"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c\" returns successfully" Jul 7 00:54:48.169986 containerd[1579]: time="2025-07-07T00:54:48.167257596Z" level=info msg="RemovePodSandbox for \"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c\"" Jul 7 00:54:48.169986 containerd[1579]: time="2025-07-07T00:54:48.167298794Z" level=info msg="Forcibly stopping sandbox \"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c\"" Jul 7 00:54:48.340513 containerd[1579]: 2025-07-07 00:54:48.260 [WARNING][5939] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--ncwdh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"36a72e2c-f519-4613-b65a-5c98b45d54b9", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-8dfaddf5bb.novalocal", ContainerID:"d85f4ec72ba26ec3674dc719efa426bbd4d1b1cd17a9efd137963e526559413b", Pod:"coredns-7c65d6cfc9-ncwdh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib8716f21191", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:54:48.340513 containerd[1579]: 2025-07-07 00:54:48.261 [INFO][5939] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" Jul 7 00:54:48.340513 containerd[1579]: 2025-07-07 00:54:48.261 [INFO][5939] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" iface="eth0" netns="" Jul 7 00:54:48.340513 containerd[1579]: 2025-07-07 00:54:48.261 [INFO][5939] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" Jul 7 00:54:48.340513 containerd[1579]: 2025-07-07 00:54:48.261 [INFO][5939] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" Jul 7 00:54:48.340513 containerd[1579]: 2025-07-07 00:54:48.318 [INFO][5946] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" HandleID="k8s-pod-network.6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--ncwdh-eth0" Jul 7 00:54:48.340513 containerd[1579]: 2025-07-07 00:54:48.318 [INFO][5946] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:54:48.340513 containerd[1579]: 2025-07-07 00:54:48.318 [INFO][5946] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:54:48.340513 containerd[1579]: 2025-07-07 00:54:48.332 [WARNING][5946] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" HandleID="k8s-pod-network.6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--ncwdh-eth0" Jul 7 00:54:48.340513 containerd[1579]: 2025-07-07 00:54:48.332 [INFO][5946] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" HandleID="k8s-pod-network.6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--ncwdh-eth0" Jul 7 00:54:48.340513 containerd[1579]: 2025-07-07 00:54:48.334 [INFO][5946] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:54:48.340513 containerd[1579]: 2025-07-07 00:54:48.336 [INFO][5939] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c" Jul 7 00:54:48.341520 containerd[1579]: time="2025-07-07T00:54:48.340766689Z" level=info msg="TearDown network for sandbox \"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c\" successfully" Jul 7 00:54:48.352006 containerd[1579]: time="2025-07-07T00:54:48.351948329Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:54:48.352174 containerd[1579]: time="2025-07-07T00:54:48.352109561Z" level=info msg="RemovePodSandbox \"6d7629385156351cf2979b63928ad7a9731ba515dc4943ab76c997ac9251fb6c\" returns successfully" Jul 7 00:54:48.352841 containerd[1579]: time="2025-07-07T00:54:48.352812491Z" level=info msg="StopPodSandbox for \"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0\"" Jul 7 00:54:48.496010 containerd[1579]: 2025-07-07 00:54:48.431 [WARNING][5960] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--92wpl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a5b425c3-bad4-4558-89be-6136a807f762", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-8dfaddf5bb.novalocal", ContainerID:"4a544d8e7d6c4924ce94c34bb2bd871ae1e32303db0fcb4930a3e9d5f1ae5f5d", Pod:"coredns-7c65d6cfc9-92wpl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5e1385c034e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:54:48.496010 containerd[1579]: 2025-07-07 00:54:48.431 [INFO][5960] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" Jul 7 00:54:48.496010 containerd[1579]: 2025-07-07 00:54:48.431 [INFO][5960] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" iface="eth0" netns="" Jul 7 00:54:48.496010 containerd[1579]: 2025-07-07 00:54:48.431 [INFO][5960] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" Jul 7 00:54:48.496010 containerd[1579]: 2025-07-07 00:54:48.432 [INFO][5960] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" Jul 7 00:54:48.496010 containerd[1579]: 2025-07-07 00:54:48.477 [INFO][5967] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" HandleID="k8s-pod-network.203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--92wpl-eth0" Jul 7 00:54:48.496010 containerd[1579]: 2025-07-07 00:54:48.477 [INFO][5967] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:54:48.496010 containerd[1579]: 2025-07-07 00:54:48.477 [INFO][5967] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:54:48.496010 containerd[1579]: 2025-07-07 00:54:48.487 [WARNING][5967] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" HandleID="k8s-pod-network.203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--92wpl-eth0" Jul 7 00:54:48.496010 containerd[1579]: 2025-07-07 00:54:48.489 [INFO][5967] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" HandleID="k8s-pod-network.203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--92wpl-eth0" Jul 7 00:54:48.496010 containerd[1579]: 2025-07-07 00:54:48.491 [INFO][5967] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:54:48.496010 containerd[1579]: 2025-07-07 00:54:48.494 [INFO][5960] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" Jul 7 00:54:48.498840 containerd[1579]: time="2025-07-07T00:54:48.498451337Z" level=info msg="TearDown network for sandbox \"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0\" successfully" Jul 7 00:54:48.498840 containerd[1579]: time="2025-07-07T00:54:48.498498235Z" level=info msg="StopPodSandbox for \"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0\" returns successfully" Jul 7 00:54:48.499445 containerd[1579]: time="2025-07-07T00:54:48.499415858Z" level=info msg="RemovePodSandbox for \"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0\"" Jul 7 00:54:48.499819 containerd[1579]: time="2025-07-07T00:54:48.499618027Z" level=info msg="Forcibly stopping sandbox \"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0\"" Jul 7 00:54:48.608890 containerd[1579]: 2025-07-07 00:54:48.560 [WARNING][5981] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--92wpl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a5b425c3-bad4-4558-89be-6136a807f762", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-8dfaddf5bb.novalocal", ContainerID:"4a544d8e7d6c4924ce94c34bb2bd871ae1e32303db0fcb4930a3e9d5f1ae5f5d", Pod:"coredns-7c65d6cfc9-92wpl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5e1385c034e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:54:48.608890 containerd[1579]: 2025-07-07 00:54:48.561 [INFO][5981] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" Jul 7 00:54:48.608890 containerd[1579]: 2025-07-07 00:54:48.561 [INFO][5981] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" iface="eth0" netns="" Jul 7 00:54:48.608890 containerd[1579]: 2025-07-07 00:54:48.561 [INFO][5981] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" Jul 7 00:54:48.608890 containerd[1579]: 2025-07-07 00:54:48.561 [INFO][5981] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" Jul 7 00:54:48.608890 containerd[1579]: 2025-07-07 00:54:48.592 [INFO][5989] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" HandleID="k8s-pod-network.203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--92wpl-eth0" Jul 7 00:54:48.608890 containerd[1579]: 2025-07-07 00:54:48.592 [INFO][5989] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:54:48.608890 containerd[1579]: 2025-07-07 00:54:48.592 [INFO][5989] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:54:48.608890 containerd[1579]: 2025-07-07 00:54:48.600 [WARNING][5989] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" HandleID="k8s-pod-network.203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--92wpl-eth0" Jul 7 00:54:48.608890 containerd[1579]: 2025-07-07 00:54:48.600 [INFO][5989] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" HandleID="k8s-pod-network.203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-coredns--7c65d6cfc9--92wpl-eth0" Jul 7 00:54:48.608890 containerd[1579]: 2025-07-07 00:54:48.602 [INFO][5989] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:54:48.608890 containerd[1579]: 2025-07-07 00:54:48.607 [INFO][5981] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0" Jul 7 00:54:48.611332 containerd[1579]: time="2025-07-07T00:54:48.608943838Z" level=info msg="TearDown network for sandbox \"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0\" successfully" Jul 7 00:54:48.615155 containerd[1579]: time="2025-07-07T00:54:48.615057454Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:54:48.615396 containerd[1579]: time="2025-07-07T00:54:48.615324506Z" level=info msg="RemovePodSandbox \"203b493bf1ddda23e01f613e8f5cb81a7179a8e4475c5ddedbf03c9ae98308f0\" returns successfully" Jul 7 00:54:48.616247 containerd[1579]: time="2025-07-07T00:54:48.616209296Z" level=info msg="StopPodSandbox for \"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210\"" Jul 7 00:54:48.737007 containerd[1579]: 2025-07-07 00:54:48.673 [WARNING][6003] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--kube--controllers--745c5b8f57--jgbmg-eth0", GenerateName:"calico-kube-controllers-745c5b8f57-", Namespace:"calico-system", SelfLink:"", UID:"9df690de-c33d-44aa-bf8e-790d93d78321", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 53, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"745c5b8f57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-8dfaddf5bb.novalocal", ContainerID:"30835ab0e4213ff64c1334809864f6af44e8af18d51ae8e9cfe2e77055fa3564", Pod:"calico-kube-controllers-745c5b8f57-jgbmg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.116.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5c59b96cf56", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:54:48.737007 containerd[1579]: 2025-07-07 00:54:48.674 [INFO][6003] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" Jul 7 00:54:48.737007 containerd[1579]: 2025-07-07 00:54:48.674 [INFO][6003] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" iface="eth0" netns="" Jul 7 00:54:48.737007 containerd[1579]: 2025-07-07 00:54:48.674 [INFO][6003] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" Jul 7 00:54:48.737007 containerd[1579]: 2025-07-07 00:54:48.675 [INFO][6003] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" Jul 7 00:54:48.737007 containerd[1579]: 2025-07-07 00:54:48.719 [INFO][6010] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" HandleID="k8s-pod-network.be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--kube--controllers--745c5b8f57--jgbmg-eth0" Jul 7 00:54:48.737007 containerd[1579]: 2025-07-07 00:54:48.719 [INFO][6010] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:54:48.737007 containerd[1579]: 2025-07-07 00:54:48.719 [INFO][6010] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:54:48.737007 containerd[1579]: 2025-07-07 00:54:48.730 [WARNING][6010] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" HandleID="k8s-pod-network.be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--kube--controllers--745c5b8f57--jgbmg-eth0" Jul 7 00:54:48.737007 containerd[1579]: 2025-07-07 00:54:48.730 [INFO][6010] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" HandleID="k8s-pod-network.be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--kube--controllers--745c5b8f57--jgbmg-eth0" Jul 7 00:54:48.737007 containerd[1579]: 2025-07-07 00:54:48.732 [INFO][6010] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:54:48.737007 containerd[1579]: 2025-07-07 00:54:48.733 [INFO][6003] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" Jul 7 00:54:48.739976 containerd[1579]: time="2025-07-07T00:54:48.737053710Z" level=info msg="TearDown network for sandbox \"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210\" successfully" Jul 7 00:54:48.739976 containerd[1579]: time="2025-07-07T00:54:48.737086331Z" level=info msg="StopPodSandbox for \"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210\" returns successfully" Jul 7 00:54:48.740724 containerd[1579]: time="2025-07-07T00:54:48.740177353Z" level=info msg="RemovePodSandbox for \"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210\"" Jul 7 00:54:48.740724 containerd[1579]: time="2025-07-07T00:54:48.740224111Z" level=info msg="Forcibly stopping sandbox \"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210\"" Jul 7 00:54:48.911648 containerd[1579]: 2025-07-07 00:54:48.840 [WARNING][6024] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--kube--controllers--745c5b8f57--jgbmg-eth0", GenerateName:"calico-kube-controllers-745c5b8f57-", Namespace:"calico-system", SelfLink:"", UID:"9df690de-c33d-44aa-bf8e-790d93d78321", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 53, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"745c5b8f57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-8dfaddf5bb.novalocal", ContainerID:"30835ab0e4213ff64c1334809864f6af44e8af18d51ae8e9cfe2e77055fa3564", Pod:"calico-kube-controllers-745c5b8f57-jgbmg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.116.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5c59b96cf56", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:54:48.911648 containerd[1579]: 2025-07-07 00:54:48.841 [INFO][6024] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" Jul 7 00:54:48.911648 containerd[1579]: 2025-07-07 00:54:48.841 [INFO][6024] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" iface="eth0" netns="" Jul 7 00:54:48.911648 containerd[1579]: 2025-07-07 00:54:48.841 [INFO][6024] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" Jul 7 00:54:48.911648 containerd[1579]: 2025-07-07 00:54:48.841 [INFO][6024] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" Jul 7 00:54:48.911648 containerd[1579]: 2025-07-07 00:54:48.889 [INFO][6031] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" HandleID="k8s-pod-network.be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--kube--controllers--745c5b8f57--jgbmg-eth0" Jul 7 00:54:48.911648 containerd[1579]: 2025-07-07 00:54:48.891 [INFO][6031] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:54:48.911648 containerd[1579]: 2025-07-07 00:54:48.892 [INFO][6031] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:54:48.911648 containerd[1579]: 2025-07-07 00:54:48.902 [WARNING][6031] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" HandleID="k8s-pod-network.be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--kube--controllers--745c5b8f57--jgbmg-eth0" Jul 7 00:54:48.911648 containerd[1579]: 2025-07-07 00:54:48.903 [INFO][6031] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" HandleID="k8s-pod-network.be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-calico--kube--controllers--745c5b8f57--jgbmg-eth0" Jul 7 00:54:48.911648 containerd[1579]: 2025-07-07 00:54:48.904 [INFO][6031] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:54:48.911648 containerd[1579]: 2025-07-07 00:54:48.909 [INFO][6024] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210" Jul 7 00:54:48.916643 containerd[1579]: time="2025-07-07T00:54:48.914203857Z" level=info msg="TearDown network for sandbox \"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210\" successfully" Jul 7 00:54:48.921013 containerd[1579]: time="2025-07-07T00:54:48.920976671Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:54:48.921244 containerd[1579]: time="2025-07-07T00:54:48.921221511Z" level=info msg="RemovePodSandbox \"be242c6bbccd72aaf1485e1939c8cc772f721ca9f8b2a367603ddf9da00b6210\" returns successfully" Jul 7 00:54:48.922215 containerd[1579]: time="2025-07-07T00:54:48.921876550Z" level=info msg="StopPodSandbox for \"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72\"" Jul 7 00:54:49.029266 containerd[1579]: 2025-07-07 00:54:48.967 [WARNING][6048] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-csi--node--driver--zql2q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c53a8470-3943-407f-8401-5976894cd214", ResourceVersion:"1136", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-8dfaddf5bb.novalocal", ContainerID:"8cdd9eda42f4de598affdd62ad8d7f550100d469b954474d8f6916e1db33391a", Pod:"csi-node-driver-zql2q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.116.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calieba9439eff3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:54:49.029266 containerd[1579]: 2025-07-07 00:54:48.967 [INFO][6048] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" Jul 7 00:54:49.029266 containerd[1579]: 2025-07-07 00:54:48.967 [INFO][6048] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" iface="eth0" netns="" Jul 7 00:54:49.029266 containerd[1579]: 2025-07-07 00:54:48.967 [INFO][6048] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" Jul 7 00:54:49.029266 containerd[1579]: 2025-07-07 00:54:48.967 [INFO][6048] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" Jul 7 00:54:49.029266 containerd[1579]: 2025-07-07 00:54:49.005 [INFO][6055] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" HandleID="k8s-pod-network.3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-csi--node--driver--zql2q-eth0" Jul 7 00:54:49.029266 containerd[1579]: 2025-07-07 00:54:49.005 [INFO][6055] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:54:49.029266 containerd[1579]: 2025-07-07 00:54:49.005 [INFO][6055] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:54:49.029266 containerd[1579]: 2025-07-07 00:54:49.020 [WARNING][6055] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" HandleID="k8s-pod-network.3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-csi--node--driver--zql2q-eth0" Jul 7 00:54:49.029266 containerd[1579]: 2025-07-07 00:54:49.020 [INFO][6055] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" HandleID="k8s-pod-network.3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-csi--node--driver--zql2q-eth0" Jul 7 00:54:49.029266 containerd[1579]: 2025-07-07 00:54:49.023 [INFO][6055] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:54:49.029266 containerd[1579]: 2025-07-07 00:54:49.025 [INFO][6048] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" Jul 7 00:54:49.030375 containerd[1579]: time="2025-07-07T00:54:49.029966771Z" level=info msg="TearDown network for sandbox \"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72\" successfully" Jul 7 00:54:49.030375 containerd[1579]: time="2025-07-07T00:54:49.029997218Z" level=info msg="StopPodSandbox for \"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72\" returns successfully" Jul 7 00:54:49.031106 containerd[1579]: time="2025-07-07T00:54:49.030672907Z" level=info msg="RemovePodSandbox for \"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72\"" Jul 7 00:54:49.031106 containerd[1579]: time="2025-07-07T00:54:49.030712821Z" level=info msg="Forcibly stopping sandbox \"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72\"" Jul 7 00:54:49.145245 containerd[1579]: 2025-07-07 00:54:49.092 [WARNING][6069] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-csi--node--driver--zql2q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c53a8470-3943-407f-8401-5976894cd214", ResourceVersion:"1136", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 53, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-8dfaddf5bb.novalocal", ContainerID:"8cdd9eda42f4de598affdd62ad8d7f550100d469b954474d8f6916e1db33391a", Pod:"csi-node-driver-zql2q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.116.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calieba9439eff3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:54:49.145245 containerd[1579]: 2025-07-07 00:54:49.092 [INFO][6069] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" Jul 7 00:54:49.145245 containerd[1579]: 2025-07-07 00:54:49.092 [INFO][6069] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" iface="eth0" netns="" Jul 7 00:54:49.145245 containerd[1579]: 2025-07-07 00:54:49.092 [INFO][6069] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" Jul 7 00:54:49.145245 containerd[1579]: 2025-07-07 00:54:49.092 [INFO][6069] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" Jul 7 00:54:49.145245 containerd[1579]: 2025-07-07 00:54:49.126 [INFO][6076] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" HandleID="k8s-pod-network.3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-csi--node--driver--zql2q-eth0" Jul 7 00:54:49.145245 containerd[1579]: 2025-07-07 00:54:49.126 [INFO][6076] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:54:49.145245 containerd[1579]: 2025-07-07 00:54:49.127 [INFO][6076] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:54:49.145245 containerd[1579]: 2025-07-07 00:54:49.136 [WARNING][6076] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" HandleID="k8s-pod-network.3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-csi--node--driver--zql2q-eth0" Jul 7 00:54:49.145245 containerd[1579]: 2025-07-07 00:54:49.137 [INFO][6076] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" HandleID="k8s-pod-network.3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" Workload="ci--4081--3--4--7--8dfaddf5bb.novalocal-k8s-csi--node--driver--zql2q-eth0" Jul 7 00:54:49.145245 containerd[1579]: 2025-07-07 00:54:49.138 [INFO][6076] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:54:49.145245 containerd[1579]: 2025-07-07 00:54:49.141 [INFO][6069] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72" Jul 7 00:54:49.145245 containerd[1579]: time="2025-07-07T00:54:49.144577351Z" level=info msg="TearDown network for sandbox \"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72\" successfully" Jul 7 00:54:49.159454 containerd[1579]: time="2025-07-07T00:54:49.159389325Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:54:49.159673 containerd[1579]: time="2025-07-07T00:54:49.159498369Z" level=info msg="RemovePodSandbox \"3bbf82d42ff695856e35223e16e6b65f5c0ad499833b9670d1b480d4156f3f72\" returns successfully" Jul 7 00:54:51.379181 update_engine[1562]: I20250707 00:54:51.379040 1562 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 7 00:54:51.379181 update_engine[1562]: I20250707 00:54:51.379184 1562 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 7 00:54:51.380165 update_engine[1562]: I20250707 00:54:51.379828 1562 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 7 00:54:51.384523 update_engine[1562]: I20250707 00:54:51.383648 1562 omaha_request_params.cc:62] Current group set to lts Jul 7 00:54:51.390539 update_engine[1562]: I20250707 00:54:51.389271 1562 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 7 00:54:51.390539 update_engine[1562]: I20250707 00:54:51.389302 1562 update_attempter.cc:643] Scheduling an action processor start. 
Jul 7 00:54:51.390539 update_engine[1562]: I20250707 00:54:51.389355 1562 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 7 00:54:51.390539 update_engine[1562]: I20250707 00:54:51.389453 1562 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 7 00:54:51.390539 update_engine[1562]: I20250707 00:54:51.389561 1562 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 7 00:54:51.390539 update_engine[1562]: I20250707 00:54:51.389572 1562 omaha_request_action.cc:272] Request: Jul 7 00:54:51.390539 update_engine[1562]: (request XML body stripped from this capture) Jul 7 00:54:51.390539 update_engine[1562]: I20250707 00:54:51.389585 1562 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 00:54:51.395371 locksmithd[1594]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 7 00:54:51.404791 update_engine[1562]: I20250707 00:54:51.404732 1562 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 00:54:51.405215 update_engine[1562]: I20250707 00:54:51.405169 1562 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 7 00:54:51.418408 update_engine[1562]: E20250707 00:54:51.418313 1562 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 00:54:51.418594 update_engine[1562]: I20250707 00:54:51.418466 1562 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 7 00:55:01.288085 update_engine[1562]: I20250707 00:55:01.287927 1562 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 00:55:01.289259 update_engine[1562]: I20250707 00:55:01.288631 1562 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 00:55:01.289487 update_engine[1562]: I20250707 00:55:01.289335 1562 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 7 00:55:01.300222 update_engine[1562]: E20250707 00:55:01.300125 1562 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 00:55:01.300482 update_engine[1562]: I20250707 00:55:01.300284 1562 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 7 00:55:11.288907 update_engine[1562]: I20250707 00:55:11.288686 1562 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 00:55:11.289811 update_engine[1562]: I20250707 00:55:11.289314 1562 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 00:55:11.290012 update_engine[1562]: I20250707 00:55:11.289912 1562 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 7 00:55:11.300753 update_engine[1562]: E20250707 00:55:11.300649 1562 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 00:55:11.300871 update_engine[1562]: I20250707 00:55:11.300774 1562 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 7 00:55:17.406850 systemd[1]: Started sshd@9-172.24.4.161:22-172.24.4.1:54260.service - OpenSSH per-connection server daemon (172.24.4.1:54260). 
Jul 7 00:55:18.830523 sshd[6187]: Accepted publickey for core from 172.24.4.1 port 54260 ssh2: RSA SHA256:T8A8R3rUntE7f376/e5VUyp2qo4ckx8uZO3F4EBHBjc Jul 7 00:55:18.839749 sshd[6187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:55:18.863006 systemd-logind[1555]: New session 12 of user core. Jul 7 00:55:18.870290 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 00:55:19.776020 sshd[6187]: pam_unix(sshd:session): session closed for user core Jul 7 00:55:19.786828 systemd[1]: sshd@9-172.24.4.161:22-172.24.4.1:54260.service: Deactivated successfully. Jul 7 00:55:19.797002 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 00:55:19.802126 systemd-logind[1555]: Session 12 logged out. Waiting for processes to exit. Jul 7 00:55:19.806526 systemd-logind[1555]: Removed session 12. Jul 7 00:55:21.280583 update_engine[1562]: I20250707 00:55:21.279727 1562 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 00:55:21.280583 update_engine[1562]: I20250707 00:55:21.280200 1562 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 00:55:21.282200 update_engine[1562]: I20250707 00:55:21.282117 1562 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 7 00:55:21.294462 update_engine[1562]: E20250707 00:55:21.292449 1562 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 00:55:21.294462 update_engine[1562]: I20250707 00:55:21.292539 1562 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 7 00:55:21.294462 update_engine[1562]: I20250707 00:55:21.292560 1562 omaha_request_action.cc:617] Omaha request response: Jul 7 00:55:21.294462 update_engine[1562]: E20250707 00:55:21.292662 1562 omaha_request_action.cc:636] Omaha request network transfer failed. Jul 7 00:55:21.294462 update_engine[1562]: I20250707 00:55:21.292708 1562 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 7 00:55:21.294462 update_engine[1562]: I20250707 00:55:21.292728 1562 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 7 00:55:21.294462 update_engine[1562]: I20250707 00:55:21.292740 1562 update_attempter.cc:306] Processing Done. Jul 7 00:55:21.294462 update_engine[1562]: E20250707 00:55:21.292766 1562 update_attempter.cc:619] Update failed. Jul 7 00:55:21.294462 update_engine[1562]: I20250707 00:55:21.292779 1562 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 7 00:55:21.294462 update_engine[1562]: I20250707 00:55:21.292784 1562 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 7 00:55:21.294462 update_engine[1562]: I20250707 00:55:21.292790 1562 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jul 7 00:55:21.294462 update_engine[1562]: I20250707 00:55:21.292884 1562 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 7 00:55:21.294462 update_engine[1562]: I20250707 00:55:21.292914 1562 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 7 00:55:21.294462 update_engine[1562]: I20250707 00:55:21.292920 1562 omaha_request_action.cc:272] Request: Jul 7 00:55:21.294462 update_engine[1562]: (request XML body stripped from this capture) Jul 7 00:55:21.295102 update_engine[1562]: I20250707 00:55:21.292927 1562 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 00:55:21.295102 update_engine[1562]: I20250707 00:55:21.293087 1562 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 00:55:21.295249 locksmithd[1594]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 7 00:55:21.297955 update_engine[1562]: I20250707 00:55:21.297489 1562 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 7 00:55:21.308185 update_engine[1562]: E20250707 00:55:21.307909 1562 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 00:55:21.308185 update_engine[1562]: I20250707 00:55:21.308003 1562 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 7 00:55:21.308185 update_engine[1562]: I20250707 00:55:21.308013 1562 omaha_request_action.cc:617] Omaha request response: Jul 7 00:55:21.308185 update_engine[1562]: I20250707 00:55:21.308022 1562 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 7 00:55:21.308185 update_engine[1562]: I20250707 00:55:21.308029 1562 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 7 00:55:21.308185 update_engine[1562]: I20250707 00:55:21.308034 1562 update_attempter.cc:306] Processing Done. Jul 7 00:55:21.308185 update_engine[1562]: I20250707 00:55:21.308041 1562 update_attempter.cc:310] Error event sent. Jul 7 00:55:21.308185 update_engine[1562]: I20250707 00:55:21.308063 1562 update_check_scheduler.cc:74] Next update check in 49m3s Jul 7 00:55:21.308699 locksmithd[1594]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 7 00:55:24.789189 systemd[1]: Started sshd@10-172.24.4.161:22-172.24.4.1:55892.service - OpenSSH per-connection server daemon (172.24.4.1:55892). Jul 7 00:55:26.124684 sshd[6217]: Accepted publickey for core from 172.24.4.1 port 55892 ssh2: RSA SHA256:T8A8R3rUntE7f376/e5VUyp2qo4ckx8uZO3F4EBHBjc Jul 7 00:55:26.125199 sshd[6217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:55:26.139823 systemd-logind[1555]: New session 13 of user core. Jul 7 00:55:26.143829 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 00:55:26.996747 sshd[6217]: pam_unix(sshd:session): session closed for user core Jul 7 00:55:27.003170 systemd-logind[1555]: Session 13 logged out. Waiting for processes to exit. Jul 7 00:55:27.003674 systemd[1]: sshd@10-172.24.4.161:22-172.24.4.1:55892.service: Deactivated successfully. Jul 7 00:55:27.009608 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 00:55:27.010807 systemd-logind[1555]: Removed session 13. 
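The update_engine exchange above is mechanical once decoded: the Omaha endpoint on this image is literally the string "disabled", so every transfer fails name resolution; libcurl_http_fetcher retries three times (the attempts land roughly ten seconds apart in this log) before the attempter converts the transport failure to error 2000, reports payload error 37 (kActionCodeOmahaErrorInHTTPResponse), sends one error event, and schedules the next check 49m3s out. A bounded-retry sketch of that fetch loop; the retry count and spacing are read off this log, and the names are mine, not update_engine's:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// fetchWithRetry mimics the pattern in the log: try the Omaha endpoint,
// and on transport failure retry a fixed number of times before giving up.
func fetchWithRetry(url string, retries int, interval time.Duration) error {
	var err error
	for attempt := 0; attempt <= retries; attempt++ {
		var resp *http.Response
		resp, err = http.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil
		}
		if attempt < retries {
			fmt.Printf("No HTTP response, retry %d\n", attempt+1)
			time.Sleep(interval)
		}
	}
	return fmt.Errorf("transfer failed: %w", err) // -> "Omaha request network transfer failed."
}

func main() {
	// "disabled" is not a resolvable host, exactly as in the log.
	if err := fetchWithRetry("http://disabled/", 3, 10*time.Second); err != nil {
		fmt.Println(err) // error event sent; next check scheduled later
	}
}
```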
Jul 7 00:55:27.326556 systemd[1]: run-containerd-runc-k8s.io-9f8fdc04b2289e7a4c43773b71be993b00bc68dc0dd9514a35844d50a5928060-runc.SH3uGd.mount: Deactivated successfully.
Jul 7 00:55:32.020188 systemd[1]: Started sshd@11-172.24.4.161:22-172.24.4.1:55902.service - OpenSSH per-connection server daemon (172.24.4.1:55902).
Jul 7 00:55:33.325983 sshd[6289]: Accepted publickey for core from 172.24.4.1 port 55902 ssh2: RSA SHA256:T8A8R3rUntE7f376/e5VUyp2qo4ckx8uZO3F4EBHBjc
Jul 7 00:55:33.334613 sshd[6289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:55:33.347244 systemd-logind[1555]: New session 14 of user core.
Jul 7 00:55:33.355923 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 7 00:55:34.136125 sshd[6289]: pam_unix(sshd:session): session closed for user core
Jul 7 00:55:34.149507 systemd[1]: Started sshd@12-172.24.4.161:22-172.24.4.1:45886.service - OpenSSH per-connection server daemon (172.24.4.1:45886).
Jul 7 00:55:34.150752 systemd[1]: sshd@11-172.24.4.161:22-172.24.4.1:55902.service: Deactivated successfully.
Jul 7 00:55:34.161339 systemd[1]: session-14.scope: Deactivated successfully.
Jul 7 00:55:34.165968 systemd-logind[1555]: Session 14 logged out. Waiting for processes to exit.
Jul 7 00:55:34.174548 systemd-logind[1555]: Removed session 14.
Jul 7 00:55:35.238874 sshd[6302]: Accepted publickey for core from 172.24.4.1 port 45886 ssh2: RSA SHA256:T8A8R3rUntE7f376/e5VUyp2qo4ckx8uZO3F4EBHBjc
Jul 7 00:55:35.244601 sshd[6302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:55:35.257272 systemd-logind[1555]: New session 15 of user core.
Jul 7 00:55:35.264981 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 7 00:55:36.083005 sshd[6302]: pam_unix(sshd:session): session closed for user core
Jul 7 00:55:36.094212 systemd[1]: Started sshd@13-172.24.4.161:22-172.24.4.1:45898.service - OpenSSH per-connection server daemon (172.24.4.1:45898).
Jul 7 00:55:36.097126 systemd[1]: sshd@12-172.24.4.161:22-172.24.4.1:45886.service: Deactivated successfully.
Jul 7 00:55:36.107695 systemd-logind[1555]: Session 15 logged out. Waiting for processes to exit.
Jul 7 00:55:36.110567 systemd[1]: session-15.scope: Deactivated successfully.
Jul 7 00:55:36.112637 systemd-logind[1555]: Removed session 15.
Jul 7 00:55:37.451706 sshd[6313]: Accepted publickey for core from 172.24.4.1 port 45898 ssh2: RSA SHA256:T8A8R3rUntE7f376/e5VUyp2qo4ckx8uZO3F4EBHBjc
Jul 7 00:55:37.477589 sshd[6313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:55:37.494036 systemd-logind[1555]: New session 16 of user core.
Jul 7 00:55:37.506971 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 7 00:55:38.501720 sshd[6313]: pam_unix(sshd:session): session closed for user core
Jul 7 00:55:38.513250 systemd[1]: sshd@13-172.24.4.161:22-172.24.4.1:45898.service: Deactivated successfully.
Jul 7 00:55:38.529612 systemd[1]: session-16.scope: Deactivated successfully.
Jul 7 00:55:38.530236 systemd-logind[1555]: Session 16 logged out. Waiting for processes to exit.
Jul 7 00:55:38.538446 systemd-logind[1555]: Removed session 16.
Jul 7 00:55:43.522090 systemd[1]: Started sshd@14-172.24.4.161:22-172.24.4.1:41190.service - OpenSSH per-connection server daemon (172.24.4.1:41190).
Jul 7 00:55:44.653469 sshd[6330]: Accepted publickey for core from 172.24.4.1 port 41190 ssh2: RSA SHA256:T8A8R3rUntE7f376/e5VUyp2qo4ckx8uZO3F4EBHBjc
Jul 7 00:55:44.657630 sshd[6330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:55:44.672605 systemd-logind[1555]: New session 17 of user core.
Jul 7 00:55:44.683448 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 7 00:55:45.413320 sshd[6330]: pam_unix(sshd:session): session closed for user core
Jul 7 00:55:45.422680 systemd[1]: sshd@14-172.24.4.161:22-172.24.4.1:41190.service: Deactivated successfully.
Jul 7 00:55:45.437589 systemd[1]: session-17.scope: Deactivated successfully.
Jul 7 00:55:45.437736 systemd-logind[1555]: Session 17 logged out. Waiting for processes to exit.
Jul 7 00:55:45.442592 systemd-logind[1555]: Removed session 17.
Jul 7 00:55:50.438557 systemd[1]: Started sshd@15-172.24.4.161:22-172.24.4.1:41194.service - OpenSSH per-connection server daemon (172.24.4.1:41194).
Jul 7 00:55:51.643190 sshd[6369]: Accepted publickey for core from 172.24.4.1 port 41194 ssh2: RSA SHA256:T8A8R3rUntE7f376/e5VUyp2qo4ckx8uZO3F4EBHBjc
Jul 7 00:55:51.647766 sshd[6369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:55:51.662985 systemd-logind[1555]: New session 18 of user core.
Jul 7 00:55:51.670086 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 7 00:55:52.530756 sshd[6369]: pam_unix(sshd:session): session closed for user core
Jul 7 00:55:52.547640 systemd[1]: sshd@15-172.24.4.161:22-172.24.4.1:41194.service: Deactivated successfully.
Jul 7 00:55:52.561135 systemd[1]: session-18.scope: Deactivated successfully.
Jul 7 00:55:52.564115 systemd-logind[1555]: Session 18 logged out. Waiting for processes to exit.
Jul 7 00:55:52.568633 systemd-logind[1555]: Removed session 18.
Jul 7 00:55:57.550719 systemd[1]: Started sshd@16-172.24.4.161:22-172.24.4.1:51260.service - OpenSSH per-connection server daemon (172.24.4.1:51260).
Jul 7 00:55:58.722015 sshd[6447]: Accepted publickey for core from 172.24.4.1 port 51260 ssh2: RSA SHA256:T8A8R3rUntE7f376/e5VUyp2qo4ckx8uZO3F4EBHBjc
Jul 7 00:55:58.730045 sshd[6447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:55:58.755848 systemd-logind[1555]: New session 19 of user core.
Jul 7 00:55:58.765446 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 7 00:55:59.463404 sshd[6447]: pam_unix(sshd:session): session closed for user core
Jul 7 00:55:59.470853 systemd[1]: sshd@16-172.24.4.161:22-172.24.4.1:51260.service: Deactivated successfully.
Jul 7 00:55:59.478473 systemd-logind[1555]: Session 19 logged out. Waiting for processes to exit.
Jul 7 00:55:59.479647 systemd[1]: session-19.scope: Deactivated successfully.
Jul 7 00:55:59.482962 systemd-logind[1555]: Removed session 19.
Jul 7 00:56:04.480069 systemd[1]: Started sshd@17-172.24.4.161:22-172.24.4.1:56172.service - OpenSSH per-connection server daemon (172.24.4.1:56172).
Jul 7 00:56:05.786474 sshd[6461]: Accepted publickey for core from 172.24.4.1 port 56172 ssh2: RSA SHA256:T8A8R3rUntE7f376/e5VUyp2qo4ckx8uZO3F4EBHBjc
Jul 7 00:56:05.790342 sshd[6461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:56:05.807876 systemd-logind[1555]: New session 20 of user core.
Jul 7 00:56:05.815877 systemd[1]: Started session-20.scope - Session 20 of User core.
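Every connection in this stretch follows the same lifecycle: a transient per-connection unit named sshd@N-<local>:22-<peer>:<port>.service, a PAM session open, a session-N.scope from systemd-logind, then teardown in reverse. A hedged way to watch the same churn live on the host (standard systemd tooling; the glob patterns are illustrative):

    # Transient per-connection sshd units and the logind session scopes
    # they spawn; listing both correlates with the entries above.
    systemctl list-units 'sshd@*.service' 'session-*.scope'
    loginctl list-sessions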
Jul 7 00:56:06.440516 sshd[6461]: pam_unix(sshd:session): session closed for user core
Jul 7 00:56:06.442258 systemd[1]: Started sshd@18-172.24.4.161:22-172.24.4.1:56184.service - OpenSSH per-connection server daemon (172.24.4.1:56184).
Jul 7 00:56:06.455148 systemd[1]: sshd@17-172.24.4.161:22-172.24.4.1:56172.service: Deactivated successfully.
Jul 7 00:56:06.463124 systemd[1]: session-20.scope: Deactivated successfully.
Jul 7 00:56:06.468450 systemd-logind[1555]: Session 20 logged out. Waiting for processes to exit.
Jul 7 00:56:06.474416 systemd-logind[1555]: Removed session 20.
Jul 7 00:56:07.801551 sshd[6472]: Accepted publickey for core from 172.24.4.1 port 56184 ssh2: RSA SHA256:T8A8R3rUntE7f376/e5VUyp2qo4ckx8uZO3F4EBHBjc
Jul 7 00:56:07.803687 sshd[6472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:56:07.816656 systemd-logind[1555]: New session 21 of user core.
Jul 7 00:56:07.823196 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 7 00:56:09.167003 sshd[6472]: pam_unix(sshd:session): session closed for user core
Jul 7 00:56:09.178991 systemd[1]: Started sshd@19-172.24.4.161:22-172.24.4.1:56190.service - OpenSSH per-connection server daemon (172.24.4.1:56190).
Jul 7 00:56:09.249696 systemd[1]: sshd@18-172.24.4.161:22-172.24.4.1:56184.service: Deactivated successfully.
Jul 7 00:56:09.261420 systemd[1]: session-21.scope: Deactivated successfully.
Jul 7 00:56:09.261651 systemd-logind[1555]: Session 21 logged out. Waiting for processes to exit.
Jul 7 00:56:09.269243 systemd-logind[1555]: Removed session 21.
Jul 7 00:56:11.091645 sshd[6484]: Accepted publickey for core from 172.24.4.1 port 56190 ssh2: RSA SHA256:T8A8R3rUntE7f376/e5VUyp2qo4ckx8uZO3F4EBHBjc
Jul 7 00:56:11.099790 sshd[6484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:56:11.111237 systemd-logind[1555]: New session 22 of user core.
Jul 7 00:56:11.119030 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 7 00:56:15.006602 sshd[6484]: pam_unix(sshd:session): session closed for user core
Jul 7 00:56:15.020619 systemd[1]: Started sshd@20-172.24.4.161:22-172.24.4.1:39080.service - OpenSSH per-connection server daemon (172.24.4.1:39080).
Jul 7 00:56:15.024119 systemd[1]: sshd@19-172.24.4.161:22-172.24.4.1:56190.service: Deactivated successfully.
Jul 7 00:56:15.053737 systemd[1]: session-22.scope: Deactivated successfully.
Jul 7 00:56:15.061643 systemd-logind[1555]: Session 22 logged out. Waiting for processes to exit.
Jul 7 00:56:15.068474 systemd-logind[1555]: Removed session 22.
Jul 7 00:56:16.448334 sshd[6523]: Accepted publickey for core from 172.24.4.1 port 39080 ssh2: RSA SHA256:T8A8R3rUntE7f376/e5VUyp2qo4ckx8uZO3F4EBHBjc
Jul 7 00:56:16.452177 sshd[6523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:56:16.469482 systemd-logind[1555]: New session 23 of user core.
Jul 7 00:56:16.478108 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 7 00:56:17.683124 sshd[6523]: pam_unix(sshd:session): session closed for user core
Jul 7 00:56:17.703757 systemd[1]: Started sshd@21-172.24.4.161:22-172.24.4.1:39092.service - OpenSSH per-connection server daemon (172.24.4.1:39092).
Jul 7 00:56:17.710198 systemd[1]: sshd@20-172.24.4.161:22-172.24.4.1:39080.service: Deactivated successfully.
Jul 7 00:56:17.723192 systemd[1]: session-23.scope: Deactivated successfully.
Jul 7 00:56:17.728072 systemd-logind[1555]: Session 23 logged out. Waiting for processes to exit.
Jul 7 00:56:17.734059 systemd-logind[1555]: Removed session 23.
Jul 7 00:56:18.868568 sshd[6535]: Accepted publickey for core from 172.24.4.1 port 39092 ssh2: RSA SHA256:T8A8R3rUntE7f376/e5VUyp2qo4ckx8uZO3F4EBHBjc
Jul 7 00:56:18.872330 sshd[6535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:56:18.886075 systemd-logind[1555]: New session 24 of user core.
Jul 7 00:56:18.893981 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 7 00:56:19.594755 sshd[6535]: pam_unix(sshd:session): session closed for user core
Jul 7 00:56:19.607434 systemd[1]: sshd@21-172.24.4.161:22-172.24.4.1:39092.service: Deactivated successfully.
Jul 7 00:56:19.615638 systemd[1]: session-24.scope: Deactivated successfully.
Jul 7 00:56:19.617341 systemd-logind[1555]: Session 24 logged out. Waiting for processes to exit.
Jul 7 00:56:19.620139 systemd-logind[1555]: Removed session 24.
Jul 7 00:56:24.634590 systemd[1]: Started sshd@22-172.24.4.161:22-172.24.4.1:43792.service - OpenSSH per-connection server daemon (172.24.4.1:43792).
Jul 7 00:56:26.484946 systemd[1]: run-containerd-runc-k8s.io-fe49e3abea5c9a8ecd3dfea5ea09d90368ddf75700341bad407e06fc5a7a0714-runc.QLVDSX.mount: Deactivated successfully.
Jul 7 00:56:26.917920 sshd[6554]: Accepted publickey for core from 172.24.4.1 port 43792 ssh2: RSA SHA256:T8A8R3rUntE7f376/e5VUyp2qo4ckx8uZO3F4EBHBjc
Jul 7 00:56:26.922924 sshd[6554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:56:26.965198 systemd-logind[1555]: New session 25 of user core.
Jul 7 00:56:26.969725 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 7 00:56:27.996594 sshd[6554]: pam_unix(sshd:session): session closed for user core
Jul 7 00:56:28.000866 systemd[1]: sshd@22-172.24.4.161:22-172.24.4.1:43792.service: Deactivated successfully.
Jul 7 00:56:28.011713 systemd-logind[1555]: Session 25 logged out. Waiting for processes to exit.
Jul 7 00:56:28.012249 systemd[1]: session-25.scope: Deactivated successfully.
Jul 7 00:56:28.015952 systemd-logind[1555]: Removed session 25.
Jul 7 00:56:32.839122 systemd[1]: Started sshd@23-172.24.4.161:22-172.24.4.1:43800.service - OpenSSH per-connection server daemon (172.24.4.1:43800).
Jul 7 00:56:34.233092 sshd[6640]: Accepted publickey for core from 172.24.4.1 port 43800 ssh2: RSA SHA256:T8A8R3rUntE7f376/e5VUyp2qo4ckx8uZO3F4EBHBjc
Jul 7 00:56:34.236184 sshd[6640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:56:34.249339 systemd-logind[1555]: New session 26 of user core.
Jul 7 00:56:34.265284 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 7 00:56:36.566862 sshd[6640]: pam_unix(sshd:session): session closed for user core
Jul 7 00:56:36.575441 systemd[1]: sshd@23-172.24.4.161:22-172.24.4.1:43800.service: Deactivated successfully.
Jul 7 00:56:36.590071 systemd-logind[1555]: Session 26 logged out. Waiting for processes to exit.
Jul 7 00:56:36.591207 systemd[1]: session-26.scope: Deactivated successfully.
Jul 7 00:56:36.598190 systemd-logind[1555]: Removed session 26.
Jul 7 00:56:40.573659 systemd[1]: Started sshd@24-172.24.4.161:22-172.24.4.1:45026.service - OpenSSH per-connection server daemon (172.24.4.1:45026).
Jul 7 00:56:42.651691 sshd[6658]: Accepted publickey for core from 172.24.4.1 port 45026 ssh2: RSA SHA256:T8A8R3rUntE7f376/e5VUyp2qo4ckx8uZO3F4EBHBjc
Jul 7 00:56:42.654934 sshd[6658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:56:42.668137 systemd-logind[1555]: New session 27 of user core.
Jul 7 00:56:42.674117 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 7 00:56:43.632054 systemd[1]: run-containerd-runc-k8s.io-9f8fdc04b2289e7a4c43773b71be993b00bc68dc0dd9514a35844d50a5928060-runc.MNfjnm.mount: Deactivated successfully.
Jul 7 00:56:43.829854 sshd[6658]: pam_unix(sshd:session): session closed for user core
Jul 7 00:56:43.836111 systemd[1]: sshd@24-172.24.4.161:22-172.24.4.1:45026.service: Deactivated successfully.
Jul 7 00:56:43.845055 systemd-logind[1555]: Session 27 logged out. Waiting for processes to exit.
Jul 7 00:56:43.847712 systemd[1]: session-27.scope: Deactivated successfully.
Jul 7 00:56:43.850517 systemd-logind[1555]: Removed session 27.
Jul 7 00:56:49.138641 systemd[1]: Started sshd@25-172.24.4.161:22-172.24.4.1:37188.service - OpenSSH per-connection server daemon (172.24.4.1:37188).
Jul 7 00:56:59.038096 systemd-journald[1116]: Under memory pressure, flushing caches.
Jul 7 00:56:59.100309 systemd-journald[1116]: Under memory pressure, flushing caches.
Jul 7 00:56:51.427537 systemd-resolved[1470]: Under memory pressure, flushing caches.
Jul 7 00:56:51.427594 systemd-resolved[1470]: Flushed all caches.
Jul 7 00:56:58.996587 systemd-resolved[1470]: Under memory pressure, flushing caches.
Jul 7 00:56:58.996646 systemd-resolved[1470]: Flushed all caches.
Jul 7 00:56:59.387816 systemd[1]: run-containerd-runc-k8s.io-fe49e3abea5c9a8ecd3dfea5ea09d90368ddf75700341bad407e06fc5a7a0714-runc.IEK22h.mount: Deactivated successfully.
Jul 7 00:56:59.418964 systemd[1]: run-containerd-runc-k8s.io-b512b139420bd54ca164be970607de3beb493a7548b2225444ae7b46daecf6ba-runc.ZOmZ3W.mount: Deactivated successfully.
Jul 7 00:57:06.778443 systemd-journald[1116]: Under memory pressure, flushing caches.
Jul 7 00:57:06.779016 containerd[1579]: time="2025-07-07T00:57:06.540193950Z" level=error msg="post event" error="context deadline exceeded"
Jul 7 00:57:06.779016 containerd[1579]: time="2025-07-07T00:57:06.523644303Z" level=error msg="post event" error="context deadline exceeded"
Jul 7 00:57:06.519575 systemd-resolved[1470]: Under memory pressure, flushing caches.
Jul 7 00:57:06.519585 systemd-resolved[1470]: Flushed all caches.
Jul 7 00:57:06.846036 containerd[1579]: time="2025-07-07T00:57:06.598501532Z" level=error msg="ttrpc: received message on inactive stream" stream=75
Jul 7 00:57:06.846036 containerd[1579]: time="2025-07-07T00:57:06.816201101Z" level=error msg="ttrpc: received message on inactive stream" stream=71
Jul 7 00:57:06.839222 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f2178ad7463591f9bb2fd6b0418206805260d3cd33cf0f37d893c915a9980b6-rootfs.mount: Deactivated successfully.
Jul 7 00:57:06.921518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8b782a70d84eed6d61fc9c3b22fa7f819696a2605f24c2a9b683e776b37eeb7-rootfs.mount: Deactivated successfully.
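The out-of-order timestamps in the flush messages above are likely journald catching up after the pressure event, writing queued entries late. To see the memory pressure that systemd-journald and systemd-resolved are reacting to, a quick spot-check, assuming the kernel exposes PSI accounting (/proc/pressure, CONFIG_PSI=y, typical for a modern 6.6 build; the sample figures are illustrative):

    # "some" = share of time at least one task stalled on memory;
    # sustained non-zero avg10 lines up with the cache flushes above.
    cat /proc/pressure/memory
    # e.g.  some avg10=8.42 avg60=5.11 avg300=1.90 total=123456789
    #       full avg10=2.10 avg60=1.05 avg300=0.33 total=23456789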
Jul 7 00:57:06.933157 containerd[1579]: time="2025-07-07T00:57:06.848601224Z" level=info msg="shim disconnected" id=0f2178ad7463591f9bb2fd6b0418206805260d3cd33cf0f37d893c915a9980b6 namespace=k8s.io
Jul 7 00:57:06.933157 containerd[1579]: time="2025-07-07T00:57:06.930570094Z" level=warning msg="cleaning up after shim disconnected" id=0f2178ad7463591f9bb2fd6b0418206805260d3cd33cf0f37d893c915a9980b6 namespace=k8s.io
Jul 7 00:57:06.933157 containerd[1579]: time="2025-07-07T00:57:06.930601402Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:57:06.949637 containerd[1579]: time="2025-07-07T00:57:06.948788075Z" level=info msg="shim disconnected" id=e8b782a70d84eed6d61fc9c3b22fa7f819696a2605f24c2a9b683e776b37eeb7 namespace=k8s.io
Jul 7 00:57:06.958050 containerd[1579]: time="2025-07-07T00:57:06.957988248Z" level=warning msg="cleaning up after shim disconnected" id=e8b782a70d84eed6d61fc9c3b22fa7f819696a2605f24c2a9b683e776b37eeb7 namespace=k8s.io
Jul 7 00:57:06.958270 containerd[1579]: time="2025-07-07T00:57:06.958250371Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:57:07.031457 containerd[1579]: time="2025-07-07T00:57:07.030868323Z" level=info msg="shim disconnected" id=e3f9c3b1dcfe417dcaea5e8b5537f4e4bafecc49c1d178836640e4b45d4e27b0 namespace=k8s.io
Jul 7 00:57:07.035398 containerd[1579]: time="2025-07-07T00:57:07.031710865Z" level=warning msg="cleaning up after shim disconnected" id=e3f9c3b1dcfe417dcaea5e8b5537f4e4bafecc49c1d178836640e4b45d4e27b0 namespace=k8s.io
Jul 7 00:57:07.038516 containerd[1579]: time="2025-07-07T00:57:07.036250263Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:57:07.042784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3f9c3b1dcfe417dcaea5e8b5537f4e4bafecc49c1d178836640e4b45d4e27b0-rootfs.mount: Deactivated successfully.
Jul 7 00:57:09.151398 kubelet[2793]: I0707 00:57:09.151184 2793 scope.go:117] "RemoveContainer" containerID="e8b782a70d84eed6d61fc9c3b22fa7f819696a2605f24c2a9b683e776b37eeb7"
Jul 7 00:57:09.169121 kubelet[2793]: I0707 00:57:09.166209 2793 scope.go:117] "RemoveContainer" containerID="e3f9c3b1dcfe417dcaea5e8b5537f4e4bafecc49c1d178836640e4b45d4e27b0"
Jul 7 00:57:09.180555 containerd[1579]: time="2025-07-07T00:57:09.180480127Z" level=info msg="CreateContainer within sandbox \"611837bb167ad87c5feceb9e4a059297f2ea5d18ad1d20b462ddd0cc47209e8b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 7 00:57:09.185594 containerd[1579]: time="2025-07-07T00:57:09.184375775Z" level=info msg="CreateContainer within sandbox \"52641746f31c4e78d16e446d2c37622b1862d6c871a68505ded44e17a8e5fc7e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul 7 00:57:09.186268 kubelet[2793]: I0707 00:57:09.185332 2793 scope.go:117] "RemoveContainer" containerID="0f2178ad7463591f9bb2fd6b0418206805260d3cd33cf0f37d893c915a9980b6"
Jul 7 00:57:09.196301 containerd[1579]: time="2025-07-07T00:57:09.196233539Z" level=info msg="CreateContainer within sandbox \"1ac608acb7045d5d1ec660495a9f6a294ed7d1fcac139b0cf8de0f0968e003d1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 7 00:57:10.464526 systemd-journald[1116]: Under memory pressure, flushing caches.
Jul 7 00:57:09.704948 systemd-resolved[1470]: Under memory pressure, flushing caches.
Jul 7 00:57:10.465088 sshd[6697]: Accepted publickey for core from 172.24.4.1 port 37188 ssh2: RSA SHA256:T8A8R3rUntE7f376/e5VUyp2qo4ckx8uZO3F4EBHBjc
Jul 7 00:57:09.705009 systemd-resolved[1470]: Flushed all caches.
Jul 7 00:57:10.575862 sshd[6697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:57:10.595466 systemd-logind[1555]: New session 28 of user core.
Jul 7 00:57:10.604973 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 7 00:57:21.184509 systemd-journald[1116]: Under memory pressure, flushing caches.
Jul 7 00:57:11.720501 systemd-resolved[1470]: Under memory pressure, flushing caches.
Jul 7 00:57:21.283778 kubelet[2793]: I0707 00:57:16.265984 2793 status_manager.go:875] "Failed to update status for pod" pod="kube-system/kube-scheduler-ci-4081-3-4-7-8dfaddf5bb.novalocal" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ff8c0e1-46d1-4868-a2d7-e444fc56d588\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-07-07T00:57:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-07-07T00:57:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"containerd://0f2178ad7463591f9bb2fd6b0418206805260d3cd33cf0f37d893c915a9980b6\\\",\\\"image\\\":\\\"registry.k8s.io/kube-scheduler:v1.31.10\\\",\\\"imageID\\\":\\\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"containerd://0f2178ad7463591f9bb2fd6b0418206805260d3cd33cf0f37d893c915a9980b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-07-07T00:56:59Z\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-07-07T00:52:40Z\\\"}}}]}}\" for pod \"kube-system\"/\"kube-scheduler-ci-4081-3-4-7-8dfaddf5bb.novalocal\": etcdserver: request timed out"
Jul 7 00:57:21.283778 kubelet[2793]: E0707 00:57:16.731911 2793 controller.go:195] "Failed to update lease" err="etcdserver: request timed out"
Jul 7 00:57:11.720553 systemd-resolved[1470]: Flushed all caches.
Jul 7 00:57:21.932960 kubelet[2793]: E0707 00:57:21.932535 2793 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"ci-4081-3-4-7-8dfaddf5bb.novalocal\": the object has been modified; please apply your changes to the latest version and try again"
Jul 7 00:57:22.257183 containerd[1579]: time="2025-07-07T00:57:22.256336442Z" level=info msg="CreateContainer within sandbox \"52641746f31c4e78d16e446d2c37622b1862d6c871a68505ded44e17a8e5fc7e\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"b49452ff46979115c2e332c49acaeae8eaba7186328b9356561eda0638a2ce9d\""
Jul 7 00:57:22.263159 containerd[1579]: time="2025-07-07T00:57:22.262176040Z" level=info msg="StartContainer for \"b49452ff46979115c2e332c49acaeae8eaba7186328b9356561eda0638a2ce9d\""
Jul 7 00:57:22.821476 containerd[1579]: time="2025-07-07T00:57:22.820313843Z" level=info msg="CreateContainer within sandbox \"1ac608acb7045d5d1ec660495a9f6a294ed7d1fcac139b0cf8de0f0968e003d1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"b080a02f526c2e47455fc95852f10fcfbae86e859f4b8dc57283862bbcee3794\""
Jul 7 00:57:22.821476 containerd[1579]: time="2025-07-07T00:57:22.820717652Z" level=info msg="StartContainer for \"b49452ff46979115c2e332c49acaeae8eaba7186328b9356561eda0638a2ce9d\" returns successfully"
Jul 7 00:57:22.863723 containerd[1579]: time="2025-07-07T00:57:22.822806585Z" level=info msg="StartContainer for \"b080a02f526c2e47455fc95852f10fcfbae86e859f4b8dc57283862bbcee3794\""
Jul 7 00:57:22.902248 systemd[1]: run-containerd-runc-k8s.io-b080a02f526c2e47455fc95852f10fcfbae86e859f4b8dc57283862bbcee3794-runc.stRW8f.mount: Deactivated successfully.
Jul 7 00:57:22.953827 containerd[1579]: time="2025-07-07T00:57:22.953264775Z" level=info msg="CreateContainer within sandbox \"611837bb167ad87c5feceb9e4a059297f2ea5d18ad1d20b462ddd0cc47209e8b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"3dec7bc9ec492fea2e7a84cd0e6e923adbde8805617ae8ecff838795f1c571f9\""
Jul 7 00:57:22.955140 containerd[1579]: time="2025-07-07T00:57:22.954849042Z" level=info msg="StartContainer for \"3dec7bc9ec492fea2e7a84cd0e6e923adbde8805617ae8ecff838795f1c571f9\""
Jul 7 00:57:23.156215 containerd[1579]: time="2025-07-07T00:57:23.155952077Z" level=info msg="StartContainer for \"b080a02f526c2e47455fc95852f10fcfbae86e859f4b8dc57283862bbcee3794\" returns successfully"
Jul 7 00:57:23.289065 containerd[1579]: time="2025-07-07T00:57:23.286169575Z" level=info msg="StartContainer for \"3dec7bc9ec492fea2e7a84cd0e6e923adbde8805617ae8ecff838795f1c571f9\" returns successfully"
Jul 7 00:57:33.050204 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b49452ff46979115c2e332c49acaeae8eaba7186328b9356561eda0638a2ce9d-rootfs.mount: Deactivated successfully.
Jul 7 00:57:38.542176 systemd[1]: Started sshd@26-172.24.4.161:22-172.24.4.1:52840.service - OpenSSH per-connection server daemon (172.24.4.1:52840).
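At this point the kubelet has replaced the three containers that died during the stall (the Attempt:1 CreateContainer/StartContainer pairs above). A hedged way to confirm the replacements from the node, assuming crictl is pointed at this containerd socket; the container ID is taken verbatim from the log:

    # Old Attempt:0 containers should show Exited, Attempt:1 Running.
    crictl ps -a --name kube-scheduler
    # Inspect the replacement kube-scheduler container started above.
    crictl inspect b080a02f526c2e47455fc95852f10fcfbae86e859f4b8dc57283862bbcee3794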
Jul 7 00:57:46.277243 containerd[1579]: time="2025-07-07T00:57:41.591570996Z" level=error msg="collecting metrics for b49452ff46979115c2e332c49acaeae8eaba7186328b9356561eda0638a2ce9d" error="cgroups: cgroup deleted: unknown"
Jul 7 00:57:46.277243 containerd[1579]: time="2025-07-07T00:57:46.231745994Z" level=error msg="failed to handle container TaskExit event container_id:\"b49452ff46979115c2e332c49acaeae8eaba7186328b9356561eda0638a2ce9d\" id:\"b49452ff46979115c2e332c49acaeae8eaba7186328b9356561eda0638a2ce9d\" pid:6908 exit_status:1 exited_at:{seconds:1751849852 nanos:973897039}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown"
Jul 7 00:57:46.279933 kubelet[2793]: E0707 00:57:39.537821 2793 controller.go:195] "Failed to update lease" err="etcdserver: request timed out"
Jul 7 00:57:46.279933 kubelet[2793]: E0707 00:57:46.116528 2793 event.go:359] "Server rejected event (will not retry!)" err="etcdserver: request timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-4-7-8dfaddf5bb.novalocal.184fd220faad7153 kube-system 1658 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-4-7-8dfaddf5bb.novalocal,UID:f1565a39f14f48843a73850a6270528b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-4-7-8dfaddf5bb.novalocal,},FirstTimestamp:2025-07-07 00:56:59 +0000 UTC,LastTimestamp:2025-07-07 00:57:30.060385127 +0000 UTC m=+283.871063878,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-4-7-8dfaddf5bb.novalocal,}"
Jul 7 00:57:38.817860 sshd[6697]: pam_unix(sshd:session): session closed for user core
Jul 7 00:57:38.835803 systemd[1]: sshd@25-172.24.4.161:22-172.24.4.1:37188.service: Deactivated successfully.
Jul 7 00:57:38.847935 systemd[1]: session-28.scope: Deactivated successfully.
Jul 7 00:57:38.851743 systemd-logind[1555]: Session 28 logged out. Waiting for processes to exit.
Jul 7 00:57:38.857343 systemd-logind[1555]: Removed session 28.
Jul 7 00:57:46.464273 kubelet[2793]: E0707 00:57:46.464113 2793 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"ci-4081-3-4-7-8dfaddf5bb.novalocal\": the object has been modified; please apply your changes to the latest version and try again"
Jul 7 00:57:46.569401 containerd[1579]: time="2025-07-07T00:57:46.566564272Z" level=error msg="ttrpc: received message on inactive stream" stream=27
Jul 7 00:57:47.310786 sshd[7081]: Accepted publickey for core from 172.24.4.1 port 52840 ssh2: RSA SHA256:T8A8R3rUntE7f376/e5VUyp2qo4ckx8uZO3F4EBHBjc
Jul 7 00:57:47.314484 sshd[7081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:57:47.327645 systemd-logind[1555]: New session 29 of user core.
Jul 7 00:57:47.341441 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 7 00:57:47.605949 containerd[1579]: time="2025-07-07T00:57:47.605417206Z" level=info msg="TaskExit event container_id:\"b49452ff46979115c2e332c49acaeae8eaba7186328b9356561eda0638a2ce9d\" id:\"b49452ff46979115c2e332c49acaeae8eaba7186328b9356561eda0638a2ce9d\" pid:6908 exit_status:1 exited_at:{seconds:1751849852 nanos:973897039}"
Jul 7 00:57:47.611254 containerd[1579]: time="2025-07-07T00:57:47.610564422Z" level=info msg="shim disconnected" id=b49452ff46979115c2e332c49acaeae8eaba7186328b9356561eda0638a2ce9d namespace=k8s.io
Jul 7 00:57:47.611254 containerd[1579]: time="2025-07-07T00:57:47.610656494Z" level=warning msg="cleaning up after shim disconnected" id=b49452ff46979115c2e332c49acaeae8eaba7186328b9356561eda0638a2ce9d namespace=k8s.io
Jul 7 00:57:47.611254 containerd[1579]: time="2025-07-07T00:57:47.610694867Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 00:57:47.757943 containerd[1579]: time="2025-07-07T00:57:47.757855303Z" level=info msg="Ensure that container b49452ff46979115c2e332c49acaeae8eaba7186328b9356561eda0638a2ce9d in task-service has been cleanup successfully"
Jul 7 00:57:48.093583 sshd[7081]: pam_unix(sshd:session): session closed for user core
Jul 7 00:57:48.100061 systemd[1]: sshd@26-172.24.4.161:22-172.24.4.1:52840.service: Deactivated successfully.
Jul 7 00:57:48.112597 systemd-logind[1555]: Session 29 logged out. Waiting for processes to exit.
Jul 7 00:57:48.113764 systemd[1]: session-29.scope: Deactivated successfully.
Jul 7 00:57:48.117224 systemd-logind[1555]: Removed session 29.
Jul 7 00:57:48.509726 kubelet[2793]: I0707 00:57:48.508259 2793 scope.go:117] "RemoveContainer" containerID="e3f9c3b1dcfe417dcaea5e8b5537f4e4bafecc49c1d178836640e4b45d4e27b0"
Jul 7 00:57:48.509726 kubelet[2793]: I0707 00:57:48.508923 2793 scope.go:117] "RemoveContainer" containerID="b49452ff46979115c2e332c49acaeae8eaba7186328b9356561eda0638a2ce9d"
Jul 7 00:57:48.520853 containerd[1579]: time="2025-07-07T00:57:48.520254608Z" level=info msg="CreateContainer within sandbox \"52641746f31c4e78d16e446d2c37622b1862d6c871a68505ded44e17a8e5fc7e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:2,}"
Jul 7 00:57:48.548321 containerd[1579]: time="2025-07-07T00:57:48.547432105Z" level=info msg="RemoveContainer for \"e3f9c3b1dcfe417dcaea5e8b5537f4e4bafecc49c1d178836640e4b45d4e27b0\""
Jul 7 00:57:48.569118 containerd[1579]: time="2025-07-07T00:57:48.568248758Z" level=info msg="RemoveContainer for \"e3f9c3b1dcfe417dcaea5e8b5537f4e4bafecc49c1d178836640e4b45d4e27b0\" returns successfully"
Jul 7 00:57:48.584575 containerd[1579]: time="2025-07-07T00:57:48.584360234Z" level=info msg="CreateContainer within sandbox \"52641746f31c4e78d16e446d2c37622b1862d6c871a68505ded44e17a8e5fc7e\" for &ContainerMetadata{Name:tigera-operator,Attempt:2,} returns container id \"37c61d6b30e16a0c164f59729da4f3ea3a4030cae0a59ddae367a3fb3d41f40b\""
Jul 7 00:57:48.587699 containerd[1579]: time="2025-07-07T00:57:48.587544964Z" level=info msg="StartContainer for \"37c61d6b30e16a0c164f59729da4f3ea3a4030cae0a59ddae367a3fb3d41f40b\""
Jul 7 00:57:48.699374 containerd[1579]: time="2025-07-07T00:57:48.698031258Z" level=info msg="StartContainer for \"37c61d6b30e16a0c164f59729da4f3ea3a4030cae0a59ddae367a3fb3d41f40b\" returns successfully"
Jul 7 00:57:53.119646 systemd[1]: Started sshd@27-172.24.4.161:22-172.24.4.1:53568.service - OpenSSH per-connection server daemon (172.24.4.1:53568).
Jul 7 00:57:54.233946 sshd[7177]: Accepted publickey for core from 172.24.4.1 port 53568 ssh2: RSA SHA256:T8A8R3rUntE7f376/e5VUyp2qo4ckx8uZO3F4EBHBjc
Jul 7 00:57:54.239758 sshd[7177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 00:57:54.256905 systemd-logind[1555]: New session 30 of user core.
Jul 7 00:57:54.270386 systemd[1]: Started session-30.scope - Session 30 of User core.
Jul 7 00:57:55.070528 sshd[7177]: pam_unix(sshd:session): session closed for user core
Jul 7 00:57:55.078081 systemd[1]: sshd@27-172.24.4.161:22-172.24.4.1:53568.service: Deactivated successfully.
Jul 7 00:57:55.090320 systemd[1]: session-30.scope: Deactivated successfully.
Jul 7 00:57:55.092847 systemd-logind[1555]: Session 30 logged out. Waiting for processes to exit.
Jul 7 00:57:55.097764 systemd-logind[1555]: Removed session 30.
Jul 7 00:57:57.324170 systemd[1]: run-containerd-runc-k8s.io-9f8fdc04b2289e7a4c43773b71be993b00bc68dc0dd9514a35844d50a5928060-runc.kfOOlV.mount: Deactivated successfully.
Jul 7 00:58:15.013690 kubelet[2793]: E0707 00:58:15.013205 2793 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.6s"
Jul 7 00:58:57.034018 kubelet[2793]: E0707 00:58:57.033090 2793 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.591s"
Jul 7 00:58:59.694781 kubelet[2793]: E0707 00:58:59.694611 2793 controller.go:195] "Failed to update lease" err="Put \"https://172.24.4.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-7-8dfaddf5bb.novalocal?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 7 00:59:00.454298 kubelet[2793]: E0707 00:59:00.454239 2793 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/burstable/pod71af1a208fd8b2e8ada0b973b3974e53/3dec7bc9ec492fea2e7a84cd0e6e923adbde8805617ae8ecff838795f1c571f9\": RecentStats: unable to find data in memory cache]"
Jul 7 00:59:01.063988 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3dec7bc9ec492fea2e7a84cd0e6e923adbde8805617ae8ecff838795f1c571f9-rootfs.mount: Deactivated successfully.
Jul 7 00:59:07.027268 kubelet[2793]: E0707 00:59:07.026878 2793 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-4-7-8dfaddf5bb.novalocal.184fd220faad7153 kube-system 1752 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-4-7-8dfaddf5bb.novalocal,UID:f1565a39f14f48843a73850a6270528b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-4-7-8dfaddf5bb.novalocal,},FirstTimestamp:2025-07-07 00:56:59 +0000 UTC,LastTimestamp:2025-07-07 00:58:56.980655995 +0000 UTC m=+370.791334746,Count:12,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-4-7-8dfaddf5bb.novalocal,}"
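The recurring "Failed to update lease" errors are the kubelet missing renewals of its node lease while etcd and the apiserver are slow; if renewals lapse past the node-monitor grace period, the control plane would mark the node NotReady. A hedged check from any working client, assuming the standard coordination.k8s.io lease objects (the lease name is taken verbatim from the Put URL above):

    # spec.renewTime shows how stale the kubelet heartbeat is.
    kubectl -n kube-node-lease get lease ci-4081-3-4-7-8dfaddf5bb.novalocal -o yaml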