Jul 7 01:42:31.086302 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025
Jul 7 01:42:31.086346 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 7 01:42:31.086357 kernel: BIOS-provided physical RAM map:
Jul 7 01:42:31.086365 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 7 01:42:31.086372 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 7 01:42:31.086384 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 7 01:42:31.086393 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Jul 7 01:42:31.086401 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Jul 7 01:42:31.086409 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 7 01:42:31.086416 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 7 01:42:31.086424 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Jul 7 01:42:31.086432 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 7 01:42:31.086467 kernel: NX (Execute Disable) protection: active
Jul 7 01:42:31.086477 kernel: APIC: Static calls initialized
Jul 7 01:42:31.086490 kernel: SMBIOS 3.0.0 present.
Jul 7 01:42:31.086503 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Jul 7 01:42:31.086512 kernel: Hypervisor detected: KVM
Jul 7 01:42:31.086519 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 7 01:42:31.086528 kernel: kvm-clock: using sched offset of 3755824875 cycles
Jul 7 01:42:31.086540 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 7 01:42:31.086548 kernel: tsc: Detected 1996.249 MHz processor
Jul 7 01:42:31.086557 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 7 01:42:31.086566 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 7 01:42:31.086575 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Jul 7 01:42:31.086583 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 7 01:42:31.086592 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 7 01:42:31.086600 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Jul 7 01:42:31.086608 kernel: ACPI: Early table checksum verification disabled
Jul 7 01:42:31.086621 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Jul 7 01:42:31.086631 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 01:42:31.086639 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 01:42:31.086647 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 01:42:31.086656 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Jul 7 01:42:31.086664 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 01:42:31.086673 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 01:42:31.086681 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Jul 7 01:42:31.086689 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Jul 7 01:42:31.086700 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Jul 7 01:42:31.086708 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Jul 7 01:42:31.086716 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Jul 7 01:42:31.086728 kernel: No NUMA configuration found
Jul 7 01:42:31.086737 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Jul 7 01:42:31.086745 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
Jul 7 01:42:31.086757 kernel: Zone ranges:
Jul 7 01:42:31.086765 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 7 01:42:31.086778 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 7 01:42:31.086786 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Jul 7 01:42:31.086795 kernel: Movable zone start for each node
Jul 7 01:42:31.086803 kernel: Early memory node ranges
Jul 7 01:42:31.086812 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 7 01:42:31.086820 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Jul 7 01:42:31.086832 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Jul 7 01:42:31.086841 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Jul 7 01:42:31.086849 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 7 01:42:31.086858 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 7 01:42:31.086867 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jul 7 01:42:31.086875 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 7 01:42:31.086884 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 7 01:42:31.086896 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 7 01:42:31.086905 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 7 01:42:31.086916 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 7 01:42:31.086925 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 7 01:42:31.086933 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 7 01:42:31.086941 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 7 01:42:31.086950 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 7 01:42:31.086959 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 7 01:42:31.086967 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 7 01:42:31.086975 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jul 7 01:42:31.086984 kernel: Booting paravirtualized kernel on KVM
Jul 7 01:42:31.086995 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 7 01:42:31.087004 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 7 01:42:31.087016 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Jul 7 01:42:31.087025 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Jul 7 01:42:31.087033 kernel: pcpu-alloc: [0] 0 1
Jul 7 01:42:31.087041 kernel: kvm-guest: PV spinlocks disabled, no host support
Jul 7 01:42:31.087052 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 7 01:42:31.087062 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 01:42:31.087073 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 7 01:42:31.087082 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 01:42:31.087090 kernel: Fallback order for Node 0: 0
Jul 7 01:42:31.087099 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Jul 7 01:42:31.087108 kernel: Policy zone: Normal
Jul 7 01:42:31.087116 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 01:42:31.087125 kernel: software IO TLB: area num 2.
Jul 7 01:42:31.087133 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 227308K reserved, 0K cma-reserved)
Jul 7 01:42:31.087142 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 7 01:42:31.087153 kernel: ftrace: allocating 37966 entries in 149 pages
Jul 7 01:42:31.087162 kernel: ftrace: allocated 149 pages with 4 groups
Jul 7 01:42:31.087170 kernel: Dynamic Preempt: voluntary
Jul 7 01:42:31.087179 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 01:42:31.087205 kernel: rcu: RCU event tracing is enabled.
Jul 7 01:42:31.087220 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 7 01:42:31.087229 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 01:42:31.087237 kernel: Rude variant of Tasks RCU enabled.
Jul 7 01:42:31.087260 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 01:42:31.087286 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 01:42:31.087298 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 7 01:42:31.087307 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 7 01:42:31.087316 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 01:42:31.087324 kernel: Console: colour VGA+ 80x25
Jul 7 01:42:31.087332 kernel: printk: console [tty0] enabled
Jul 7 01:42:31.087341 kernel: printk: console [ttyS0] enabled
Jul 7 01:42:31.087350 kernel: ACPI: Core revision 20230628
Jul 7 01:42:31.087358 kernel: APIC: Switch to symmetric I/O mode setup
Jul 7 01:42:31.087370 kernel: x2apic enabled
Jul 7 01:42:31.087383 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 7 01:42:31.087392 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 7 01:42:31.087400 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 7 01:42:31.087409 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jul 7 01:42:31.087417 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 7 01:42:31.087426 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 7 01:42:31.087435 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 7 01:42:31.087443 kernel: Spectre V2 : Mitigation: Retpolines
Jul 7 01:42:31.087468 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 7 01:42:31.087477 kernel: Speculative Store Bypass: Vulnerable
Jul 7 01:42:31.087485 kernel: x86/fpu: x87 FPU will use FXSAVE
Jul 7 01:42:31.087494 kernel: Freeing SMP alternatives memory: 32K
Jul 7 01:42:31.087503 kernel: pid_max: default: 32768 minimum: 301
Jul 7 01:42:31.087519 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 7 01:42:31.087530 kernel: landlock: Up and running.
Jul 7 01:42:31.087539 kernel: SELinux: Initializing.
Jul 7 01:42:31.087548 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 01:42:31.087558 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 01:42:31.087567 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jul 7 01:42:31.087577 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 01:42:31.087589 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 01:42:31.087598 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 01:42:31.087611 kernel: Performance Events: AMD PMU driver.
Jul 7 01:42:31.087620 kernel: ... version: 0
Jul 7 01:42:31.087629 kernel: ... bit width: 48
Jul 7 01:42:31.087641 kernel: ... generic registers: 4
Jul 7 01:42:31.087650 kernel: ... value mask: 0000ffffffffffff
Jul 7 01:42:31.087659 kernel: ... max period: 00007fffffffffff
Jul 7 01:42:31.087668 kernel: ... fixed-purpose events: 0
Jul 7 01:42:31.087677 kernel: ... event mask: 000000000000000f
Jul 7 01:42:31.087686 kernel: signal: max sigframe size: 1440
Jul 7 01:42:31.087695 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 01:42:31.087704 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 01:42:31.087713 kernel: smp: Bringing up secondary CPUs ...
Jul 7 01:42:31.087724 kernel: smpboot: x86: Booting SMP configuration:
Jul 7 01:42:31.087733 kernel: .... node #0, CPUs: #1
Jul 7 01:42:31.087742 kernel: smp: Brought up 1 node, 2 CPUs
Jul 7 01:42:31.087751 kernel: smpboot: Max logical packages: 2
Jul 7 01:42:31.087760 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jul 7 01:42:31.087769 kernel: devtmpfs: initialized
Jul 7 01:42:31.087778 kernel: x86/mm: Memory block size: 128MB
Jul 7 01:42:31.087787 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 01:42:31.087796 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 7 01:42:31.087808 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 01:42:31.087817 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 01:42:31.087826 kernel: audit: initializing netlink subsys (disabled)
Jul 7 01:42:31.087835 kernel: audit: type=2000 audit(1751852550.162:1): state=initialized audit_enabled=0 res=1
Jul 7 01:42:31.087847 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 01:42:31.087856 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 7 01:42:31.087865 kernel: cpuidle: using governor menu
Jul 7 01:42:31.087874 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 01:42:31.087883 kernel: dca service started, version 1.12.1
Jul 7 01:42:31.087894 kernel: PCI: Using configuration type 1 for base access
Jul 7 01:42:31.087903 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 7 01:42:31.087912 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 01:42:31.087921 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 01:42:31.087930 kernel: ACPI: Added _OSI(Module Device)
Jul 7 01:42:31.087940 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 01:42:31.087949 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 01:42:31.087958 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 01:42:31.087967 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 7 01:42:31.087978 kernel: ACPI: Interpreter enabled
Jul 7 01:42:31.087997 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 7 01:42:31.088006 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 7 01:42:31.088015 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 7 01:42:31.088025 kernel: PCI: Using E820 reservations for host bridge windows
Jul 7 01:42:31.088034 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 7 01:42:31.088043 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 7 01:42:31.088223 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 7 01:42:31.088342 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 7 01:42:31.090477 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 7 01:42:31.090503 kernel: acpiphp: Slot [3] registered
Jul 7 01:42:31.090514 kernel: acpiphp: Slot [4] registered
Jul 7 01:42:31.090523 kernel: acpiphp: Slot [5] registered
Jul 7 01:42:31.090532 kernel: acpiphp: Slot [6] registered
Jul 7 01:42:31.090541 kernel: acpiphp: Slot [7] registered
Jul 7 01:42:31.090550 kernel: acpiphp: Slot [8] registered
Jul 7 01:42:31.090559 kernel: acpiphp: Slot [9] registered
Jul 7 01:42:31.090573 kernel: acpiphp: Slot [10] registered
Jul 7 01:42:31.090582 kernel: acpiphp: Slot [11] registered
Jul 7 01:42:31.090591 kernel: acpiphp: Slot [12] registered
Jul 7 01:42:31.090600 kernel: acpiphp: Slot [13] registered
Jul 7 01:42:31.090609 kernel: acpiphp: Slot [14] registered
Jul 7 01:42:31.090618 kernel: acpiphp: Slot [15] registered
Jul 7 01:42:31.090627 kernel: acpiphp: Slot [16] registered
Jul 7 01:42:31.090636 kernel: acpiphp: Slot [17] registered
Jul 7 01:42:31.090645 kernel: acpiphp: Slot [18] registered
Jul 7 01:42:31.090656 kernel: acpiphp: Slot [19] registered
Jul 7 01:42:31.090665 kernel: acpiphp: Slot [20] registered
Jul 7 01:42:31.090676 kernel: acpiphp: Slot [21] registered
Jul 7 01:42:31.090684 kernel: acpiphp: Slot [22] registered
Jul 7 01:42:31.090694 kernel: acpiphp: Slot [23] registered
Jul 7 01:42:31.090703 kernel: acpiphp: Slot [24] registered
Jul 7 01:42:31.090711 kernel: acpiphp: Slot [25] registered
Jul 7 01:42:31.090720 kernel: acpiphp: Slot [26] registered
Jul 7 01:42:31.090729 kernel: acpiphp: Slot [27] registered
Jul 7 01:42:31.090738 kernel: acpiphp: Slot [28] registered
Jul 7 01:42:31.090749 kernel: acpiphp: Slot [29] registered
Jul 7 01:42:31.090758 kernel: acpiphp: Slot [30] registered
Jul 7 01:42:31.090767 kernel: acpiphp: Slot [31] registered
Jul 7 01:42:31.090776 kernel: PCI host bridge to bus 0000:00
Jul 7 01:42:31.090897 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 7 01:42:31.090990 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 7 01:42:31.091079 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 7 01:42:31.091173 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 7 01:42:31.091262 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Jul 7 01:42:31.091350 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 7 01:42:31.091498 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 7 01:42:31.091618 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 7 01:42:31.091725 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jul 7 01:42:31.091822 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Jul 7 01:42:31.091928 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jul 7 01:42:31.092038 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jul 7 01:42:31.092137 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jul 7 01:42:31.092236 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jul 7 01:42:31.092414 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jul 7 01:42:31.094576 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jul 7 01:42:31.094696 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jul 7 01:42:31.094806 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jul 7 01:42:31.094907 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jul 7 01:42:31.095009 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
Jul 7 01:42:31.095108 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Jul 7 01:42:31.095209 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Jul 7 01:42:31.095307 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 7 01:42:31.095423 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jul 7 01:42:31.096648 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Jul 7 01:42:31.096754 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Jul 7 01:42:31.096853 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
Jul 7 01:42:31.096950 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Jul 7 01:42:31.097058 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jul 7 01:42:31.097158 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jul 7 01:42:31.097265 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Jul 7 01:42:31.099554 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
Jul 7 01:42:31.099696 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Jul 7 01:42:31.099805 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Jul 7 01:42:31.099906 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jul 7 01:42:31.100041 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Jul 7 01:42:31.100144 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Jul 7 01:42:31.100254 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
Jul 7 01:42:31.100353 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
Jul 7 01:42:31.100367 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 7 01:42:31.100377 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 7 01:42:31.100387 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 7 01:42:31.100396 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 7 01:42:31.100405 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 7 01:42:31.100415 kernel: iommu: Default domain type: Translated
Jul 7 01:42:31.100428 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 7 01:42:31.100438 kernel: PCI: Using ACPI for IRQ routing
Jul 7 01:42:31.100477 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 7 01:42:31.100488 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 7 01:42:31.100498 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jul 7 01:42:31.100603 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 7 01:42:31.100704 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 7 01:42:31.100804 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 7 01:42:31.100818 kernel: vgaarb: loaded
Jul 7 01:42:31.100832 kernel: clocksource: Switched to clocksource kvm-clock
Jul 7 01:42:31.100841 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 01:42:31.100851 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 01:42:31.100860 kernel: pnp: PnP ACPI init
Jul 7 01:42:31.106700 kernel: pnp 00:03: [dma 2]
Jul 7 01:42:31.106730 kernel: pnp: PnP ACPI: found 5 devices
Jul 7 01:42:31.106741 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 7 01:42:31.106751 kernel: NET: Registered PF_INET protocol family
Jul 7 01:42:31.106767 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 01:42:31.106776 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 7 01:42:31.106786 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 01:42:31.106795 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 01:42:31.106811 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 7 01:42:31.106829 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 7 01:42:31.106849 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 01:42:31.106874 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 01:42:31.106885 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 01:42:31.106898 kernel: NET: Registered PF_XDP protocol family
Jul 7 01:42:31.107003 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 7 01:42:31.107088 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 7 01:42:31.107198 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 7 01:42:31.107285 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jul 7 01:42:31.107376 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jul 7 01:42:31.107592 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 7 01:42:31.107699 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 7 01:42:31.107719 kernel: PCI: CLS 0 bytes, default 64
Jul 7 01:42:31.107728 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 7 01:42:31.107738 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Jul 7 01:42:31.107747 kernel: Initialise system trusted keyrings
Jul 7 01:42:31.107757 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 7 01:42:31.107766 kernel: Key type asymmetric registered
Jul 7 01:42:31.107775 kernel: Asymmetric key parser 'x509' registered
Jul 7 01:42:31.107784 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 7 01:42:31.107792 kernel: io scheduler mq-deadline registered
Jul 7 01:42:31.107804 kernel: io scheduler kyber registered
Jul 7 01:42:31.107813 kernel: io scheduler bfq registered
Jul 7 01:42:31.107822 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 7 01:42:31.107832 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jul 7 01:42:31.107841 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 7 01:42:31.107850 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 7 01:42:31.107859 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 7 01:42:31.107868 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 01:42:31.107878 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 7 01:42:31.107889 kernel: random: crng init done
Jul 7 01:42:31.107899 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 7 01:42:31.107908 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 7 01:42:31.107917 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 7 01:42:31.107926 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 7 01:42:31.108047 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 7 01:42:31.108139 kernel: rtc_cmos 00:04: registered as rtc0
Jul 7 01:42:31.108227 kernel: rtc_cmos 00:04: setting system clock to 2025-07-07T01:42:30 UTC (1751852550)
Jul 7 01:42:31.108321 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 7 01:42:31.108335 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 7 01:42:31.108344 kernel: NET: Registered PF_INET6 protocol family
Jul 7 01:42:31.108353 kernel: Segment Routing with IPv6
Jul 7 01:42:31.108362 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 01:42:31.108372 kernel: NET: Registered PF_PACKET protocol family
Jul 7 01:42:31.108381 kernel: Key type dns_resolver registered
Jul 7 01:42:31.108389 kernel: IPI shorthand broadcast: enabled
Jul 7 01:42:31.108399 kernel: sched_clock: Marking stable (1920008083, 173684148)->(2147965329, -54273098)
Jul 7 01:42:31.108412 kernel: registered taskstats version 1
Jul 7 01:42:31.108421 kernel: Loading compiled-in X.509 certificates
Jul 7 01:42:31.108430 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b'
Jul 7 01:42:31.108439 kernel: Key type .fscrypt registered
Jul 7 01:42:31.108472 kernel: Key type fscrypt-provisioning registered
Jul 7 01:42:31.108482 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 01:42:31.108492 kernel: ima: Allocated hash algorithm: sha1
Jul 7 01:42:31.108500 kernel: ima: No architecture policies found
Jul 7 01:42:31.108513 kernel: clk: Disabling unused clocks
Jul 7 01:42:31.108522 kernel: Freeing unused kernel image (initmem) memory: 42868K
Jul 7 01:42:31.108532 kernel: Write protecting the kernel read-only data: 36864k
Jul 7 01:42:31.108540 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Jul 7 01:42:31.108550 kernel: Run /init as init process
Jul 7 01:42:31.108559 kernel: with arguments:
Jul 7 01:42:31.108567 kernel: /init
Jul 7 01:42:31.108576 kernel: with environment:
Jul 7 01:42:31.108585 kernel: HOME=/
Jul 7 01:42:31.108593 kernel: TERM=linux
Jul 7 01:42:31.108605 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 01:42:31.108617 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 7 01:42:31.108629 systemd[1]: Detected virtualization kvm.
Jul 7 01:42:31.108639 systemd[1]: Detected architecture x86-64.
Jul 7 01:42:31.108649 systemd[1]: Running in initrd.
Jul 7 01:42:31.108658 systemd[1]: No hostname configured, using default hostname.
Jul 7 01:42:31.108667 systemd[1]: Hostname set to .
Jul 7 01:42:31.108680 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 01:42:31.108689 systemd[1]: Queued start job for default target initrd.target.
Jul 7 01:42:31.108699 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 01:42:31.108709 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 01:42:31.108719 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 01:42:31.108729 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 01:42:31.108739 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 01:42:31.108760 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 01:42:31.108774 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 01:42:31.108784 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 01:42:31.108794 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 01:42:31.108804 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 01:42:31.108817 systemd[1]: Reached target paths.target - Path Units.
Jul 7 01:42:31.108826 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 01:42:31.108836 systemd[1]: Reached target swap.target - Swaps.
Jul 7 01:42:31.108846 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 01:42:31.108856 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 01:42:31.108866 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 01:42:31.108876 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 01:42:31.108886 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 7 01:42:31.108896 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 01:42:31.108909 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 01:42:31.108919 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 01:42:31.108929 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 01:42:31.108939 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 01:42:31.108949 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 01:42:31.108959 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 01:42:31.108969 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 01:42:31.108979 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 01:42:31.108991 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 01:42:31.109001 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 01:42:31.109035 systemd-journald[184]: Collecting audit messages is disabled.
Jul 7 01:42:31.109061 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 01:42:31.109073 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 01:42:31.109084 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 01:42:31.109095 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 01:42:31.109106 systemd-journald[184]: Journal started
Jul 7 01:42:31.109133 systemd-journald[184]: Runtime Journal (/run/log/journal/5a4cb8c532514c88bc56767442e2547b) is 8.0M, max 78.3M, 70.3M free.
Jul 7 01:42:31.097738 systemd-modules-load[186]: Inserted module 'overlay'
Jul 7 01:42:31.150846 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 01:42:31.150880 kernel: Bridge firewalling registered
Jul 7 01:42:31.150893 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 01:42:31.134740 systemd-modules-load[186]: Inserted module 'br_netfilter'
Jul 7 01:42:31.152862 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 01:42:31.153639 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 01:42:31.163671 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 01:42:31.165893 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 01:42:31.178286 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 01:42:31.182876 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 01:42:31.187316 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 01:42:31.195112 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 01:42:31.198840 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 01:42:31.200552 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 01:42:31.206689 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 01:42:31.213681 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 01:42:31.215481 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 01:42:31.227768 dracut-cmdline[217]: dracut-dracut-053
Jul 7 01:42:31.233015 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 7 01:42:31.257701 systemd-resolved[218]: Positive Trust Anchors:
Jul 7 01:42:31.257720 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 01:42:31.257762 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 01:42:31.260644 systemd-resolved[218]: Defaulting to hostname 'linux'.
Jul 7 01:42:31.261810 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 01:42:31.262492 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 01:42:31.348605 kernel: SCSI subsystem initialized
Jul 7 01:42:31.361530 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 01:42:31.375044 kernel: iscsi: registered transport (tcp)
Jul 7 01:42:31.399565 kernel: iscsi: registered transport (qla4xxx)
Jul 7 01:42:31.399675 kernel: QLogic iSCSI HBA Driver
Jul 7 01:42:31.467960 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 01:42:31.478630 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 01:42:31.534190 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 01:42:31.534296 kernel: device-mapper: uevent: version 1.0.3
Jul 7 01:42:31.536920 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 7 01:42:31.596517 kernel: raid6: sse2x4 gen() 5490 MB/s
Jul 7 01:42:31.615518 kernel: raid6: sse2x2 gen() 6291 MB/s
Jul 7 01:42:31.633982 kernel: raid6: sse2x1 gen() 8175 MB/s
Jul 7 01:42:31.634064 kernel: raid6: using algorithm sse2x1 gen() 8175 MB/s
Jul 7 01:42:31.652872 kernel: raid6: .... xor() 7139 MB/s, rmw enabled
Jul 7 01:42:31.652928 kernel: raid6: using ssse3x2 recovery algorithm
Jul 7 01:42:31.675515 kernel: xor: measuring software checksum speed
Jul 7 01:42:31.675587 kernel: prefetch64-sse : 16951 MB/sec
Jul 7 01:42:31.678022 kernel: generic_sse : 16778 MB/sec
Jul 7 01:42:31.678074 kernel: xor: using function: prefetch64-sse (16951 MB/sec)
Jul 7 01:42:31.878602 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 01:42:31.898407 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 01:42:31.910766 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 01:42:31.922903 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Jul 7 01:42:31.927560 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 01:42:31.944653 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 01:42:31.965893 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation
Jul 7 01:42:32.016052 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 01:42:32.020682 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 01:42:32.103243 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 01:42:32.115549 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 01:42:32.169955 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 01:42:32.173239 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 01:42:32.175251 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 01:42:32.177239 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 01:42:32.184700 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 01:42:32.196812 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 01:42:32.213501 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jul 7 01:42:32.217562 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jul 7 01:42:32.235379 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 7 01:42:32.235476 kernel: GPT:17805311 != 20971519
Jul 7 01:42:32.235490 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 7 01:42:32.235502 kernel: GPT:17805311 != 20971519
Jul 7 01:42:32.236333 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 01:42:32.238862 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 01:42:32.239267 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 01:42:32.239422 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 01:42:32.241413 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 01:42:32.242353 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 01:42:32.242513 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 01:42:32.246431 kernel: libata version 3.00 loaded.
Jul 7 01:42:32.243666 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 01:42:32.248693 kernel: ata_piix 0000:00:01.1: version 2.13
Jul 7 01:42:32.253687 kernel: scsi host0: ata_piix
Jul 7 01:42:32.253936 kernel: scsi host1: ata_piix
Jul 7 01:42:32.254066 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Jul 7 01:42:32.254081 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Jul 7 01:42:32.257566 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 01:42:32.326097 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 01:42:32.334676 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 01:42:32.347665 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 01:42:32.442446 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (460)
Jul 7 01:42:32.446912 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (458)
Jul 7 01:42:32.461956 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 7 01:42:32.469881 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 7 01:42:32.478037 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 7 01:42:32.482931 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 7 01:42:32.483597 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 7 01:42:32.492809 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 01:42:32.505236 disk-uuid[513]: Primary Header is updated.
Jul 7 01:42:32.505236 disk-uuid[513]: Secondary Entries is updated.
Jul 7 01:42:32.505236 disk-uuid[513]: Secondary Header is updated.
Jul 7 01:42:32.514508 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 01:42:32.520595 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 01:42:33.541538 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 01:42:33.544281 disk-uuid[514]: The operation has completed successfully.
Jul 7 01:42:33.720988 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 01:42:33.722636 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 01:42:33.732789 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 01:42:33.753495 sh[527]: Success
Jul 7 01:42:33.784675 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Jul 7 01:42:33.895031 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 01:42:33.930796 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 01:42:33.938252 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 01:42:33.970178 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f
Jul 7 01:42:33.970255 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 7 01:42:33.974998 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 7 01:42:33.980000 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 7 01:42:33.983778 kernel: BTRFS info (device dm-0): using free space tree
Jul 7 01:42:34.005169 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 01:42:34.007681 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 01:42:34.017836 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 01:42:34.024745 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 01:42:34.065486 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 01:42:34.065589 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 01:42:34.065651 kernel: BTRFS info (device vda6): using free space tree
Jul 7 01:42:34.075510 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 7 01:42:34.093564 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 7 01:42:34.101734 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 01:42:34.114876 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 01:42:34.121721 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 01:42:34.162346 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 01:42:34.175684 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 01:42:34.199243 systemd-networkd[712]: lo: Link UP
Jul 7 01:42:34.199253 systemd-networkd[712]: lo: Gained carrier
Jul 7 01:42:34.200675 systemd-networkd[712]: Enumeration completed
Jul 7 01:42:34.200760 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 01:42:34.201548 systemd-networkd[712]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 01:42:34.201552 systemd-networkd[712]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 01:42:34.202425 systemd-networkd[712]: eth0: Link UP
Jul 7 01:42:34.202430 systemd-networkd[712]: eth0: Gained carrier
Jul 7 01:42:34.202437 systemd-networkd[712]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 01:42:34.205092 systemd[1]: Reached target network.target - Network.
Jul 7 01:42:34.221502 systemd-networkd[712]: eth0: DHCPv4 address 172.24.4.32/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jul 7 01:42:34.409614 ignition[658]: Ignition 2.19.0
Jul 7 01:42:34.409641 ignition[658]: Stage: fetch-offline
Jul 7 01:42:34.412508 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 01:42:34.409712 ignition[658]: no configs at "/usr/lib/ignition/base.d"
Jul 7 01:42:34.409730 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 7 01:42:34.409882 ignition[658]: parsed url from cmdline: ""
Jul 7 01:42:34.409887 ignition[658]: no config URL provided
Jul 7 01:42:34.409893 ignition[658]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 01:42:34.409904 ignition[658]: no config at "/usr/lib/ignition/user.ign"
Jul 7 01:42:34.409910 ignition[658]: failed to fetch config: resource requires networking
Jul 7 01:42:34.410162 ignition[658]: Ignition finished successfully
Jul 7 01:42:34.426230 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 7 01:42:34.455725 ignition[721]: Ignition 2.19.0
Jul 7 01:42:34.456681 ignition[721]: Stage: fetch
Jul 7 01:42:34.456926 ignition[721]: no configs at "/usr/lib/ignition/base.d"
Jul 7 01:42:34.456939 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 7 01:42:34.457060 ignition[721]: parsed url from cmdline: ""
Jul 7 01:42:34.457065 ignition[721]: no config URL provided
Jul 7 01:42:34.457071 ignition[721]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 01:42:34.457082 ignition[721]: no config at "/usr/lib/ignition/user.ign"
Jul 7 01:42:34.457263 ignition[721]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jul 7 01:42:34.458340 ignition[721]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jul 7 01:42:34.458366 ignition[721]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jul 7 01:42:34.874564 ignition[721]: GET result: OK
Jul 7 01:42:34.874993 ignition[721]: parsing config with SHA512: 7d9d06dc30d4c0986af368a27361a0c146f6f9318046e9211bfd4fa5e0c67d9f4eaafb9be7b20b3309d69724cd452657e1f5d4ff37eeb262245e55ef10abd76f
Jul 7 01:42:34.897586 unknown[721]: fetched base config from "system"
Jul 7 01:42:34.897601 unknown[721]: fetched base config from "system"
Jul 7 01:42:34.897609 unknown[721]: fetched user config from "openstack"
Jul 7 01:42:34.902115 ignition[721]: fetch: fetch complete
Jul 7 01:42:34.902128 ignition[721]: fetch: fetch passed
Jul 7 01:42:34.902209 ignition[721]: Ignition finished successfully
Jul 7 01:42:34.909178 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 7 01:42:34.921123 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 01:42:34.938369 ignition[728]: Ignition 2.19.0
Jul 7 01:42:34.938389 ignition[728]: Stage: kargs
Jul 7 01:42:34.939643 ignition[728]: no configs at "/usr/lib/ignition/base.d"
Jul 7 01:42:34.942032 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 01:42:34.939663 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 7 01:42:34.940799 ignition[728]: kargs: kargs passed
Jul 7 01:42:34.940860 ignition[728]: Ignition finished successfully
Jul 7 01:42:34.950931 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 01:42:34.968794 ignition[734]: Ignition 2.19.0
Jul 7 01:42:34.970420 ignition[734]: Stage: disks
Jul 7 01:42:34.970941 ignition[734]: no configs at "/usr/lib/ignition/base.d"
Jul 7 01:42:34.970970 ignition[734]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 7 01:42:34.974870 ignition[734]: disks: disks passed
Jul 7 01:42:34.974991 ignition[734]: Ignition finished successfully
Jul 7 01:42:34.978025 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 01:42:34.980410 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 01:42:34.981928 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 01:42:34.984059 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 01:42:34.986045 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 01:42:34.987833 systemd[1]: Reached target basic.target - Basic System.
Jul 7 01:42:34.995763 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 01:42:35.027843 systemd-fsck[743]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jul 7 01:42:35.037910 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 01:42:35.049724 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 01:42:35.237767 kernel: EXT4-fs (vda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none.
Jul 7 01:42:35.237950 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 01:42:35.239062 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 01:42:35.245534 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 01:42:35.247563 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 01:42:35.249801 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 7 01:42:35.262538 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jul 7 01:42:35.264513 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 01:42:35.264575 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 01:42:35.266835 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 01:42:35.278530 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (751)
Jul 7 01:42:35.284222 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 01:42:35.303089 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 01:42:35.303122 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 01:42:35.303137 kernel: BTRFS info (device vda6): using free space tree
Jul 7 01:42:35.303149 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 7 01:42:35.308553 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 01:42:35.379421 initrd-setup-root[779]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 01:42:35.387579 initrd-setup-root[786]: cut: /sysroot/etc/group: No such file or directory
Jul 7 01:42:35.399372 initrd-setup-root[793]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 01:42:35.404823 initrd-setup-root[800]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 01:42:35.476646 systemd-networkd[712]: eth0: Gained IPv6LL
Jul 7 01:42:35.533565 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 01:42:35.540580 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 01:42:35.545638 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 01:42:35.553067 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 01:42:35.555495 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 01:42:35.589393 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 01:42:35.597529 ignition[867]: INFO : Ignition 2.19.0
Jul 7 01:42:35.597529 ignition[867]: INFO : Stage: mount
Jul 7 01:42:35.598898 ignition[867]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 01:42:35.598898 ignition[867]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 7 01:42:35.598898 ignition[867]: INFO : mount: mount passed
Jul 7 01:42:35.601431 ignition[867]: INFO : Ignition finished successfully
Jul 7 01:42:35.600129 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 01:42:42.506063 coreos-metadata[753]: Jul 07 01:42:42.505 WARN failed to locate config-drive, using the metadata service API instead
Jul 7 01:42:42.550020 coreos-metadata[753]: Jul 07 01:42:42.549 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jul 7 01:42:42.565151 coreos-metadata[753]: Jul 07 01:42:42.565 INFO Fetch successful
Jul 7 01:42:42.566701 coreos-metadata[753]: Jul 07 01:42:42.565 INFO wrote hostname ci-4081-3-4-7-c803550fde.novalocal to /sysroot/etc/hostname
Jul 7 01:42:42.571230 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jul 7 01:42:42.571819 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jul 7 01:42:42.582703 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 01:42:42.623900 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 01:42:42.643688 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (886)
Jul 7 01:42:42.651579 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 01:42:42.651652 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 01:42:42.655884 kernel: BTRFS info (device vda6): using free space tree
Jul 7 01:42:42.667573 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 7 01:42:42.672638 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 01:42:42.714194 ignition[904]: INFO : Ignition 2.19.0
Jul 7 01:42:42.715383 ignition[904]: INFO : Stage: files
Jul 7 01:42:42.716379 ignition[904]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 01:42:42.716379 ignition[904]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 7 01:42:42.718860 ignition[904]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 01:42:42.720831 ignition[904]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 01:42:42.720831 ignition[904]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 01:42:42.727119 ignition[904]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 01:42:42.729625 ignition[904]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 01:42:42.729625 ignition[904]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 01:42:42.727688 unknown[904]: wrote ssh authorized keys file for user: core
Jul 7 01:42:42.735661 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 7 01:42:42.735661 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jul 7 01:42:42.824029 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 7 01:42:43.269669 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 7 01:42:43.269669 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 01:42:43.269669 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 01:42:43.269669 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 01:42:43.269669 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 01:42:43.269669 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 01:42:43.283165 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 01:42:43.283165 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 01:42:43.283165 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 01:42:43.283165 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 01:42:43.283165 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 01:42:43.283165 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 01:42:43.283165 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 01:42:43.283165 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 01:42:43.283165 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jul 7 01:42:44.027932 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 7 01:42:45.920074 ignition[904]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 01:42:45.923979 ignition[904]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 7 01:42:45.950984 ignition[904]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 01:42:45.955389 ignition[904]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 01:42:45.955389 ignition[904]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 7 01:42:45.955389 ignition[904]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 01:42:45.955389 ignition[904]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 01:42:45.955389 ignition[904]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 01:42:45.955389 ignition[904]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 01:42:45.955389 ignition[904]: INFO : files: files passed
Jul 7 01:42:45.955389 ignition[904]: INFO : Ignition finished successfully
Jul 7 01:42:45.957782 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 01:42:45.971693 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 01:42:45.974746 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 01:42:46.009311 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 01:42:46.009415 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 01:42:46.025294 initrd-setup-root-after-ignition[933]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 01:42:46.025294 initrd-setup-root-after-ignition[933]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 01:42:46.029592 initrd-setup-root-after-ignition[937]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 01:42:46.031765 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 01:42:46.035312 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 01:42:46.049849 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 01:42:46.089688 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 01:42:46.089983 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 01:42:46.093123 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 01:42:46.095669 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 01:42:46.108690 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 01:42:46.114756 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 01:42:46.146074 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 01:42:46.154761 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 01:42:46.183733 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 01:42:46.185955 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 01:42:46.189557 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 01:42:46.192222 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 01:42:46.192587 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 01:42:46.195686 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 01:42:46.197997 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 01:42:46.200849 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 01:42:46.203187 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 01:42:46.205828 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 01:42:46.209063 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 01:42:46.212135 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 01:42:46.215615 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 01:42:46.218439 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 01:42:46.221295 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 01:42:46.223899 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 01:42:46.224337 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 01:42:46.227583 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 01:42:46.229424 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 01:42:46.231050 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 01:42:46.231900 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 01:42:46.233369 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 01:42:46.233601 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 01:42:46.236646 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 01:42:46.236837 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 01:42:46.237856 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 01:42:46.238028 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 01:42:46.248560 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 01:42:46.252747 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 01:42:46.253822 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 01:42:46.255591 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 01:42:46.258837 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 01:42:46.259572 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 01:42:46.270314 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 7 01:42:46.271405 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 7 01:42:46.291355 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 01:42:46.293618 ignition[957]: INFO : Ignition 2.19.0
Jul 7 01:42:46.293618 ignition[957]: INFO : Stage: umount
Jul 7 01:42:46.295516 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 01:42:46.295516 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 7 01:42:46.297790 ignition[957]: INFO : umount: umount passed
Jul 7 01:42:46.297790 ignition[957]: INFO : Ignition finished successfully
Jul 7 01:42:46.297410 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 7 01:42:46.297534 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 7 01:42:46.298858 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 01:42:46.298991 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 01:42:46.300256 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 01:42:46.300371 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 01:42:46.301505 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 01:42:46.301575 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 01:42:46.302439 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 7 01:42:46.302541 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 7 01:42:46.303499 systemd[1]: Stopped target network.target - Network.
Jul 7 01:42:46.304488 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 01:42:46.304577 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 01:42:46.305553 systemd[1]: Stopped target paths.target - Path Units.
Jul 7 01:42:46.306516 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 7 01:42:46.310495 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 01:42:46.311086 systemd[1]: Stopped target slices.target - Slice Units.
Jul 7 01:42:46.312085 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 7 01:42:46.313243 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 7 01:42:46.313283 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 01:42:46.314516 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 7 01:42:46.314553 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 01:42:46.315514 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 7 01:42:46.315585 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 7 01:42:46.316554 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 7 01:42:46.316598 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 7 01:42:46.317577 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 7 01:42:46.317621 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 7 01:42:46.318803 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 7 01:42:46.320123 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 7 01:42:46.326545 systemd-networkd[712]: eth0: DHCPv6 lease lost
Jul 7 01:42:46.328759 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 7 01:42:46.328949 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 7 01:42:46.331289 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 7 01:42:46.331664 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 7 01:42:46.334949 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 7 01:42:46.335025 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 01:42:46.338731 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 7 01:42:46.339873 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 7 01:42:46.339976 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 01:42:46.340630 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 01:42:46.340675 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 01:42:46.341202 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 7 01:42:46.341244 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 7 01:42:46.341815 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 7 01:42:46.341858 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 01:42:46.347260 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 01:42:46.358443 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 7 01:42:46.358598 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 7 01:42:46.360811 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 7 01:42:46.360953 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 01:42:46.362646 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 7 01:42:46.362695 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 7 01:42:46.363946 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 7 01:42:46.363980 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 01:42:46.365160 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 7 01:42:46.365210 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 01:42:46.366810 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 7 01:42:46.366854 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 7 01:42:46.367838 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 01:42:46.367914 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 01:42:46.377697 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 7 01:42:46.378565 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 7 01:42:46.378627 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 01:42:46.379231 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 7 01:42:46.379278 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 01:42:46.379842 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 7 01:42:46.379886 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 01:42:46.380519 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 01:42:46.380567 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 01:42:46.386108 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 7 01:42:46.386245 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 7 01:42:46.387389 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 7 01:42:46.393835 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 7 01:42:46.407171 systemd[1]: Switching root.
Jul 7 01:42:46.440966 systemd-journald[184]: Journal stopped
Jul 7 01:42:47.983850 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Jul 7 01:42:47.991680 kernel: SELinux: policy capability network_peer_controls=1
Jul 7 01:42:47.991734 kernel: SELinux: policy capability open_perms=1
Jul 7 01:42:47.991750 kernel: SELinux: policy capability extended_socket_class=1
Jul 7 01:42:47.991762 kernel: SELinux: policy capability always_check_network=0
Jul 7 01:42:47.991783 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 7 01:42:47.991805 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 7 01:42:47.991822 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 7 01:42:47.991839 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 7 01:42:47.991851 kernel: audit: type=1403 audit(1751852566.921:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 7 01:42:47.991864 systemd[1]: Successfully loaded SELinux policy in 87.329ms.
Jul 7 01:42:47.991950 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.094ms.
Jul 7 01:42:47.991967 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 7 01:42:47.991986 systemd[1]: Detected virtualization kvm.
Jul 7 01:42:47.992001 systemd[1]: Detected architecture x86-64.
Jul 7 01:42:47.992019 systemd[1]: Detected first boot.
Jul 7 01:42:47.992038 systemd[1]: Hostname set to .
Jul 7 01:42:47.992050 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 01:42:47.992067 zram_generator::config[1000]: No configuration found.
Jul 7 01:42:47.992085 systemd[1]: Populated /etc with preset unit settings.
Jul 7 01:42:47.992098 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 7 01:42:47.992120 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 7 01:42:47.992137 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 7 01:42:47.992157 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 7 01:42:47.992169 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 7 01:42:47.992182 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 7 01:42:47.992199 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 7 01:42:47.992212 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 7 01:42:47.992229 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 7 01:42:47.992242 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 7 01:42:47.992255 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 7 01:42:47.992270 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 01:42:47.992282 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 01:42:47.992294 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 7 01:42:47.992306 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 7 01:42:47.992319 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 7 01:42:47.992332 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 01:42:47.992354 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 7 01:42:47.992389 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 01:42:47.992402 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 7 01:42:47.992418 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 7 01:42:47.992430 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 7 01:42:47.992487 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 7 01:42:47.992523 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 01:42:47.992536 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 01:42:47.992548 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 01:42:47.992564 systemd[1]: Reached target swap.target - Swaps.
Jul 7 01:42:47.992577 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 7 01:42:47.992591 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 7 01:42:47.992603 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 01:42:47.992616 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 01:42:47.992629 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 01:42:47.992641 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 7 01:42:47.992659 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 7 01:42:47.992672 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 7 01:42:47.992687 systemd[1]: Mounting media.mount - External Media Directory...
Jul 7 01:42:47.992699 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 01:42:47.992711 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 7 01:42:47.992724 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 7 01:42:47.992736 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 7 01:42:47.992749 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 7 01:42:47.992762 systemd[1]: Reached target machines.target - Containers.
Jul 7 01:42:47.992774 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 7 01:42:47.992794 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 01:42:47.992807 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 01:42:47.992819 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 7 01:42:47.992832 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 01:42:47.992844 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 01:42:47.992856 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 01:42:47.992869 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 7 01:42:47.992881 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 01:42:47.992894 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 7 01:42:47.992908 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 7 01:42:47.992920 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 7 01:42:47.992933 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 7 01:42:47.992945 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 7 01:42:47.992957 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 01:42:47.992979 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 01:42:47.992992 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 01:42:47.993004 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 7 01:42:47.993021 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 01:42:47.993037 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 7 01:42:47.993049 systemd[1]: Stopped verity-setup.service.
Jul 7 01:42:47.993061 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 01:42:47.993074 kernel: loop: module loaded
Jul 7 01:42:47.993086 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 7 01:42:47.993098 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 7 01:42:47.993111 systemd[1]: Mounted media.mount - External Media Directory.
Jul 7 01:42:47.993124 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 7 01:42:47.993162 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 7 01:42:47.993176 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 7 01:42:47.993213 systemd-journald[1093]: Collecting audit messages is disabled.
Jul 7 01:42:47.993291 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 01:42:47.993325 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 7 01:42:47.993339 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 7 01:42:47.993352 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 01:42:47.993365 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 01:42:47.993398 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 01:42:47.993429 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 01:42:47.993443 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 01:42:47.996528 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 01:42:47.996546 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 01:42:47.996559 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 01:42:47.996571 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 01:42:47.996586 systemd-journald[1093]: Journal started
Jul 7 01:42:47.996618 systemd-journald[1093]: Runtime Journal (/run/log/journal/5a4cb8c532514c88bc56767442e2547b) is 8.0M, max 78.3M, 70.3M free.
Jul 7 01:42:48.001567 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 7 01:42:47.626846 systemd[1]: Queued start job for default target multi-user.target.
Jul 7 01:42:47.652518 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 7 01:42:47.652943 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 7 01:42:48.005519 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 01:42:48.017472 kernel: fuse: init (API version 7.39)
Jul 7 01:42:48.017514 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 01:42:48.023524 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 01:42:48.023564 kernel: ACPI: bus type drm_connector registered
Jul 7 01:42:48.027492 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 01:42:48.027885 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 7 01:42:48.028409 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 7 01:42:48.029246 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 7 01:42:48.030663 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 01:42:48.030805 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 01:42:48.031440 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 7 01:42:48.040702 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 7 01:42:48.061282 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 7 01:42:48.061912 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 7 01:42:48.061957 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 01:42:48.065746 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 7 01:42:48.075647 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 7 01:42:48.078621 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 7 01:42:48.080072 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 01:42:48.083603 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 7 01:42:48.085203 systemd-tmpfiles[1110]: ACLs are not supported, ignoring.
Jul 7 01:42:48.085550 systemd-tmpfiles[1110]: ACLs are not supported, ignoring.
Jul 7 01:42:48.091690 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 7 01:42:48.092338 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 01:42:48.096408 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 7 01:42:48.099305 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 7 01:42:48.104490 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 01:42:48.107997 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 01:42:48.109124 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 7 01:42:48.110582 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 7 01:42:48.127592 systemd-journald[1093]: Time spent on flushing to /var/log/journal/5a4cb8c532514c88bc56767442e2547b is 66.300ms for 948 entries.
Jul 7 01:42:48.127592 systemd-journald[1093]: System Journal (/var/log/journal/5a4cb8c532514c88bc56767442e2547b) is 8.0M, max 584.8M, 576.8M free.
Jul 7 01:42:48.215468 systemd-journald[1093]: Received client request to flush runtime journal.
Jul 7 01:42:48.215522 kernel: loop0: detected capacity change from 0 to 142488
Jul 7 01:42:48.125829 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 7 01:42:48.126739 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 01:42:48.137691 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 7 01:42:48.177801 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 7 01:42:48.182117 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 7 01:42:48.191229 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 7 01:42:48.194276 udevadm[1145]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 7 01:42:48.221209 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 7 01:42:48.239494 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 7 01:42:48.270393 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 7 01:42:48.271492 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 7 01:42:48.274489 kernel: loop1: detected capacity change from 0 to 140768
Jul 7 01:42:48.275124 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 7 01:42:48.286802 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 01:42:48.335686 systemd-tmpfiles[1155]: ACLs are not supported, ignoring.
Jul 7 01:42:48.335706 systemd-tmpfiles[1155]: ACLs are not supported, ignoring.
Jul 7 01:42:48.346213 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 01:42:48.350674 kernel: loop2: detected capacity change from 0 to 224512
Jul 7 01:42:48.407728 kernel: loop3: detected capacity change from 0 to 8
Jul 7 01:42:48.429554 kernel: loop4: detected capacity change from 0 to 142488
Jul 7 01:42:48.530490 kernel: loop5: detected capacity change from 0 to 140768
Jul 7 01:42:48.572489 kernel: loop6: detected capacity change from 0 to 224512
Jul 7 01:42:48.634602 kernel: loop7: detected capacity change from 0 to 8
Jul 7 01:42:48.633650 (sd-merge)[1161]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jul 7 01:42:48.634118 (sd-merge)[1161]: Merged extensions into '/usr'.
Jul 7 01:42:48.643607 systemd[1]: Reloading requested from client PID 1138 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 7 01:42:48.643644 systemd[1]: Reloading...
Jul 7 01:42:48.726487 zram_generator::config[1183]: No configuration found.
Jul 7 01:42:49.034201 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 01:42:49.089229 ldconfig[1133]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 7 01:42:49.093977 systemd[1]: Reloading finished in 449 ms.
Jul 7 01:42:49.130582 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 7 01:42:49.131666 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 7 01:42:49.132559 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 7 01:42:49.141624 systemd[1]: Starting ensure-sysext.service...
Jul 7 01:42:49.143663 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 01:42:49.152645 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 01:42:49.177590 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)...
Jul 7 01:42:49.177606 systemd[1]: Reloading...
Jul 7 01:42:49.193937 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 7 01:42:49.194295 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 7 01:42:49.195199 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 7 01:42:49.198209 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Jul 7 01:42:49.198346 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Jul 7 01:42:49.201920 systemd-udevd[1247]: Using default interface naming scheme 'v255'.
Jul 7 01:42:49.204028 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 01:42:49.204566 systemd-tmpfiles[1245]: Skipping /boot
Jul 7 01:42:49.217045 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 01:42:49.217058 systemd-tmpfiles[1245]: Skipping /boot
Jul 7 01:42:49.289486 zram_generator::config[1289]: No configuration found.
Jul 7 01:42:49.402482 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1295)
Jul 7 01:42:49.428487 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jul 7 01:42:49.437482 kernel: ACPI: button: Power Button [PWRF]
Jul 7 01:42:49.460481 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jul 7 01:42:49.491481 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 7 01:42:49.550945 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 01:42:49.555663 kernel: mousedev: PS/2 mouse device common for all mice
Jul 7 01:42:49.580089 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jul 7 01:42:49.580193 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jul 7 01:42:49.584822 kernel: Console: switching to colour dummy device 80x25
Jul 7 01:42:49.586747 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jul 7 01:42:49.586791 kernel: [drm] features: -context_init
Jul 7 01:42:49.588835 kernel: [drm] number of scanouts: 1
Jul 7 01:42:49.588881 kernel: [drm] number of cap sets: 0
Jul 7 01:42:49.593556 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jul 7 01:42:49.604468 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jul 7 01:42:49.604570 kernel: Console: switching to colour frame buffer device 160x50
Jul 7 01:42:49.611017 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jul 7 01:42:49.633212 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 7 01:42:49.636648 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 7 01:42:49.636882 systemd[1]: Reloading finished in 458 ms.
Jul 7 01:42:49.655050 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 01:42:49.655680 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 01:42:49.697978 systemd[1]: Finished ensure-sysext.service.
Jul 7 01:42:49.703761 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 01:42:49.711725 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 7 01:42:49.714714 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 7 01:42:49.716759 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 01:42:49.723757 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 01:42:49.728268 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 01:42:49.737553 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 01:42:49.739084 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 01:42:49.739299 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 01:42:49.741064 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 7 01:42:49.754803 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 7 01:42:49.758685 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 01:42:49.766844 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 01:42:49.772714 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 7 01:42:49.777734 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 7 01:42:49.779717 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 01:42:49.780814 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 01:42:49.785379 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 01:42:49.785624 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 01:42:49.786957 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 01:42:49.787078 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 01:42:49.790376 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 01:42:49.791727 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 01:42:49.795929 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 7 01:42:49.802547 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 7 01:42:49.814736 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 7 01:42:49.820388 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 01:42:49.825640 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 7 01:42:49.838265 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 01:42:49.838537 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 01:42:49.843375 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 01:42:49.859597 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 7 01:42:49.873221 lvm[1392]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 7 01:42:49.877969 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 7 01:42:49.884153 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 7 01:42:49.907273 augenrules[1405]: No rules
Jul 7 01:42:49.910772 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 7 01:42:49.915189 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 7 01:42:49.920723 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 7 01:42:49.930315 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 01:42:49.942767 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 7 01:42:49.949272 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 7 01:42:49.953920 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 7 01:42:49.966923 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 7 01:42:49.972944 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 7 01:42:49.986537 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 7 01:42:50.032562 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 01:42:50.063466 systemd-resolved[1379]: Positive Trust Anchors:
Jul 7 01:42:50.063489 systemd-resolved[1379]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 01:42:50.063535 systemd-resolved[1379]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 01:42:50.069555 systemd-networkd[1378]: lo: Link UP
Jul 7 01:42:50.069565 systemd-networkd[1378]: lo: Gained carrier
Jul 7 01:42:50.071044 systemd-networkd[1378]: Enumeration completed
Jul 7 01:42:50.071163 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 01:42:50.073096 systemd-resolved[1379]: Using system hostname 'ci-4081-3-4-7-c803550fde.novalocal'.
Jul 7 01:42:50.077799 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 01:42:50.077813 systemd-networkd[1378]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 01:42:50.079888 systemd-networkd[1378]: eth0: Link UP
Jul 7 01:42:50.079898 systemd-networkd[1378]: eth0: Gained carrier
Jul 7 01:42:50.079933 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 01:42:50.082693 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 7 01:42:50.088325 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 01:42:50.089074 systemd[1]: Reached target network.target - Network.
Jul 7 01:42:50.092606 systemd-networkd[1378]: eth0: DHCPv4 address 172.24.4.32/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jul 7 01:42:50.093497 systemd-timesyncd[1381]: Network configuration changed, trying to establish connection.
Jul 7 01:42:50.093565 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 01:42:50.094177 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 7 01:42:50.100103 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 01:42:50.101224 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 7 01:42:50.101778 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 7 01:42:50.102284 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 7 01:42:50.104846 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 7 01:42:50.104889 systemd[1]: Reached target paths.target - Path Units.
Jul 7 01:42:50.105406 systemd[1]: Reached target time-set.target - System Time Set.
Jul 7 01:42:50.106805 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 7 01:42:50.109640 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 7 01:42:50.111843 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 01:42:50.939489 systemd-timesyncd[1381]: Contacted time server 162.159.200.1:123 (0.flatcar.pool.ntp.org).
Jul 7 01:42:50.939559 systemd-timesyncd[1381]: Initial clock synchronization to Mon 2025-07-07 01:42:50.939353 UTC.
Jul 7 01:42:50.939845 systemd-resolved[1379]: Clock change detected. Flushing caches.
Jul 7 01:42:50.942522 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 7 01:42:50.949109 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 7 01:42:50.957041 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 7 01:42:50.958537 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 7 01:42:50.961320 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 01:42:50.961884 systemd[1]: Reached target basic.target - Basic System.
Jul 7 01:42:50.965940 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 7 01:42:50.965979 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 7 01:42:50.971393 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 7 01:42:50.975331 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 7 01:42:50.984507 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 7 01:42:50.992439 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 7 01:42:51.002087 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 7 01:42:51.004025 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 7 01:42:51.008700 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 7 01:42:51.020491 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 7 01:42:51.022372 jq[1434]: false
Jul 7 01:42:51.032508 dbus-daemon[1433]: [system] SELinux support is enabled
Jul 7 01:42:51.036615 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 7 01:42:51.041614 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 7 01:42:51.055882 extend-filesystems[1436]: Found loop4
Jul 7 01:42:51.055882 extend-filesystems[1436]: Found loop5
Jul 7 01:42:51.055882 extend-filesystems[1436]: Found loop6
Jul 7 01:42:51.055882 extend-filesystems[1436]: Found loop7
Jul 7 01:42:51.055882 extend-filesystems[1436]: Found vda
Jul 7 01:42:51.055882 extend-filesystems[1436]: Found vda1
Jul 7 01:42:51.055882 extend-filesystems[1436]: Found vda2
Jul 7 01:42:51.055882 extend-filesystems[1436]: Found vda3
Jul 7 01:42:51.055882 extend-filesystems[1436]: Found usr
Jul 7 01:42:51.055882 extend-filesystems[1436]: Found vda4
Jul 7 01:42:51.055882 extend-filesystems[1436]: Found vda6
Jul 7 01:42:51.055882 extend-filesystems[1436]: Found vda7
Jul 7 01:42:51.055882 extend-filesystems[1436]: Found vda9
Jul 7 01:42:51.055882 extend-filesystems[1436]: Checking size of /dev/vda9
Jul 7 01:42:51.175155 extend-filesystems[1436]: Resized partition /dev/vda9
Jul 7 01:42:51.056653 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 7 01:42:51.176162 extend-filesystems[1462]: resize2fs 1.47.1 (20-May-2024)
Jul 7 01:42:51.257714 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
Jul 7 01:42:51.257777 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1279)
Jul 7 01:42:51.257798 kernel: EXT4-fs (vda9): resized filesystem to 2014203
Jul 7 01:42:51.057871 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 7 01:42:51.062544 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 7 01:42:51.064492 systemd[1]: Starting update-engine.service - Update Engine...
Jul 7 01:42:51.258306 jq[1454]: true
Jul 7 01:42:51.080948 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 7 01:42:51.088829 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 7 01:42:51.258679 update_engine[1451]: I20250707 01:42:51.196949 1451 main.cc:92] Flatcar Update Engine starting
Jul 7 01:42:51.258679 update_engine[1451]: I20250707 01:42:51.202567 1451 update_check_scheduler.cc:74] Next update check in 9m45s
Jul 7 01:42:51.105098 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 7 01:42:51.259029 tar[1458]: linux-amd64/LICENSE
Jul 7 01:42:51.259029 tar[1458]: linux-amd64/helm
Jul 7 01:42:51.106239 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 7 01:42:51.266470 jq[1461]: true
Jul 7 01:42:51.266679 extend-filesystems[1462]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 7 01:42:51.266679 extend-filesystems[1462]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 7 01:42:51.266679 extend-filesystems[1462]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
Jul 7 01:42:51.106600 systemd[1]: motdgen.service: Deactivated successfully.
Jul 7 01:42:51.283883 extend-filesystems[1436]: Resized filesystem in /dev/vda9
Jul 7 01:42:51.108849 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 7 01:42:51.123004 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 7 01:42:51.124056 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 7 01:42:51.156353 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 01:42:51.156386 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 01:42:51.161340 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 01:42:51.161378 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 01:42:51.189444 systemd-logind[1449]: New seat seat0. Jul 7 01:42:51.202437 systemd[1]: Started update-engine.service - Update Engine. Jul 7 01:42:51.224066 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 01:42:51.224786 (ntainerd)[1463]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 01:42:51.259763 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button) Jul 7 01:42:51.259781 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 7 01:42:51.260480 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 01:42:51.277595 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 01:42:51.277798 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 01:42:51.317999 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 01:42:51.377533 bash[1492]: Updated "/home/core/.ssh/authorized_keys" Jul 7 01:42:51.380703 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 01:42:51.395649 systemd[1]: Starting sshkeys.service... Jul 7 01:42:51.428425 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 7 01:42:51.437715 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 7 01:42:51.523749 locksmithd[1472]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 01:42:51.702315 containerd[1463]: time="2025-07-07T01:42:51.701670851Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 7 01:42:51.773299 containerd[1463]: time="2025-07-07T01:42:51.772740702Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 7 01:42:51.780659 containerd[1463]: time="2025-07-07T01:42:51.778745775Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 7 01:42:51.780659 containerd[1463]: time="2025-07-07T01:42:51.778792883Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 7 01:42:51.780659 containerd[1463]: time="2025-07-07T01:42:51.778815876Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 7 01:42:51.780659 containerd[1463]: time="2025-07-07T01:42:51.779026120Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jul 7 01:42:51.780659 containerd[1463]: time="2025-07-07T01:42:51.779047520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 7 01:42:51.780659 containerd[1463]: time="2025-07-07T01:42:51.779119996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 01:42:51.780659 containerd[1463]: time="2025-07-07T01:42:51.779137109Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 7 01:42:51.780659 containerd[1463]: time="2025-07-07T01:42:51.779374574Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 01:42:51.780659 containerd[1463]: time="2025-07-07T01:42:51.779396735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 7 01:42:51.780659 containerd[1463]: time="2025-07-07T01:42:51.779419107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 01:42:51.780659 containerd[1463]: time="2025-07-07T01:42:51.779434386Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 7 01:42:51.780991 containerd[1463]: time="2025-07-07T01:42:51.779534824Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 7 01:42:51.780991 containerd[1463]: time="2025-07-07T01:42:51.779769094Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 7 01:42:51.780991 containerd[1463]: time="2025-07-07T01:42:51.779886825Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 01:42:51.780991 containerd[1463]: time="2025-07-07T01:42:51.779905369Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 7 01:42:51.780991 containerd[1463]: time="2025-07-07T01:42:51.780007000Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 7 01:42:51.780991 containerd[1463]: time="2025-07-07T01:42:51.780059809Z" level=info msg="metadata content store policy set" policy=shared Jul 7 01:42:51.796865 containerd[1463]: time="2025-07-07T01:42:51.796829130Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 7 01:42:51.797027 containerd[1463]: time="2025-07-07T01:42:51.797006833Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 7 01:42:51.797179 containerd[1463]: time="2025-07-07T01:42:51.797163377Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 7 01:42:51.797522 containerd[1463]: time="2025-07-07T01:42:51.797505378Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jul 7 01:42:51.797602 containerd[1463]: time="2025-07-07T01:42:51.797585579Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 7 01:42:51.797812 containerd[1463]: time="2025-07-07T01:42:51.797790313Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 7 01:42:51.800846 containerd[1463]: time="2025-07-07T01:42:51.799225494Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 7 01:42:51.800846 containerd[1463]: time="2025-07-07T01:42:51.799383190Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 7 01:42:51.800846 containerd[1463]: time="2025-07-07T01:42:51.799413236Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 7 01:42:51.800846 containerd[1463]: time="2025-07-07T01:42:51.799435348Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 7 01:42:51.800846 containerd[1463]: time="2025-07-07T01:42:51.799456498Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 7 01:42:51.800846 containerd[1463]: time="2025-07-07T01:42:51.799477327Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 7 01:42:51.800846 containerd[1463]: time="2025-07-07T01:42:51.799497665Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 7 01:42:51.800846 containerd[1463]: time="2025-07-07T01:42:51.799521399Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 7 01:42:51.800846 containerd[1463]: time="2025-07-07T01:42:51.799539974Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 7 01:42:51.800846 containerd[1463]: time="2025-07-07T01:42:51.799563388Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 7 01:42:51.800846 containerd[1463]: time="2025-07-07T01:42:51.799585580Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 7 01:42:51.800846 containerd[1463]: time="2025-07-07T01:42:51.799606258Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 7 01:42:51.800846 containerd[1463]: time="2025-07-07T01:42:51.799636305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 7 01:42:51.800846 containerd[1463]: time="2025-07-07T01:42:51.799657535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 7 01:42:51.801155 containerd[1463]: time="2025-07-07T01:42:51.799678394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 7 01:42:51.801155 containerd[1463]: time="2025-07-07T01:42:51.799699433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 7 01:42:51.801155 containerd[1463]: time="2025-07-07T01:42:51.799715353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jul 7 01:42:51.801155 containerd[1463]: time="2025-07-07T01:42:51.799736753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 7 01:42:51.801155 containerd[1463]: time="2025-07-07T01:42:51.799756831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 7 01:42:51.801155 containerd[1463]: time="2025-07-07T01:42:51.799778662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 7 01:42:51.801155 containerd[1463]: time="2025-07-07T01:42:51.799802957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 7 01:42:51.801155 containerd[1463]: time="2025-07-07T01:42:51.799827313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 7 01:42:51.801155 containerd[1463]: time="2025-07-07T01:42:51.799847210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 7 01:42:51.801155 containerd[1463]: time="2025-07-07T01:42:51.799863511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 7 01:42:51.801155 containerd[1463]: time="2025-07-07T01:42:51.799888959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 7 01:42:51.801155 containerd[1463]: time="2025-07-07T01:42:51.799932360Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 7 01:42:51.801155 containerd[1463]: time="2025-07-07T01:42:51.799966935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 7 01:42:51.801155 containerd[1463]: time="2025-07-07T01:42:51.800231741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 7 01:42:51.801155 containerd[1463]: time="2025-07-07T01:42:51.800257129Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 7 01:42:51.805126 containerd[1463]: time="2025-07-07T01:42:51.805073412Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 7 01:42:51.805191 containerd[1463]: time="2025-07-07T01:42:51.805161617Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 7 01:42:51.805191 containerd[1463]: time="2025-07-07T01:42:51.805187125Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 7 01:42:51.805255 containerd[1463]: time="2025-07-07T01:42:51.805216170Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 7 01:42:51.805255 containerd[1463]: time="2025-07-07T01:42:51.805230777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 7 01:42:51.805322 containerd[1463]: time="2025-07-07T01:42:51.805253139Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 7 01:42:51.805322 containerd[1463]: time="2025-07-07T01:42:51.805268207Z" level=info msg="NRI interface is disabled by configuration." 
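The snapshotter skips earlier in this boot (aufs, blockfile, btrfs, devmapper, zfs) come from containerd probing each snapshotter's root directory and its backing filesystem before registering the plugin. A minimal sketch of the same filesystem check in Python — the path is the one named in the log, but the lookup below is a simplification for illustration, not containerd's actual Go implementation:

    # Sketch: report the filesystem type backing a snapshotter root,
    # mirroring containerd's "must be a btrfs filesystem" skip logic.
    # Prefix matching against /proc/self/mounts is deliberately naive.
    def fs_type(path):
        # Longest matching mount point wins.
        best = ("", "unknown")
        with open("/proc/self/mounts") as mounts:
            for line in mounts:
                _dev, mnt, fstype = line.split()[:3]
                if path.startswith(mnt) and len(mnt) > len(best[0]):
                    best = (mnt, fstype)
        return best[1]

    if __name__ == "__main__":
        root = "/var/lib/containerd/io.containerd.snapshotter.v1.btrfs"
        print(root, "is backed by", fs_type(root))  # ext4 here, so btrfs is skipped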
Jul 7 01:42:51.805322 containerd[1463]: time="2025-07-07T01:42:51.805293314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 7 01:42:51.807624 containerd[1463]: time="2025-07-07T01:42:51.807518969Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 7 01:42:51.807811 containerd[1463]: time="2025-07-07T01:42:51.807643893Z" level=info msg="Connect containerd service" Jul 7 01:42:51.807811 containerd[1463]: time="2025-07-07T01:42:51.807718683Z" level=info msg="using legacy CRI server" Jul 7 01:42:51.807811 containerd[1463]: time="2025-07-07T01:42:51.807731217Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 01:42:51.807932 containerd[1463]: time="2025-07-07T01:42:51.807891928Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 7 01:42:51.810735 containerd[1463]: time="2025-07-07T01:42:51.810682291Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni 
config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 01:42:51.810935 containerd[1463]: time="2025-07-07T01:42:51.810900391Z" level=info msg="Start subscribing containerd event" Jul 7 01:42:51.811150 containerd[1463]: time="2025-07-07T01:42:51.811076631Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 01:42:51.811244 containerd[1463]: time="2025-07-07T01:42:51.811216473Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 01:42:51.811279 containerd[1463]: time="2025-07-07T01:42:51.811135812Z" level=info msg="Start recovering state" Jul 7 01:42:51.811393 containerd[1463]: time="2025-07-07T01:42:51.811368328Z" level=info msg="Start event monitor" Jul 7 01:42:51.811429 containerd[1463]: time="2025-07-07T01:42:51.811397633Z" level=info msg="Start snapshots syncer" Jul 7 01:42:51.811429 containerd[1463]: time="2025-07-07T01:42:51.811409596Z" level=info msg="Start cni network conf syncer for default" Jul 7 01:42:51.811429 containerd[1463]: time="2025-07-07T01:42:51.811420095Z" level=info msg="Start streaming server" Jul 7 01:42:51.811605 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 01:42:51.814505 containerd[1463]: time="2025-07-07T01:42:51.813685524Z" level=info msg="containerd successfully booted in 0.114570s" Jul 7 01:42:51.982431 systemd-networkd[1378]: eth0: Gained IPv6LL Jul 7 01:42:51.985370 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 01:42:51.989763 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 01:42:52.002625 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 01:42:52.006790 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 01:42:52.025238 tar[1458]: linux-amd64/README.md Jul 7 01:42:52.050741 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 01:42:52.067719 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 01:42:52.134217 sshd_keygen[1455]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 01:42:52.167154 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 01:42:52.181655 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 01:42:52.185948 systemd[1]: Started sshd@0-172.24.4.32:22-172.24.4.1:48890.service - OpenSSH per-connection server daemon (172.24.4.1:48890). Jul 7 01:42:52.200175 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 01:42:52.200388 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 01:42:52.211821 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 01:42:52.232908 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 01:42:52.245828 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 01:42:52.252708 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 7 01:42:52.257741 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 01:42:53.101544 sshd[1533]: Accepted publickey for core from 172.24.4.1 port 48890 ssh2: RSA SHA256:XsuxnznZdnYTCwjlq8QZL4w9hmgGain8XiwOK8NYnOI Jul 7 01:42:53.105067 sshd[1533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:42:53.131846 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 01:42:53.132277 systemd-logind[1449]: New session 1 of user core. 
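The "failed to load cni during init" error above is expected at this point: /etc/cni/net.d (the NetworkPluginConfDir from the CRI config dump) is still empty, and a cluster network add-on normally installs a conflist there later. A hedged sketch of dropping in a minimal bridge conflist — the network name, subnet, and plugin choice are illustrative assumptions, not what this node eventually used:

    # Sketch: write a minimal CNI conflist into the directory the CRI
    # plugin watches (/etc/cni/net.d per the config above). A real
    # cluster's add-on (flannel, calico, ...) writes its own file here.
    import json, pathlib

    conflist = {
        "cniVersion": "0.4.0",
        "name": "example-net",          # assumption: illustrative name
        "plugins": [
            {"type": "bridge", "bridge": "cni0", "isGateway": True,
             "ipMasq": True,
             "ipam": {"type": "host-local",
                      "ranges": [[{"subnet": "10.88.0.0/16"}]]}},
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }
    path = pathlib.Path("/etc/cni/net.d/10-example.conflist")
    path.write_text(json.dumps(conflist, indent=2))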
Jul 7 01:42:53.139681 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 01:42:53.196003 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 01:42:53.215881 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 01:42:53.241229 (systemd)[1545]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 01:42:53.385930 systemd[1545]: Queued start job for default target default.target. Jul 7 01:42:53.397256 systemd[1545]: Created slice app.slice - User Application Slice. Jul 7 01:42:53.397628 systemd[1545]: Reached target paths.target - Paths. Jul 7 01:42:53.397651 systemd[1545]: Reached target timers.target - Timers. Jul 7 01:42:53.400420 systemd[1545]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 01:42:53.411207 systemd[1545]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 01:42:53.411270 systemd[1545]: Reached target sockets.target - Sockets. Jul 7 01:42:53.411325 systemd[1545]: Reached target basic.target - Basic System. Jul 7 01:42:53.411474 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 01:42:53.411692 systemd[1545]: Reached target default.target - Main User Target. Jul 7 01:42:53.411729 systemd[1545]: Startup finished in 158ms. Jul 7 01:42:53.425623 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 01:42:53.883247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 01:42:53.894103 (kubelet)[1559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 01:42:53.918146 systemd[1]: Started sshd@1-172.24.4.32:22-172.24.4.1:39202.service - OpenSSH per-connection server daemon (172.24.4.1:39202). Jul 7 01:42:55.162975 kubelet[1559]: E0707 01:42:55.162837 1559 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 01:42:55.167943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 01:42:55.168239 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 01:42:55.169099 systemd[1]: kubelet.service: Consumed 1.974s CPU time. Jul 7 01:42:55.473194 sshd[1562]: Accepted publickey for core from 172.24.4.1 port 39202 ssh2: RSA SHA256:XsuxnznZdnYTCwjlq8QZL4w9hmgGain8XiwOK8NYnOI Jul 7 01:42:55.477089 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:42:55.489221 systemd-logind[1449]: New session 2 of user core. Jul 7 01:42:55.501172 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 01:42:56.071116 sshd[1562]: pam_unix(sshd:session): session closed for user core Jul 7 01:42:56.084816 systemd[1]: sshd@1-172.24.4.32:22-172.24.4.1:39202.service: Deactivated successfully. Jul 7 01:42:56.088660 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 01:42:56.092625 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Jul 7 01:42:56.104070 systemd[1]: Started sshd@2-172.24.4.32:22-172.24.4.1:39208.service - OpenSSH per-connection server daemon (172.24.4.1:39208). Jul 7 01:42:56.111480 systemd-logind[1449]: Removed session 2. 
Jul 7 01:42:57.240331 sshd[1577]: Accepted publickey for core from 172.24.4.1 port 39208 ssh2: RSA SHA256:XsuxnznZdnYTCwjlq8QZL4w9hmgGain8XiwOK8NYnOI Jul 7 01:42:57.244638 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:42:57.264497 systemd-logind[1449]: New session 3 of user core. Jul 7 01:42:57.272981 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 01:42:57.329979 login[1540]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 7 01:42:57.336034 login[1541]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 7 01:42:57.344035 systemd-logind[1449]: New session 5 of user core. Jul 7 01:42:57.356546 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 01:42:57.360631 systemd-logind[1449]: New session 4 of user core. Jul 7 01:42:57.367473 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 01:42:57.848951 sshd[1577]: pam_unix(sshd:session): session closed for user core Jul 7 01:42:57.857143 systemd[1]: sshd@2-172.24.4.32:22-172.24.4.1:39208.service: Deactivated successfully. Jul 7 01:42:57.861099 systemd[1]: session-3.scope: Deactivated successfully. Jul 7 01:42:57.862911 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Jul 7 01:42:57.865618 systemd-logind[1449]: Removed session 3. Jul 7 01:42:58.085807 coreos-metadata[1432]: Jul 07 01:42:58.085 WARN failed to locate config-drive, using the metadata service API instead Jul 7 01:42:58.139183 coreos-metadata[1432]: Jul 07 01:42:58.138 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jul 7 01:42:58.532074 coreos-metadata[1432]: Jul 07 01:42:58.531 INFO Fetch successful Jul 7 01:42:58.532667 coreos-metadata[1432]: Jul 07 01:42:58.532 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 7 01:42:58.546072 coreos-metadata[1432]: Jul 07 01:42:58.545 INFO Fetch successful Jul 7 01:42:58.546460 coreos-metadata[1432]: Jul 07 01:42:58.546 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jul 7 01:42:58.549089 coreos-metadata[1495]: Jul 07 01:42:58.549 WARN failed to locate config-drive, using the metadata service API instead Jul 7 01:42:58.560575 coreos-metadata[1432]: Jul 07 01:42:58.560 INFO Fetch successful Jul 7 01:42:58.560575 coreos-metadata[1432]: Jul 07 01:42:58.560 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jul 7 01:42:58.574058 coreos-metadata[1432]: Jul 07 01:42:58.573 INFO Fetch successful Jul 7 01:42:58.574249 coreos-metadata[1432]: Jul 07 01:42:58.574 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jul 7 01:42:58.588350 coreos-metadata[1432]: Jul 07 01:42:58.588 INFO Fetch successful Jul 7 01:42:58.588350 coreos-metadata[1432]: Jul 07 01:42:58.588 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jul 7 01:42:58.591615 coreos-metadata[1495]: Jul 07 01:42:58.591 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jul 7 01:42:58.602585 coreos-metadata[1432]: Jul 07 01:42:58.602 INFO Fetch successful Jul 7 01:42:58.610645 coreos-metadata[1495]: Jul 07 01:42:58.610 INFO Fetch successful Jul 7 01:42:58.610645 coreos-metadata[1495]: Jul 07 01:42:58.610 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 7 01:42:58.624912 coreos-metadata[1495]: Jul 07 01:42:58.624 INFO Fetch successful Jul 7 01:42:58.632031 
unknown[1495]: wrote ssh authorized keys file for user: core Jul 7 01:42:58.652869 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 7 01:42:58.657735 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 01:42:58.684164 update-ssh-keys[1619]: Updated "/home/core/.ssh/authorized_keys" Jul 7 01:42:58.685372 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 7 01:42:58.690323 systemd[1]: Finished sshkeys.service. Jul 7 01:42:58.696510 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 01:42:58.696940 systemd[1]: Startup finished in 2.083s (kernel) + 16.135s (initrd) + 11.035s (userspace) = 29.254s. Jul 7 01:43:05.253901 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 01:43:05.296599 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 01:43:05.733678 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 01:43:05.734147 (kubelet)[1633]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 01:43:05.846485 kubelet[1633]: E0707 01:43:05.846381 1633 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 01:43:05.856235 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 01:43:05.856488 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 01:43:07.884963 systemd[1]: Started sshd@3-172.24.4.32:22-172.24.4.1:45538.service - OpenSSH per-connection server daemon (172.24.4.1:45538). Jul 7 01:43:09.021805 sshd[1641]: Accepted publickey for core from 172.24.4.1 port 45538 ssh2: RSA SHA256:XsuxnznZdnYTCwjlq8QZL4w9hmgGain8XiwOK8NYnOI Jul 7 01:43:09.026161 sshd[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:43:09.044785 systemd-logind[1449]: New session 6 of user core. Jul 7 01:43:09.056677 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 01:43:09.760770 sshd[1641]: pam_unix(sshd:session): session closed for user core Jul 7 01:43:09.770773 systemd[1]: sshd@3-172.24.4.32:22-172.24.4.1:45538.service: Deactivated successfully. Jul 7 01:43:09.774378 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 01:43:09.776455 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Jul 7 01:43:09.783888 systemd[1]: Started sshd@4-172.24.4.32:22-172.24.4.1:45552.service - OpenSSH per-connection server daemon (172.24.4.1:45552). Jul 7 01:43:09.786630 systemd-logind[1449]: Removed session 6. Jul 7 01:43:11.248810 sshd[1648]: Accepted publickey for core from 172.24.4.1 port 45552 ssh2: RSA SHA256:XsuxnznZdnYTCwjlq8QZL4w9hmgGain8XiwOK8NYnOI Jul 7 01:43:11.252220 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:43:11.267387 systemd-logind[1449]: New session 7 of user core. Jul 7 01:43:11.281630 systemd[1]: Started session-7.scope - Session 7 of User core. 
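Each kubelet start in this section exits with status 1 for the same reason: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-style bootstrap that file is written by kubeadm init/join, so the unit simply crash-loops until then. A minimal KubeletConfiguration sketch of the kind of file whose absence causes the loop — field values here are illustrative, and in practice the real file comes from kubeadm, not from hand-writing it like this:

    # Sketch: hand-write a minimal /var/lib/kubelet/config.yaml.
    # For illustration only; kubeadm normally generates this file.
    import pathlib, textwrap

    config = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        cgroupDriver: systemd
        containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
        staticPodPath: /etc/kubernetes/manifests
    """)
    pathlib.Path("/var/lib/kubelet").mkdir(parents=True, exist_ok=True)
    pathlib.Path("/var/lib/kubelet/config.yaml").write_text(config)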
Jul 7 01:43:11.888243 sshd[1648]: pam_unix(sshd:session): session closed for user core Jul 7 01:43:11.907905 systemd[1]: sshd@4-172.24.4.32:22-172.24.4.1:45552.service: Deactivated successfully. Jul 7 01:43:11.911264 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 01:43:11.913079 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. Jul 7 01:43:11.924965 systemd[1]: Started sshd@5-172.24.4.32:22-172.24.4.1:45566.service - OpenSSH per-connection server daemon (172.24.4.1:45566). Jul 7 01:43:11.927517 systemd-logind[1449]: Removed session 7. Jul 7 01:43:13.018463 sshd[1655]: Accepted publickey for core from 172.24.4.1 port 45566 ssh2: RSA SHA256:XsuxnznZdnYTCwjlq8QZL4w9hmgGain8XiwOK8NYnOI Jul 7 01:43:13.022027 sshd[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:43:13.034603 systemd-logind[1449]: New session 8 of user core. Jul 7 01:43:13.047643 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 01:43:13.639775 sshd[1655]: pam_unix(sshd:session): session closed for user core Jul 7 01:43:13.653016 systemd[1]: sshd@5-172.24.4.32:22-172.24.4.1:45566.service: Deactivated successfully. Jul 7 01:43:13.657991 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 01:43:13.660415 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. Jul 7 01:43:13.688244 systemd[1]: Started sshd@6-172.24.4.32:22-172.24.4.1:35626.service - OpenSSH per-connection server daemon (172.24.4.1:35626). Jul 7 01:43:13.691270 systemd-logind[1449]: Removed session 8. Jul 7 01:43:14.831992 sshd[1662]: Accepted publickey for core from 172.24.4.1 port 35626 ssh2: RSA SHA256:XsuxnznZdnYTCwjlq8QZL4w9hmgGain8XiwOK8NYnOI Jul 7 01:43:14.835593 sshd[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:43:14.853763 systemd-logind[1449]: New session 9 of user core. Jul 7 01:43:14.861700 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 01:43:15.343528 sudo[1665]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 01:43:15.344278 sudo[1665]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 01:43:15.373846 sudo[1665]: pam_unix(sudo:session): session closed for user root Jul 7 01:43:15.571017 sshd[1662]: pam_unix(sshd:session): session closed for user core Jul 7 01:43:15.582397 systemd[1]: sshd@6-172.24.4.32:22-172.24.4.1:35626.service: Deactivated successfully. Jul 7 01:43:15.585977 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 01:43:15.588232 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit. Jul 7 01:43:15.598924 systemd[1]: Started sshd@7-172.24.4.32:22-172.24.4.1:35636.service - OpenSSH per-connection server daemon (172.24.4.1:35636). Jul 7 01:43:15.603185 systemd-logind[1449]: Removed session 9. Jul 7 01:43:15.998898 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 7 01:43:16.006774 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 01:43:16.467266 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 01:43:16.481700 (kubelet)[1680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 01:43:16.603964 kubelet[1680]: E0707 01:43:16.603861 1680 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 01:43:16.606066 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 01:43:16.606230 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 01:43:17.042951 sshd[1670]: Accepted publickey for core from 172.24.4.1 port 35636 ssh2: RSA SHA256:XsuxnznZdnYTCwjlq8QZL4w9hmgGain8XiwOK8NYnOI Jul 7 01:43:17.046995 sshd[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:43:17.063415 systemd-logind[1449]: New session 10 of user core. Jul 7 01:43:17.072638 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 01:43:17.520005 sudo[1690]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 01:43:17.520801 sudo[1690]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 01:43:17.531137 sudo[1690]: pam_unix(sudo:session): session closed for user root Jul 7 01:43:17.544465 sudo[1689]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 7 01:43:17.545186 sudo[1689]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 01:43:17.577861 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 7 01:43:17.600597 auditctl[1693]: No rules Jul 7 01:43:17.601508 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 01:43:17.602023 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 7 01:43:17.612161 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 01:43:17.707571 augenrules[1711]: No rules Jul 7 01:43:17.708989 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 01:43:17.712240 sudo[1689]: pam_unix(sudo:session): session closed for user root Jul 7 01:43:17.920542 sshd[1670]: pam_unix(sshd:session): session closed for user core Jul 7 01:43:17.934648 systemd[1]: sshd@7-172.24.4.32:22-172.24.4.1:35636.service: Deactivated successfully. Jul 7 01:43:17.938546 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 01:43:17.942460 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. Jul 7 01:43:17.949906 systemd[1]: Started sshd@8-172.24.4.32:22-172.24.4.1:35638.service - OpenSSH per-connection server daemon (172.24.4.1:35638). Jul 7 01:43:17.953729 systemd-logind[1449]: Removed session 10. Jul 7 01:43:19.051453 sshd[1719]: Accepted publickey for core from 172.24.4.1 port 35638 ssh2: RSA SHA256:XsuxnznZdnYTCwjlq8QZL4w9hmgGain8XiwOK8NYnOI Jul 7 01:43:19.054600 sshd[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:43:19.066107 systemd-logind[1449]: New session 11 of user core. Jul 7 01:43:19.075641 systemd[1]: Started session-11.scope - Session 11 of User core. 
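The sudo sequence above deletes the shipped audit rule files (80-selinux.rules, 99-default.rules) and restarts audit-rules, after which both auditctl and augenrules report "No rules" — the kernel audit rule set is intentionally emptied. A hedged sketch of putting a rule back through the same rules.d layout; the watched path and key name are illustrative assumptions:

    # Sketch: recreate a single audit watch rule and reload, using the
    # rules.d layout the log shows being emptied. Requires root.
    import pathlib, subprocess

    rule = "-w /etc/kubernetes/ -p wa -k kube-config\n"
    pathlib.Path("/etc/audit/rules.d/90-example.rules").write_text(rule)
    subprocess.run(["augenrules", "--load"], check=True)  # rebuilds audit.rules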
Jul 7 01:43:19.523816 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 01:43:19.524482 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 01:43:20.519657 (dockerd)[1739]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 01:43:20.519789 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 01:43:21.188778 dockerd[1739]: time="2025-07-07T01:43:21.188690889Z" level=info msg="Starting up" Jul 7 01:43:21.389167 systemd[1]: var-lib-docker-metacopy\x2dcheck1248082852-merged.mount: Deactivated successfully. Jul 7 01:43:21.415851 dockerd[1739]: time="2025-07-07T01:43:21.415797037Z" level=info msg="Loading containers: start." Jul 7 01:43:21.603690 kernel: Initializing XFRM netlink socket Jul 7 01:43:21.749833 systemd-networkd[1378]: docker0: Link UP Jul 7 01:43:21.767779 dockerd[1739]: time="2025-07-07T01:43:21.767707635Z" level=info msg="Loading containers: done." Jul 7 01:43:21.794339 dockerd[1739]: time="2025-07-07T01:43:21.794029211Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 01:43:21.794339 dockerd[1739]: time="2025-07-07T01:43:21.794180485Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 7 01:43:21.794907 dockerd[1739]: time="2025-07-07T01:43:21.794394807Z" level=info msg="Daemon has completed initialization" Jul 7 01:43:21.846636 dockerd[1739]: time="2025-07-07T01:43:21.846417275Z" level=info msg="API listen on /run/docker.sock" Jul 7 01:43:21.847265 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 01:43:23.659864 containerd[1463]: time="2025-07-07T01:43:23.659420576Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 7 01:43:24.539031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2608211996.mount: Deactivated successfully. 
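Once dockerd logs "API listen on /run/docker.sock", the daemon answers the usual Docker Engine HTTP API over that UNIX socket. A minimal stdlib probe, assuming only that the socket path from the log is readable (root or docker group, same as the docker CLI):

    # Sketch: confirm the freshly started daemon answers on its UNIX
    # socket, using only the standard library (raw HTTP/1.0 request).
    import socket

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/run/docker.sock")
    s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
    reply = b""
    while chunk := s.recv(4096):
        reply += chunk
    print(reply.decode(errors="replace"))  # status line + JSON version info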
Jul 7 01:43:26.390367 containerd[1463]: time="2025-07-07T01:43:26.389041127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:43:26.392603 containerd[1463]: time="2025-07-07T01:43:26.392493510Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799053" Jul 7 01:43:26.396310 containerd[1463]: time="2025-07-07T01:43:26.394927456Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:43:26.398513 containerd[1463]: time="2025-07-07T01:43:26.398479707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:43:26.400363 containerd[1463]: time="2025-07-07T01:43:26.400333202Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 2.740417073s" Jul 7 01:43:26.400485 containerd[1463]: time="2025-07-07T01:43:26.400466202Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 7 01:43:26.412046 containerd[1463]: time="2025-07-07T01:43:26.411990274Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 7 01:43:26.749398 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 7 01:43:26.761749 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 01:43:27.123595 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 01:43:27.124798 (kubelet)[1942]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 01:43:27.529135 kubelet[1942]: E0707 01:43:27.528911 1942 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 01:43:27.534756 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 01:43:27.535349 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
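systemd keeps scheduling restart jobs for the failing kubelet ("restart counter is at 1" through "at 4" across this section); the same counter can be read back from the unit's NRestarts property. A sketch, assuming systemctl is on PATH:

    # Sketch: read kubelet's restart counter, the number systemd
    # prints as "restart counter is at N" in the log above.
    import subprocess

    out = subprocess.run(
        ["systemctl", "show", "kubelet.service", "--property=NRestarts"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()          # e.g. "NRestarts=4"
    print(out)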
Jul 7 01:43:28.451204 containerd[1463]: time="2025-07-07T01:43:28.451038576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:43:28.454029 containerd[1463]: time="2025-07-07T01:43:28.453976910Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783920" Jul 7 01:43:28.456235 containerd[1463]: time="2025-07-07T01:43:28.455680071Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:43:28.460262 containerd[1463]: time="2025-07-07T01:43:28.460222581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:43:28.461427 containerd[1463]: time="2025-07-07T01:43:28.461396679Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 2.04935623s" Jul 7 01:43:28.461574 containerd[1463]: time="2025-07-07T01:43:28.461554054Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 7 01:43:28.462306 containerd[1463]: time="2025-07-07T01:43:28.462263688Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 7 01:43:30.186948 containerd[1463]: time="2025-07-07T01:43:30.186789824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:43:30.188728 containerd[1463]: time="2025-07-07T01:43:30.188659809Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176924" Jul 7 01:43:30.190316 containerd[1463]: time="2025-07-07T01:43:30.190249175Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:43:30.193852 containerd[1463]: time="2025-07-07T01:43:30.193795971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:43:30.195233 containerd[1463]: time="2025-07-07T01:43:30.195077039Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.732666425s" Jul 7 01:43:30.195233 containerd[1463]: time="2025-07-07T01:43:30.195119318Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 7 01:43:30.196896 containerd[1463]: 
time="2025-07-07T01:43:30.196737400Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 7 01:43:31.670727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3526836764.mount: Deactivated successfully. Jul 7 01:43:32.278775 containerd[1463]: time="2025-07-07T01:43:32.278702652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:43:32.280320 containerd[1463]: time="2025-07-07T01:43:32.280182983Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895371" Jul 7 01:43:32.281962 containerd[1463]: time="2025-07-07T01:43:32.281911591Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:43:32.284444 containerd[1463]: time="2025-07-07T01:43:32.284418933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:43:32.285454 containerd[1463]: time="2025-07-07T01:43:32.285142361Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 2.088323488s" Jul 7 01:43:32.285454 containerd[1463]: time="2025-07-07T01:43:32.285192727Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 7 01:43:32.285817 containerd[1463]: time="2025-07-07T01:43:32.285789759Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 01:43:33.019353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1275379256.mount: Deactivated successfully. 
Jul 7 01:43:34.151613 containerd[1463]: time="2025-07-07T01:43:34.151498383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:43:34.153498 containerd[1463]: time="2025-07-07T01:43:34.153426806Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jul 7 01:43:34.155133 containerd[1463]: time="2025-07-07T01:43:34.155087776Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:43:34.597369 containerd[1463]: time="2025-07-07T01:43:34.596235582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:43:34.601112 containerd[1463]: time="2025-07-07T01:43:34.600735004Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.314866418s" Jul 7 01:43:34.601112 containerd[1463]: time="2025-07-07T01:43:34.600942975Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 7 01:43:34.608320 containerd[1463]: time="2025-07-07T01:43:34.606977060Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 01:43:35.247838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2395730795.mount: Deactivated successfully. 
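Each successful pull above records both a repo tag and a repo digest; the digest is the sha256 of the exact manifest the registry served, which is what makes a digest-pinned reference immutable where a tag is not. A sketch of the digest arithmetic on stand-in bytes (the manifest content below is made up; real digests, like the sha256:9caab... recorded for coredns, are computed over the registry's actual manifest):

    # Sketch: how a digest-pinned image reference is formed.
    import hashlib

    manifest = b'{"schemaVersion": 2, "mediaType": "application/vnd.oci.image.manifest.v1+json"}'
    digest = "sha256:" + hashlib.sha256(manifest).hexdigest()
    print(f"registry.k8s.io/coredns/coredns@{digest}")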
Jul 7 01:43:35.259075 containerd[1463]: time="2025-07-07T01:43:35.258942617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:43:35.261744 containerd[1463]: time="2025-07-07T01:43:35.261609247Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jul 7 01:43:35.264055 containerd[1463]: time="2025-07-07T01:43:35.263918043Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:43:35.271755 containerd[1463]: time="2025-07-07T01:43:35.271633024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:43:35.274376 containerd[1463]: time="2025-07-07T01:43:35.273749680Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 666.60721ms" Jul 7 01:43:35.274376 containerd[1463]: time="2025-07-07T01:43:35.273833958Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 7 01:43:35.276488 containerd[1463]: time="2025-07-07T01:43:35.275982333Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 7 01:43:36.083089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2236875059.mount: Deactivated successfully. Jul 7 01:43:36.458413 update_engine[1451]: I20250707 01:43:36.458014 1451 update_attempter.cc:509] Updating boot flags... Jul 7 01:43:36.546930 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2045) Jul 7 01:43:36.628668 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2047) Jul 7 01:43:36.695527 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2047) Jul 7 01:43:37.749030 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 7 01:43:37.761613 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 01:43:38.046477 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 01:43:38.053097 (kubelet)[2097]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 01:43:38.222207 kubelet[2097]: E0707 01:43:38.222145 2097 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 01:43:38.225896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 01:43:38.226491 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 7 01:43:39.701355 containerd[1463]: time="2025-07-07T01:43:39.699849252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:43:39.702581 containerd[1463]: time="2025-07-07T01:43:39.702464964Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368" Jul 7 01:43:39.708482 containerd[1463]: time="2025-07-07T01:43:39.708384960Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:43:39.718661 containerd[1463]: time="2025-07-07T01:43:39.718583409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:43:39.722481 containerd[1463]: time="2025-07-07T01:43:39.722392150Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.446335037s" Jul 7 01:43:39.722619 containerd[1463]: time="2025-07-07T01:43:39.722482760Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 7 01:43:43.955848 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 01:43:43.980926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 01:43:44.064953 systemd[1]: Reloading requested from client PID 2134 ('systemctl') (unit session-11.scope)... Jul 7 01:43:44.064995 systemd[1]: Reloading... Jul 7 01:43:44.192867 zram_generator::config[2171]: No configuration found. Jul 7 01:43:44.359579 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 01:43:44.456625 systemd[1]: Reloading finished in 391 ms. Jul 7 01:43:44.533366 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 01:43:44.533484 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 01:43:44.534050 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 01:43:44.540098 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 01:43:45.013591 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 01:43:45.034966 (kubelet)[2237]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 01:43:45.145659 kubelet[2237]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 01:43:45.145659 kubelet[2237]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
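The deprecation warnings that follow this restart (--container-runtime-endpoint, --pod-infra-container-image, --volume-plugin-dir) all say the same thing: flag-era settings are being folded into the kubelet's config file. A sketch of the commonly documented mapping to KubeletConfiguration v1beta1 field names — taken from the upstream API, not from this node's files:

    # Sketch: config-file fields that replace the deprecated flags
    # warned about below (KubeletConfiguration v1beta1 field names).
    FLAG_TO_CONFIG_FIELD = {
        "--container-runtime-endpoint": "containerRuntimeEndpoint",
        "--volume-plugin-dir": "volumePluginDir",
        # --pod-infra-container-image is being removed outright in 1.35;
        # the sandbox image will come from the CRI runtime instead.
        "--pod-infra-container-image": None,
    }
    for flag, field in FLAG_TO_CONFIG_FIELD.items():
        print(f"{flag:35} -> {field or 'no config equivalent (CRI-provided)'}")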
Jul 7 01:43:45.145659 kubelet[2237]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 01:43:45.146817 kubelet[2237]: I0707 01:43:45.145749 2237 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 01:43:45.798907 kubelet[2237]: I0707 01:43:45.798829 2237 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 01:43:45.798907 kubelet[2237]: I0707 01:43:45.798864 2237 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 01:43:45.799281 kubelet[2237]: I0707 01:43:45.799194 2237 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 01:43:45.851810 kubelet[2237]: E0707 01:43:45.851097 2237 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.32:6443: connect: connection refused" logger="UnhandledError" Jul 7 01:43:45.851810 kubelet[2237]: I0707 01:43:45.851354 2237 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 01:43:45.870387 kubelet[2237]: E0707 01:43:45.870181 2237 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 01:43:45.870387 kubelet[2237]: I0707 01:43:45.870248 2237 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 01:43:45.877112 kubelet[2237]: I0707 01:43:45.877044 2237 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 01:43:45.880765 kubelet[2237]: I0707 01:43:45.880684 2237 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 01:43:45.880996 kubelet[2237]: I0707 01:43:45.880726 2237 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-4-7-c803550fde.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 01:43:45.881684 kubelet[2237]: I0707 01:43:45.881034 2237 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 01:43:45.881684 kubelet[2237]: I0707 01:43:45.881048 2237 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 01:43:45.881684 kubelet[2237]: I0707 01:43:45.881546 2237 state_mem.go:36] "Initialized new in-memory state store" Jul 7 01:43:45.886719 kubelet[2237]: I0707 01:43:45.886594 2237 kubelet.go:446] "Attempting to sync node with API server" Jul 7 01:43:45.886950 kubelet[2237]: I0707 01:43:45.886795 2237 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 01:43:45.886950 kubelet[2237]: I0707 01:43:45.886827 2237 kubelet.go:352] "Adding apiserver pod source" Jul 7 01:43:45.886950 kubelet[2237]: I0707 01:43:45.886844 2237 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 01:43:45.898394 kubelet[2237]: I0707 01:43:45.897020 2237 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 01:43:45.898712 kubelet[2237]: I0707 01:43:45.898674 2237 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 01:43:45.901844 kubelet[2237]: W0707 01:43:45.901797 2237 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 7 01:43:45.903448 kubelet[2237]: W0707 01:43:45.901815 2237 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.32:6443: connect: connection refused
Jul 7 01:43:45.903654 kubelet[2237]: E0707 01:43:45.903459 2237 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.32:6443: connect: connection refused" logger="UnhandledError"
Jul 7 01:43:45.903654 kubelet[2237]: W0707 01:43:45.901952 2237 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-7-c803550fde.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.32:6443: connect: connection refused
Jul 7 01:43:45.903654 kubelet[2237]: E0707 01:43:45.903503 2237 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-7-c803550fde.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.32:6443: connect: connection refused" logger="UnhandledError"
Jul 7 01:43:45.908726 kubelet[2237]: I0707 01:43:45.908684 2237 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 7 01:43:45.909111 kubelet[2237]: I0707 01:43:45.909074 2237 server.go:1287] "Started kubelet"
Jul 7 01:43:45.913238 kubelet[2237]: I0707 01:43:45.913196 2237 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 7 01:43:45.918394 kubelet[2237]: E0707 01:43:45.916176 2237 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.32:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.32:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-4-7-c803550fde.novalocal.184fd4ae7a23e60b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-4-7-c803550fde.novalocal,UID:ci-4081-3-4-7-c803550fde.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-4-7-c803550fde.novalocal,},FirstTimestamp:2025-07-07 01:43:45.908983307 +0000 UTC m=+0.854636451,LastTimestamp:2025-07-07 01:43:45.908983307 +0000 UTC m=+0.854636451,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-4-7-c803550fde.novalocal,}"
Jul 7 01:43:45.923375 kubelet[2237]: I0707 01:43:45.923280 2237 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 7 01:43:45.924506 kubelet[2237]: I0707 01:43:45.924465 2237 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 7 01:43:45.925076 kubelet[2237]: E0707 01:43:45.925008 2237 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-4-7-c803550fde.novalocal\" not found"
Jul 7 01:43:45.927439 kubelet[2237]: I0707 01:43:45.926160 2237 server.go:479] "Adding debug handlers to kubelet server"
Jul 7 01:43:45.927439 kubelet[2237]: I0707 01:43:45.926221 2237 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 7 01:43:45.927439 kubelet[2237]: I0707 01:43:45.926503 2237 reconciler.go:26] "Reconciler: start to sync state"
Jul 7 01:43:45.928016 kubelet[2237]: E0707 01:43:45.927973 2237 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 7 01:43:45.928521 kubelet[2237]: E0707 01:43:45.928457 2237 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-7-c803550fde.novalocal?timeout=10s\": dial tcp 172.24.4.32:6443: connect: connection refused" interval="200ms"
Jul 7 01:43:45.932437 kubelet[2237]: W0707 01:43:45.928798 2237 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.32:6443: connect: connection refused
Jul 7 01:43:45.932862 kubelet[2237]: E0707 01:43:45.932788 2237 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.32:6443: connect: connection refused" logger="UnhandledError"
Jul 7 01:43:45.933109 kubelet[2237]: I0707 01:43:45.930850 2237 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 7 01:43:45.933953 kubelet[2237]: I0707 01:43:45.930538 2237 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 7 01:43:45.935010 kubelet[2237]: I0707 01:43:45.934935 2237 factory.go:221] Registration of the systemd container factory successfully
Jul 7 01:43:45.937207 kubelet[2237]: I0707 01:43:45.935094 2237 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 7 01:43:45.937207 kubelet[2237]: I0707 01:43:45.936774 2237 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 7 01:43:45.939013 kubelet[2237]: I0707 01:43:45.938871 2237 factory.go:221] Registration of the containerd container factory successfully
Jul 7 01:43:45.977862 kubelet[2237]: I0707 01:43:45.976889 2237 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 7 01:43:45.977862 kubelet[2237]: I0707 01:43:45.976909 2237 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 7 01:43:45.977862 kubelet[2237]: I0707 01:43:45.976936 2237 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 01:43:45.982316 kubelet[2237]: I0707 01:43:45.982250 2237 policy_none.go:49] "None policy: Start"
Jul 7 01:43:45.982316 kubelet[2237]: I0707 01:43:45.982317 2237 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 7 01:43:45.982440 kubelet[2237]: I0707 01:43:45.982364 2237 state_mem.go:35] "Initializing new in-memory state store"
Jul 7 01:43:45.989271 kubelet[2237]: I0707 01:43:45.989167 2237 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 7 01:43:45.993132 kubelet[2237]: I0707 01:43:45.993045 2237 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 7 01:43:45.994442 kubelet[2237]: I0707 01:43:45.994425 2237 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 7 01:43:45.994584 kubelet[2237]: I0707 01:43:45.994570 2237 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 7 01:43:45.994677 kubelet[2237]: I0707 01:43:45.994667 2237 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 7 01:43:45.995836 kubelet[2237]: E0707 01:43:45.994817 2237 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 7 01:43:45.996818 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 7 01:43:46.000011 kubelet[2237]: W0707 01:43:45.998901 2237 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.32:6443: connect: connection refused
Jul 7 01:43:46.000011 kubelet[2237]: E0707 01:43:45.998946 2237 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.32:6443: connect: connection refused" logger="UnhandledError"
Jul 7 01:43:46.015957 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 7 01:43:46.020411 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 7 01:43:46.025312 kubelet[2237]: E0707 01:43:46.025265 2237 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-4-7-c803550fde.novalocal\" not found"
Jul 7 01:43:46.028770 kubelet[2237]: I0707 01:43:46.028494 2237 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 7 01:43:46.028856 kubelet[2237]: I0707 01:43:46.028829 2237 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 7 01:43:46.028934 kubelet[2237]: I0707 01:43:46.028855 2237 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 7 01:43:46.029276 kubelet[2237]: I0707 01:43:46.029237 2237 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 7 01:43:46.031861 kubelet[2237]: E0707 01:43:46.031739 2237 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 7 01:43:46.032335 kubelet[2237]: E0707 01:43:46.032318 2237 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-4-7-c803550fde.novalocal\" not found"
Jul 7 01:43:46.120953 systemd[1]: Created slice kubepods-burstable-pod5b547f780fd5c63c3b9326ef1578e020.slice - libcontainer container kubepods-burstable-pod5b547f780fd5c63c3b9326ef1578e020.slice.
Jul 7 01:43:46.128891 kubelet[2237]: I0707 01:43:46.128811 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b5d63becdc5ff33f65f2045993e7459-ca-certs\") pod \"kube-controller-manager-ci-4081-3-4-7-c803550fde.novalocal\" (UID: \"9b5d63becdc5ff33f65f2045993e7459\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:46.129425 kubelet[2237]: I0707 01:43:46.128914 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b5d63becdc5ff33f65f2045993e7459-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-4-7-c803550fde.novalocal\" (UID: \"9b5d63becdc5ff33f65f2045993e7459\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:46.129425 kubelet[2237]: I0707 01:43:46.128975 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b5d63becdc5ff33f65f2045993e7459-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-4-7-c803550fde.novalocal\" (UID: \"9b5d63becdc5ff33f65f2045993e7459\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:46.129425 kubelet[2237]: I0707 01:43:46.129025 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b5d63becdc5ff33f65f2045993e7459-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-4-7-c803550fde.novalocal\" (UID: \"9b5d63becdc5ff33f65f2045993e7459\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:46.129425 kubelet[2237]: I0707 01:43:46.129075 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b5d63becdc5ff33f65f2045993e7459-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-4-7-c803550fde.novalocal\" (UID: \"9b5d63becdc5ff33f65f2045993e7459\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:46.129636 kubelet[2237]: I0707 01:43:46.129126 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5b547f780fd5c63c3b9326ef1578e020-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-4-7-c803550fde.novalocal\" (UID: \"5b547f780fd5c63c3b9326ef1578e020\") " pod="kube-system/kube-apiserver-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:46.129636 kubelet[2237]: I0707 01:43:46.129200 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/744f819765e9cc58844f42c35f1cd2d0-kubeconfig\") pod \"kube-scheduler-ci-4081-3-4-7-c803550fde.novalocal\" (UID: \"744f819765e9cc58844f42c35f1cd2d0\") " pod="kube-system/kube-scheduler-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:46.129636 kubelet[2237]: I0707 01:43:46.129249 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5b547f780fd5c63c3b9326ef1578e020-ca-certs\") pod \"kube-apiserver-ci-4081-3-4-7-c803550fde.novalocal\" (UID: \"5b547f780fd5c63c3b9326ef1578e020\") " pod="kube-system/kube-apiserver-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:46.129636 kubelet[2237]: I0707 01:43:46.129341 2237 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5b547f780fd5c63c3b9326ef1578e020-k8s-certs\") pod \"kube-apiserver-ci-4081-3-4-7-c803550fde.novalocal\" (UID: \"5b547f780fd5c63c3b9326ef1578e020\") " pod="kube-system/kube-apiserver-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:46.131533 kubelet[2237]: I0707 01:43:46.131479 2237 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:46.132680 kubelet[2237]: E0707 01:43:46.132573 2237 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.32:6443/api/v1/nodes\": dial tcp 172.24.4.32:6443: connect: connection refused" node="ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:46.133248 kubelet[2237]: E0707 01:43:46.133192 2237 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-7-c803550fde.novalocal?timeout=10s\": dial tcp 172.24.4.32:6443: connect: connection refused" interval="400ms"
Jul 7 01:43:46.136247 kubelet[2237]: E0707 01:43:46.135967 2237 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-7-c803550fde.novalocal\" not found" node="ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:46.148609 systemd[1]: Created slice kubepods-burstable-pod9b5d63becdc5ff33f65f2045993e7459.slice - libcontainer container kubepods-burstable-pod9b5d63becdc5ff33f65f2045993e7459.slice.
Jul 7 01:43:46.153783 kubelet[2237]: E0707 01:43:46.153498 2237 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-7-c803550fde.novalocal\" not found" node="ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:46.160396 systemd[1]: Created slice kubepods-burstable-pod744f819765e9cc58844f42c35f1cd2d0.slice - libcontainer container kubepods-burstable-pod744f819765e9cc58844f42c35f1cd2d0.slice.
Jul 7 01:43:46.164429 kubelet[2237]: E0707 01:43:46.164274 2237 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-7-c803550fde.novalocal\" not found" node="ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:46.338777 kubelet[2237]: I0707 01:43:46.337966 2237 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:46.339223 kubelet[2237]: E0707 01:43:46.338997 2237 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.32:6443/api/v1/nodes\": dial tcp 172.24.4.32:6443: connect: connection refused" node="ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:46.440559 containerd[1463]: time="2025-07-07T01:43:46.439096234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-4-7-c803550fde.novalocal,Uid:5b547f780fd5c63c3b9326ef1578e020,Namespace:kube-system,Attempt:0,}"
Jul 7 01:43:46.455453 containerd[1463]: time="2025-07-07T01:43:46.455263258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-4-7-c803550fde.novalocal,Uid:9b5d63becdc5ff33f65f2045993e7459,Namespace:kube-system,Attempt:0,}"
Jul 7 01:43:46.467378 containerd[1463]: time="2025-07-07T01:43:46.466903225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-4-7-c803550fde.novalocal,Uid:744f819765e9cc58844f42c35f1cd2d0,Namespace:kube-system,Attempt:0,}"
Jul 7 01:43:46.535112 kubelet[2237]: E0707 01:43:46.535005 2237 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-7-c803550fde.novalocal?timeout=10s\": dial tcp 172.24.4.32:6443: connect: connection refused" interval="800ms"
Jul 7 01:43:46.746332 kubelet[2237]: I0707 01:43:46.746049 2237 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:46.747221 kubelet[2237]: E0707 01:43:46.747137 2237 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.32:6443/api/v1/nodes\": dial tcp 172.24.4.32:6443: connect: connection refused" node="ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:46.796184 kubelet[2237]: W0707 01:43:46.795863 2237 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.32:6443: connect: connection refused
Jul 7 01:43:46.796184 kubelet[2237]: E0707 01:43:46.796171 2237 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.32:6443: connect: connection refused" logger="UnhandledError"
Jul 7 01:43:46.913365 kubelet[2237]: W0707 01:43:46.913132 2237 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.32:6443: connect: connection refused
Jul 7 01:43:46.913365 kubelet[2237]: E0707 01:43:46.913250 2237 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.32:6443: connect: connection refused" logger="UnhandledError"
Jul 7 01:43:46.985607 kubelet[2237]: W0707 01:43:46.985401 2237 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.32:6443: connect: connection refused
Jul 7 01:43:46.985607 kubelet[2237]: E0707 01:43:46.985537 2237 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.32:6443: connect: connection refused" logger="UnhandledError"
Jul 7 01:43:47.109636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1322610098.mount: Deactivated successfully.
Jul 7 01:43:47.125230 containerd[1463]: time="2025-07-07T01:43:47.125036642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 01:43:47.128970 containerd[1463]: time="2025-07-07T01:43:47.128749881Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 7 01:43:47.130592 containerd[1463]: time="2025-07-07T01:43:47.130499366Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 01:43:47.132679 containerd[1463]: time="2025-07-07T01:43:47.132571193Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 01:43:47.136132 containerd[1463]: time="2025-07-07T01:43:47.135901464Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 7 01:43:47.138121 containerd[1463]: time="2025-07-07T01:43:47.138046028Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Jul 7 01:43:47.138677 containerd[1463]: time="2025-07-07T01:43:47.138489601Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 01:43:47.145982 containerd[1463]: time="2025-07-07T01:43:47.145862438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 01:43:47.151702 containerd[1463]: time="2025-07-07T01:43:47.150886478Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 683.776044ms"
Jul 7 01:43:47.159347 containerd[1463]: time="2025-07-07T01:43:47.159205150Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 703.694217ms"
Jul 7 01:43:47.164744 containerd[1463]: time="2025-07-07T01:43:47.164692399Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 725.131932ms"
Jul 7 01:43:47.226117 kubelet[2237]: W0707 01:43:47.225983 2237 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-7-c803550fde.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.32:6443: connect: connection refused
Jul 7 01:43:47.226117 kubelet[2237]: E0707 01:43:47.226069 2237 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-7-c803550fde.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.32:6443: connect: connection refused" logger="UnhandledError"
Jul 7 01:43:47.335954 kubelet[2237]: E0707 01:43:47.335805 2237 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-7-c803550fde.novalocal?timeout=10s\": dial tcp 172.24.4.32:6443: connect: connection refused" interval="1.6s"
Jul 7 01:43:47.460604 containerd[1463]: time="2025-07-07T01:43:47.459045609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 01:43:47.460604 containerd[1463]: time="2025-07-07T01:43:47.459337046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 01:43:47.460604 containerd[1463]: time="2025-07-07T01:43:47.459493079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 01:43:47.460604 containerd[1463]: time="2025-07-07T01:43:47.460434917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 01:43:47.490370 containerd[1463]: time="2025-07-07T01:43:47.489919714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 01:43:47.490370 containerd[1463]: time="2025-07-07T01:43:47.490043726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 01:43:47.490370 containerd[1463]: time="2025-07-07T01:43:47.490065527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 01:43:47.490370 containerd[1463]: time="2025-07-07T01:43:47.490199538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 01:43:47.491528 containerd[1463]: time="2025-07-07T01:43:47.491114456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 01:43:47.491675 containerd[1463]: time="2025-07-07T01:43:47.491616388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 01:43:47.493689 systemd[1]: Started cri-containerd-648f161f3a3bbbdb842552a0235acef725c6bf3a295423d4ff07b981edc0798d.scope - libcontainer container 648f161f3a3bbbdb842552a0235acef725c6bf3a295423d4ff07b981edc0798d.
Jul 7 01:43:47.495652 containerd[1463]: time="2025-07-07T01:43:47.495344415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 01:43:47.495652 containerd[1463]: time="2025-07-07T01:43:47.495464269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 01:43:47.527518 systemd[1]: Started cri-containerd-de4b547f4b9b3bd4319d1467cf54412b4411558115643b930d17bda46e69cef0.scope - libcontainer container de4b547f4b9b3bd4319d1467cf54412b4411558115643b930d17bda46e69cef0.
Jul 7 01:43:47.550455 systemd[1]: Started cri-containerd-83b686223ab4f9a96a32224522034acf280fc0f96fe89c3ee0a203959f8c35e6.scope - libcontainer container 83b686223ab4f9a96a32224522034acf280fc0f96fe89c3ee0a203959f8c35e6.
Jul 7 01:43:47.555775 kubelet[2237]: I0707 01:43:47.555747 2237 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:47.557474 kubelet[2237]: E0707 01:43:47.557423 2237 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.32:6443/api/v1/nodes\": dial tcp 172.24.4.32:6443: connect: connection refused" node="ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:47.608075 containerd[1463]: time="2025-07-07T01:43:47.608023961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-4-7-c803550fde.novalocal,Uid:9b5d63becdc5ff33f65f2045993e7459,Namespace:kube-system,Attempt:0,} returns sandbox id \"648f161f3a3bbbdb842552a0235acef725c6bf3a295423d4ff07b981edc0798d\""
Jul 7 01:43:47.613024 containerd[1463]: time="2025-07-07T01:43:47.612882560Z" level=info msg="CreateContainer within sandbox \"648f161f3a3bbbdb842552a0235acef725c6bf3a295423d4ff07b981edc0798d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 7 01:43:47.621860 containerd[1463]: time="2025-07-07T01:43:47.621785398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-4-7-c803550fde.novalocal,Uid:5b547f780fd5c63c3b9326ef1578e020,Namespace:kube-system,Attempt:0,} returns sandbox id \"de4b547f4b9b3bd4319d1467cf54412b4411558115643b930d17bda46e69cef0\""
Jul 7 01:43:47.628013 containerd[1463]: time="2025-07-07T01:43:47.627832317Z" level=info msg="CreateContainer within sandbox \"de4b547f4b9b3bd4319d1467cf54412b4411558115643b930d17bda46e69cef0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 7 01:43:47.644010 containerd[1463]: time="2025-07-07T01:43:47.643944747Z" level=info msg="CreateContainer within sandbox \"648f161f3a3bbbdb842552a0235acef725c6bf3a295423d4ff07b981edc0798d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9091a9fe41ba6933279b2d96bde0f2c696394873a0881ad7c06cea49b577b881\""
Jul 7 01:43:47.644266 containerd[1463]: time="2025-07-07T01:43:47.644236705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-4-7-c803550fde.novalocal,Uid:744f819765e9cc58844f42c35f1cd2d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"83b686223ab4f9a96a32224522034acf280fc0f96fe89c3ee0a203959f8c35e6\""
Jul 7 01:43:47.645201 containerd[1463]: time="2025-07-07T01:43:47.645096077Z" level=info msg="StartContainer for \"9091a9fe41ba6933279b2d96bde0f2c696394873a0881ad7c06cea49b577b881\""
Jul 7 01:43:47.653997 containerd[1463]: time="2025-07-07T01:43:47.653944735Z" level=info msg="CreateContainer within sandbox \"83b686223ab4f9a96a32224522034acf280fc0f96fe89c3ee0a203959f8c35e6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 7 01:43:47.671850 containerd[1463]: time="2025-07-07T01:43:47.671777694Z" level=info msg="CreateContainer within sandbox \"de4b547f4b9b3bd4319d1467cf54412b4411558115643b930d17bda46e69cef0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"749a8d73e8e69a8b945d45948be4a23fc85ed94661d5248b56f20790c81c7de8\""
Jul 7 01:43:47.673327 containerd[1463]: time="2025-07-07T01:43:47.672625435Z" level=info msg="StartContainer for \"749a8d73e8e69a8b945d45948be4a23fc85ed94661d5248b56f20790c81c7de8\""
Jul 7 01:43:47.686958 systemd[1]: Started cri-containerd-9091a9fe41ba6933279b2d96bde0f2c696394873a0881ad7c06cea49b577b881.scope - libcontainer container 9091a9fe41ba6933279b2d96bde0f2c696394873a0881ad7c06cea49b577b881.
Jul 7 01:43:47.694073 containerd[1463]: time="2025-07-07T01:43:47.693906776Z" level=info msg="CreateContainer within sandbox \"83b686223ab4f9a96a32224522034acf280fc0f96fe89c3ee0a203959f8c35e6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"36a797a317cbcf7d9e1a541f4ad9d223c4faf342c5bd956d548e1dfa269a2ac0\""
Jul 7 01:43:47.696320 containerd[1463]: time="2025-07-07T01:43:47.695740908Z" level=info msg="StartContainer for \"36a797a317cbcf7d9e1a541f4ad9d223c4faf342c5bd956d548e1dfa269a2ac0\""
Jul 7 01:43:47.735457 systemd[1]: Started cri-containerd-749a8d73e8e69a8b945d45948be4a23fc85ed94661d5248b56f20790c81c7de8.scope - libcontainer container 749a8d73e8e69a8b945d45948be4a23fc85ed94661d5248b56f20790c81c7de8.
Jul 7 01:43:47.761609 systemd[1]: Started cri-containerd-36a797a317cbcf7d9e1a541f4ad9d223c4faf342c5bd956d548e1dfa269a2ac0.scope - libcontainer container 36a797a317cbcf7d9e1a541f4ad9d223c4faf342c5bd956d548e1dfa269a2ac0.
Jul 7 01:43:47.774872 containerd[1463]: time="2025-07-07T01:43:47.774808011Z" level=info msg="StartContainer for \"9091a9fe41ba6933279b2d96bde0f2c696394873a0881ad7c06cea49b577b881\" returns successfully"
Jul 7 01:43:47.825900 containerd[1463]: time="2025-07-07T01:43:47.825839586Z" level=info msg="StartContainer for \"749a8d73e8e69a8b945d45948be4a23fc85ed94661d5248b56f20790c81c7de8\" returns successfully"
Jul 7 01:43:47.858338 containerd[1463]: time="2025-07-07T01:43:47.858242019Z" level=info msg="StartContainer for \"36a797a317cbcf7d9e1a541f4ad9d223c4faf342c5bd956d548e1dfa269a2ac0\" returns successfully"
Jul 7 01:43:48.011899 kubelet[2237]: E0707 01:43:48.011666 2237 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-7-c803550fde.novalocal\" not found" node="ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:48.013234 kubelet[2237]: E0707 01:43:48.013204 2237 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-7-c803550fde.novalocal\" not found" node="ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:48.016905 kubelet[2237]: E0707 01:43:48.016786 2237 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-7-c803550fde.novalocal\" not found" node="ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:49.019878 kubelet[2237]: E0707 01:43:49.019839 2237 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-7-c803550fde.novalocal\" not found" node="ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:49.020456 kubelet[2237]: E0707 01:43:49.020215 2237 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-7-c803550fde.novalocal\" not found" node="ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:49.163746 kubelet[2237]: I0707 01:43:49.162690 2237 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:49.739562 kubelet[2237]: E0707 01:43:49.739495 2237 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-4-7-c803550fde.novalocal\" not found" node="ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:49.830691 kubelet[2237]: I0707 01:43:49.829579 2237 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:49.830691 kubelet[2237]: E0707 01:43:49.829624 2237 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-4-7-c803550fde.novalocal\": node \"ci-4081-3-4-7-c803550fde.novalocal\" not found"
Jul 7 01:43:49.858327 kubelet[2237]: E0707 01:43:49.858003 2237 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-4-7-c803550fde.novalocal\" not found"
Jul 7 01:43:49.946416 kubelet[2237]: E0707 01:43:49.945751 2237 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-7-c803550fde.novalocal\" not found" node="ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:49.958391 kubelet[2237]: E0707 01:43:49.958338 2237 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-4-7-c803550fde.novalocal\" not found"
Jul 7 01:43:50.026418 kubelet[2237]: I0707 01:43:50.026348 2237 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:50.044521 kubelet[2237]: E0707 01:43:50.044444 2237 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-4-7-c803550fde.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:50.044521 kubelet[2237]: I0707 01:43:50.044484 2237 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:50.049643 kubelet[2237]: E0707 01:43:50.047672 2237 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-4-7-c803550fde.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:50.049643 kubelet[2237]: I0707 01:43:50.047736 2237 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:50.051468 kubelet[2237]: E0707 01:43:50.051405 2237 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-4-7-c803550fde.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:50.905111 kubelet[2237]: I0707 01:43:50.904473 2237 apiserver.go:52] "Watching apiserver"
Jul 7 01:43:50.927433 kubelet[2237]: I0707 01:43:50.927319 2237 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 7 01:43:52.793222 systemd[1]: Reloading requested from client PID 2512 ('systemctl') (unit session-11.scope)...
Jul 7 01:43:52.793372 systemd[1]: Reloading...
Jul 7 01:43:52.961405 zram_generator::config[2549]: No configuration found.
Jul 7 01:43:53.134927 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 01:43:53.242586 systemd[1]: Reloading finished in 448 ms.
Jul 7 01:43:53.296627 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 01:43:53.312395 systemd[1]: kubelet.service: Deactivated successfully.
Jul 7 01:43:53.312650 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 01:43:53.312756 systemd[1]: kubelet.service: Consumed 1.574s CPU time, 134.0M memory peak, 0B memory swap peak.
Jul 7 01:43:53.319612 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 01:43:53.639703 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 01:43:53.656751 (kubelet)[2614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 7 01:43:53.722489 kubelet[2614]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 01:43:53.722489 kubelet[2614]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 7 01:43:53.722489 kubelet[2614]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 01:43:53.723419 kubelet[2614]: I0707 01:43:53.722589 2614 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 7 01:43:53.732671 kubelet[2614]: I0707 01:43:53.732593 2614 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 7 01:43:53.732671 kubelet[2614]: I0707 01:43:53.732632 2614 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 7 01:43:53.737346 kubelet[2614]: I0707 01:43:53.735509 2614 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 7 01:43:53.738778 kubelet[2614]: I0707 01:43:53.738741 2614 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 7 01:43:53.743957 kubelet[2614]: I0707 01:43:53.743869 2614 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 7 01:43:53.754365 kubelet[2614]: E0707 01:43:53.752503 2614 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 7 01:43:53.754365 kubelet[2614]: I0707 01:43:53.752571 2614 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 7 01:43:53.756794 kubelet[2614]: I0707 01:43:53.756688 2614 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 7 01:43:53.757174 kubelet[2614]: I0707 01:43:53.757089 2614 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 7 01:43:53.757461 kubelet[2614]: I0707 01:43:53.757133 2614 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-4-7-c803550fde.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 7 01:43:53.757461 kubelet[2614]: I0707 01:43:53.757465 2614 topology_manager.go:138] "Creating topology manager with none policy"
Jul 7 01:43:53.758045 kubelet[2614]: I0707 01:43:53.757479 2614 container_manager_linux.go:304] "Creating device plugin manager"
Jul 7 01:43:53.758045 kubelet[2614]: I0707 01:43:53.757579 2614 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 01:43:53.758045 kubelet[2614]: I0707 01:43:53.757767 2614 kubelet.go:446] "Attempting to sync node with API server"
Jul 7 01:43:53.758746 kubelet[2614]: I0707 01:43:53.758549 2614 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 7 01:43:53.758746 kubelet[2614]: I0707 01:43:53.758581 2614 kubelet.go:352] "Adding apiserver pod source"
Jul 7 01:43:53.758746 kubelet[2614]: I0707 01:43:53.758603 2614 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 7 01:43:53.762395 kubelet[2614]: I0707 01:43:53.762350 2614 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 7 01:43:53.763548 kubelet[2614]: I0707 01:43:53.763510 2614 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 7 01:43:53.765971 kubelet[2614]: I0707 01:43:53.765440 2614 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 7 01:43:53.765971 kubelet[2614]: I0707 01:43:53.765484 2614 server.go:1287] "Started kubelet"
Jul 7 01:43:53.772996 kubelet[2614]: I0707 01:43:53.772731 2614 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 7 01:43:53.784980 kubelet[2614]: I0707 01:43:53.784902 2614 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 7 01:43:53.785997 kubelet[2614]: I0707 01:43:53.785928 2614 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 7 01:43:53.787348 kubelet[2614]: E0707 01:43:53.786182 2614 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-4-7-c803550fde.novalocal\" not found"
Jul 7 01:43:53.787348 kubelet[2614]: I0707 01:43:53.786830 2614 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 7 01:43:53.787348 kubelet[2614]: I0707 01:43:53.787034 2614 reconciler.go:26] "Reconciler: start to sync state"
Jul 7 01:43:53.787900 kubelet[2614]: I0707 01:43:53.787816 2614 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 7 01:43:53.788255 kubelet[2614]: I0707 01:43:53.788217 2614 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 7 01:43:53.789085 kubelet[2614]: I0707 01:43:53.789031 2614 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 7 01:43:53.795298 kubelet[2614]: I0707 01:43:53.793877 2614 server.go:479] "Adding debug handlers to kubelet server"
Jul 7 01:43:53.802310 kubelet[2614]: I0707 01:43:53.802170 2614 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 7 01:43:53.803679 kubelet[2614]: I0707 01:43:53.803549 2614 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 7 01:43:53.819391 kubelet[2614]: I0707 01:43:53.819255 2614 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 7 01:43:53.819391 kubelet[2614]: I0707 01:43:53.819337 2614 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 7 01:43:53.819391 kubelet[2614]: I0707 01:43:53.819366 2614 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 7 01:43:53.819391 kubelet[2614]: I0707 01:43:53.819374 2614 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 7 01:43:53.819720 kubelet[2614]: E0707 01:43:53.819429 2614 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 7 01:43:53.833326 kubelet[2614]: I0707 01:43:53.829550 2614 factory.go:221] Registration of the containerd container factory successfully
Jul 7 01:43:53.833326 kubelet[2614]: I0707 01:43:53.829571 2614 factory.go:221] Registration of the systemd container factory successfully
Jul 7 01:43:53.846564 kubelet[2614]: E0707 01:43:53.846513 2614 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 7 01:43:53.910708 kubelet[2614]: I0707 01:43:53.910576 2614 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 7 01:43:53.910708 kubelet[2614]: I0707 01:43:53.910598 2614 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 7 01:43:53.910708 kubelet[2614]: I0707 01:43:53.910648 2614 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 01:43:53.910915 kubelet[2614]: I0707 01:43:53.910842 2614 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 7 01:43:53.910915 kubelet[2614]: I0707 01:43:53.910860 2614 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 7 01:43:53.910915 kubelet[2614]: I0707 01:43:53.910887 2614 policy_none.go:49] "None policy: Start"
Jul 7 01:43:53.910915 kubelet[2614]: I0707 01:43:53.910913 2614 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 7 01:43:53.911016 kubelet[2614]: I0707 01:43:53.910932 2614 state_mem.go:35] "Initializing new in-memory state store"
Jul 7 01:43:53.911084 kubelet[2614]: I0707 01:43:53.911063 2614 state_mem.go:75] "Updated machine memory state"
Jul 7 01:43:53.919768 kubelet[2614]: E0707 01:43:53.919679 2614 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 7 01:43:53.922811 kubelet[2614]: I0707 01:43:53.922768 2614 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 7 01:43:53.923166 kubelet[2614]: I0707 01:43:53.923018 2614 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 7 01:43:53.923166 kubelet[2614]: I0707 01:43:53.923050 2614 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 7 01:43:53.928531 kubelet[2614]: I0707 01:43:53.928476 2614 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 7 01:43:53.937246 kubelet[2614]: E0707 01:43:53.936997 2614 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 7 01:43:54.054132 kubelet[2614]: I0707 01:43:54.054079 2614 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:54.077873 kubelet[2614]: I0707 01:43:54.077462 2614 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:54.077873 kubelet[2614]: I0707 01:43:54.077576 2614 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:54.122045 kubelet[2614]: I0707 01:43:54.121986 2614 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:54.123764 kubelet[2614]: I0707 01:43:54.123716 2614 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:54.124495 kubelet[2614]: I0707 01:43:54.124347 2614 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:54.138360 kubelet[2614]: W0707 01:43:54.137755 2614 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 7 01:43:54.147503 kubelet[2614]: W0707 01:43:54.147093 2614 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 7 01:43:54.147503 kubelet[2614]: W0707 01:43:54.147169 2614 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 7 01:43:54.190740 kubelet[2614]: I0707 01:43:54.190576 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5b547f780fd5c63c3b9326ef1578e020-k8s-certs\") pod \"kube-apiserver-ci-4081-3-4-7-c803550fde.novalocal\" (UID: \"5b547f780fd5c63c3b9326ef1578e020\") " pod="kube-system/kube-apiserver-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:54.190740 kubelet[2614]: I0707 01:43:54.190669 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b5d63becdc5ff33f65f2045993e7459-ca-certs\") pod \"kube-controller-manager-ci-4081-3-4-7-c803550fde.novalocal\" (UID: \"9b5d63becdc5ff33f65f2045993e7459\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:54.190943 kubelet[2614]: I0707 01:43:54.190738 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b5d63becdc5ff33f65f2045993e7459-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-4-7-c803550fde.novalocal\" (UID: \"9b5d63becdc5ff33f65f2045993e7459\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:54.190943 kubelet[2614]: I0707 01:43:54.190793 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b5d63becdc5ff33f65f2045993e7459-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-4-7-c803550fde.novalocal\" (UID: \"9b5d63becdc5ff33f65f2045993e7459\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:54.190943 kubelet[2614]: I0707 01:43:54.190846 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5b547f780fd5c63c3b9326ef1578e020-ca-certs\") pod \"kube-apiserver-ci-4081-3-4-7-c803550fde.novalocal\" (UID: \"5b547f780fd5c63c3b9326ef1578e020\") " pod="kube-system/kube-apiserver-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:54.190943 kubelet[2614]: I0707 01:43:54.190890 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5b547f780fd5c63c3b9326ef1578e020-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-4-7-c803550fde.novalocal\" (UID: \"5b547f780fd5c63c3b9326ef1578e020\") " pod="kube-system/kube-apiserver-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:54.191124 kubelet[2614]: I0707 01:43:54.190938 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b5d63becdc5ff33f65f2045993e7459-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-4-7-c803550fde.novalocal\" (UID: \"9b5d63becdc5ff33f65f2045993e7459\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:54.191124 kubelet[2614]: I0707 01:43:54.190987 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b5d63becdc5ff33f65f2045993e7459-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-4-7-c803550fde.novalocal\" (UID: \"9b5d63becdc5ff33f65f2045993e7459\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:54.191124 kubelet[2614]: I0707 01:43:54.191034 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/744f819765e9cc58844f42c35f1cd2d0-kubeconfig\") pod \"kube-scheduler-ci-4081-3-4-7-c803550fde.novalocal\" (UID: \"744f819765e9cc58844f42c35f1cd2d0\") " pod="kube-system/kube-scheduler-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:54.759600 kubelet[2614]: I0707 01:43:54.759518 2614 apiserver.go:52] "Watching apiserver"
Jul 7 01:43:54.787892 kubelet[2614]: I0707 01:43:54.787765 2614 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 7 01:43:54.874396 kubelet[2614]: I0707 01:43:54.871105 2614 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:54.875719 kubelet[2614]: I0707 01:43:54.875657 2614 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:54.876545 kubelet[2614]: I0707 01:43:54.876491 2614 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:54.909284 kubelet[2614]: W0707 01:43:54.909061 2614 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 7 01:43:54.909284 kubelet[2614]: E0707 01:43:54.909235 2614 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-4-7-c803550fde.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:54.923736 kubelet[2614]: W0707 01:43:54.920272 2614 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 7 01:43:54.923736 kubelet[2614]: W0707 01:43:54.920446 2614 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 7 01:43:54.923736 kubelet[2614]: E0707 01:43:54.920696 2614 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-4-7-c803550fde.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:54.923736 kubelet[2614]: E0707 01:43:54.920600 2614 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-4-7-c803550fde.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-4-7-c803550fde.novalocal"
Jul 7 01:43:54.951117 kubelet[2614]: I0707 01:43:54.950868 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-4-7-c803550fde.novalocal" podStartSLOduration=0.950761347 podStartE2EDuration="950.761347ms" podCreationTimestamp="2025-07-07 01:43:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 01:43:54.947976141 +0000 UTC m=+1.284967475" watchObservedRunningTime="2025-07-07 01:43:54.950761347 +0000 UTC m=+1.287752631"
Jul 7 01:43:54.967371 kubelet[2614]: I0707 01:43:54.967225 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-4-7-c803550fde.novalocal" podStartSLOduration=0.967196515 podStartE2EDuration="967.196515ms" podCreationTimestamp="2025-07-07 01:43:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 01:43:54.965912926 +0000 UTC m=+1.302904160" watchObservedRunningTime="2025-07-07 01:43:54.967196515 +0000 UTC m=+1.304187759"
Jul 7 01:43:58.835459 kubelet[2614]: I0707 01:43:58.834371 2614 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 7 01:43:58.836621 kubelet[2614]: I0707 01:43:58.836182 2614 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 7 01:43:58.836687 containerd[1463]: time="2025-07-07T01:43:58.835858288Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 7 01:43:59.458897 kubelet[2614]: I0707 01:43:59.458734 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-4-7-c803550fde.novalocal" podStartSLOduration=5.458700937 podStartE2EDuration="5.458700937s" podCreationTimestamp="2025-07-07 01:43:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 01:43:54.998881386 +0000 UTC m=+1.335872630" watchObservedRunningTime="2025-07-07 01:43:59.458700937 +0000 UTC m=+5.795692252"
Jul 7 01:43:59.593657 systemd[1]: Created slice kubepods-besteffort-pode0550f98_d8bb_4e24_8c68_03e5109e1d71.slice - libcontainer container kubepods-besteffort-pode0550f98_d8bb_4e24_8c68_03e5109e1d71.slice.
Jul 7 01:43:59.629308 kubelet[2614]: I0707 01:43:59.629223 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvr6z\" (UniqueName: \"kubernetes.io/projected/e0550f98-d8bb-4e24-8c68-03e5109e1d71-kube-api-access-vvr6z\") pod \"kube-proxy-hknsv\" (UID: \"e0550f98-d8bb-4e24-8c68-03e5109e1d71\") " pod="kube-system/kube-proxy-hknsv"
Jul 7 01:43:59.630054 kubelet[2614]: I0707 01:43:59.630020 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e0550f98-d8bb-4e24-8c68-03e5109e1d71-kube-proxy\") pod \"kube-proxy-hknsv\" (UID: \"e0550f98-d8bb-4e24-8c68-03e5109e1d71\") " pod="kube-system/kube-proxy-hknsv"
Jul 7 01:43:59.630203 kubelet[2614]: I0707 01:43:59.630178 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0550f98-d8bb-4e24-8c68-03e5109e1d71-xtables-lock\") pod \"kube-proxy-hknsv\" (UID: \"e0550f98-d8bb-4e24-8c68-03e5109e1d71\") " pod="kube-system/kube-proxy-hknsv"
Jul 7 01:43:59.630254 kubelet[2614]: I0707 01:43:59.630215 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0550f98-d8bb-4e24-8c68-03e5109e1d71-lib-modules\") pod \"kube-proxy-hknsv\" (UID: \"e0550f98-d8bb-4e24-8c68-03e5109e1d71\") " pod="kube-system/kube-proxy-hknsv"
Jul 7 01:43:59.910360 containerd[1463]: time="2025-07-07T01:43:59.908463395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hknsv,Uid:e0550f98-d8bb-4e24-8c68-03e5109e1d71,Namespace:kube-system,Attempt:0,}"
Jul 7 01:44:00.018225 containerd[1463]: time="2025-07-07T01:44:00.017731372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 01:44:00.018225 containerd[1463]: time="2025-07-07T01:44:00.017851477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 01:44:00.018225 containerd[1463]: time="2025-07-07T01:44:00.017872306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 01:44:00.018225 containerd[1463]: time="2025-07-07T01:44:00.018006348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 01:44:00.021349 systemd[1]: Created slice kubepods-besteffort-pod14a006f2_8f19_4fdc_a06e_aad457fc5e4c.slice - libcontainer container kubepods-besteffort-pod14a006f2_8f19_4fdc_a06e_aad457fc5e4c.slice.
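The slice names systemd reports here encode each pod's UID with its dashes turned into underscores, which is why pod e0550f98-d8bb-4e24-8c68-03e5109e1d71 lands in kubepods-besteffort-pode0550f98_d8bb_4e24_8c68_03e5109e1d71.slice (systemd reserves "-" as the slice hierarchy separator). A sketch of that mapping (hypothetical helper, not kubelet code):

package main

import (
	"fmt"
	"strings"
)

// sliceName maps a QoS class and pod UID to the cgroup slice name the
// kubelet's systemd driver creates, per the names visible in the log above.
func sliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// Prints kubepods-besteffort-pode0550f98_d8bb_4e24_8c68_03e5109e1d71.slice
	fmt.Println(sliceName("besteffort", "e0550f98-d8bb-4e24-8c68-03e5109e1d71"))
}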
Jul 7 01:44:00.033607 kubelet[2614]: I0707 01:44:00.033549 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcb2s\" (UniqueName: \"kubernetes.io/projected/14a006f2-8f19-4fdc-a06e-aad457fc5e4c-kube-api-access-lcb2s\") pod \"tigera-operator-747864d56d-cdz82\" (UID: \"14a006f2-8f19-4fdc-a06e-aad457fc5e4c\") " pod="tigera-operator/tigera-operator-747864d56d-cdz82"
Jul 7 01:44:00.033607 kubelet[2614]: I0707 01:44:00.033602 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/14a006f2-8f19-4fdc-a06e-aad457fc5e4c-var-lib-calico\") pod \"tigera-operator-747864d56d-cdz82\" (UID: \"14a006f2-8f19-4fdc-a06e-aad457fc5e4c\") " pod="tigera-operator/tigera-operator-747864d56d-cdz82"
Jul 7 01:44:00.063550 systemd[1]: Started cri-containerd-dfae3c87f6e8dbc34e5ce4560c88475f93b82f221ef636b7941e2ab3dac6ab2d.scope - libcontainer container dfae3c87f6e8dbc34e5ce4560c88475f93b82f221ef636b7941e2ab3dac6ab2d.
Jul 7 01:44:00.091485 containerd[1463]: time="2025-07-07T01:44:00.091431788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hknsv,Uid:e0550f98-d8bb-4e24-8c68-03e5109e1d71,Namespace:kube-system,Attempt:0,} returns sandbox id \"dfae3c87f6e8dbc34e5ce4560c88475f93b82f221ef636b7941e2ab3dac6ab2d\""
Jul 7 01:44:00.098337 containerd[1463]: time="2025-07-07T01:44:00.098222228Z" level=info msg="CreateContainer within sandbox \"dfae3c87f6e8dbc34e5ce4560c88475f93b82f221ef636b7941e2ab3dac6ab2d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 7 01:44:00.127458 containerd[1463]: time="2025-07-07T01:44:00.127387374Z" level=info msg="CreateContainer within sandbox \"dfae3c87f6e8dbc34e5ce4560c88475f93b82f221ef636b7941e2ab3dac6ab2d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3c71988c7432d0978497dc390d91a22341c55bda91402064269e0bef60423d79\""
Jul 7 01:44:00.128543 containerd[1463]: time="2025-07-07T01:44:00.128512594Z" level=info msg="StartContainer for \"3c71988c7432d0978497dc390d91a22341c55bda91402064269e0bef60423d79\""
Jul 7 01:44:00.173975 systemd[1]: Started cri-containerd-3c71988c7432d0978497dc390d91a22341c55bda91402064269e0bef60423d79.scope - libcontainer container 3c71988c7432d0978497dc390d91a22341c55bda91402064269e0bef60423d79.
Jul 7 01:44:00.217826 containerd[1463]: time="2025-07-07T01:44:00.217746754Z" level=info msg="StartContainer for \"3c71988c7432d0978497dc390d91a22341c55bda91402064269e0bef60423d79\" returns successfully"
Jul 7 01:44:00.330480 containerd[1463]: time="2025-07-07T01:44:00.330372606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-cdz82,Uid:14a006f2-8f19-4fdc-a06e-aad457fc5e4c,Namespace:tigera-operator,Attempt:0,}"
Jul 7 01:44:00.370316 containerd[1463]: time="2025-07-07T01:44:00.366690210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 01:44:00.370316 containerd[1463]: time="2025-07-07T01:44:00.366802310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 01:44:00.370316 containerd[1463]: time="2025-07-07T01:44:00.366845561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 01:44:00.370316 containerd[1463]: time="2025-07-07T01:44:00.366968041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 01:44:00.405520 systemd[1]: Started cri-containerd-70c2beb29b80fa09fdc2e98adcb166441b92334a266089ae117ed55cc4807566.scope - libcontainer container 70c2beb29b80fa09fdc2e98adcb166441b92334a266089ae117ed55cc4807566.
Jul 7 01:44:00.450458 containerd[1463]: time="2025-07-07T01:44:00.449981580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-cdz82,Uid:14a006f2-8f19-4fdc-a06e-aad457fc5e4c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"70c2beb29b80fa09fdc2e98adcb166441b92334a266089ae117ed55cc4807566\""
Jul 7 01:44:00.453154 containerd[1463]: time="2025-07-07T01:44:00.453123675Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 7 01:44:00.769722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3073139566.mount: Deactivated successfully.
Jul 7 01:44:00.960044 kubelet[2614]: I0707 01:44:00.959842 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hknsv" podStartSLOduration=1.959803105 podStartE2EDuration="1.959803105s" podCreationTimestamp="2025-07-07 01:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 01:44:00.958972026 +0000 UTC m=+7.295963310" watchObservedRunningTime="2025-07-07 01:44:00.959803105 +0000 UTC m=+7.296794390"
Jul 7 01:44:02.314336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2541251372.mount: Deactivated successfully.
Jul 7 01:44:03.043302 containerd[1463]: time="2025-07-07T01:44:03.043227226Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:44:03.044724 containerd[1463]: time="2025-07-07T01:44:03.044584903Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Jul 7 01:44:03.046074 containerd[1463]: time="2025-07-07T01:44:03.045995480Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:44:03.049084 containerd[1463]: time="2025-07-07T01:44:03.049034951Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:44:03.050544 containerd[1463]: time="2025-07-07T01:44:03.049919420Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.596758705s"
Jul 7 01:44:03.050544 containerd[1463]: time="2025-07-07T01:44:03.049981016Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Jul 7 01:44:03.053434 containerd[1463]: time="2025-07-07T01:44:03.053387917Z" level=info msg="CreateContainer within sandbox \"70c2beb29b80fa09fdc2e98adcb166441b92334a266089ae117ed55cc4807566\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 7 01:44:03.077075 containerd[1463]: time="2025-07-07T01:44:03.077013698Z" level=info msg="CreateContainer within sandbox \"70c2beb29b80fa09fdc2e98adcb166441b92334a266089ae117ed55cc4807566\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9ba1c44c623fa290b451820d4d0908b9344cfee280f771fda068e258bb80aec5\""
Jul 7 01:44:03.078503 containerd[1463]: time="2025-07-07T01:44:03.077485493Z" level=info msg="StartContainer for \"9ba1c44c623fa290b451820d4d0908b9344cfee280f771fda068e258bb80aec5\""
Jul 7 01:44:03.109498 systemd[1]: Started cri-containerd-9ba1c44c623fa290b451820d4d0908b9344cfee280f771fda068e258bb80aec5.scope - libcontainer container 9ba1c44c623fa290b451820d4d0908b9344cfee280f771fda068e258bb80aec5.
Jul 7 01:44:03.139313 containerd[1463]: time="2025-07-07T01:44:03.139247457Z" level=info msg="StartContainer for \"9ba1c44c623fa290b451820d4d0908b9344cfee280f771fda068e258bb80aec5\" returns successfully"
Jul 7 01:44:04.593597 kubelet[2614]: I0707 01:44:04.593514 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-cdz82" podStartSLOduration=2.994429496 podStartE2EDuration="5.593487486s" podCreationTimestamp="2025-07-07 01:43:59 +0000 UTC" firstStartedPulling="2025-07-07 01:44:00.452629467 +0000 UTC m=+6.789620701" lastFinishedPulling="2025-07-07 01:44:03.051687457 +0000 UTC m=+9.388678691" observedRunningTime="2025-07-07 01:44:03.951454505 +0000 UTC m=+10.288445829" watchObservedRunningTime="2025-07-07 01:44:04.593487486 +0000 UTC m=+10.930478730"
Jul 7 01:44:08.571872 sudo[1722]: pam_unix(sudo:session): session closed for user root
Jul 7 01:44:08.733152 sshd[1719]: pam_unix(sshd:session): session closed for user core
Jul 7 01:44:08.751258 systemd[1]: sshd@8-172.24.4.32:22-172.24.4.1:35638.service: Deactivated successfully.
Jul 7 01:44:08.765628 systemd[1]: session-11.scope: Deactivated successfully.
Jul 7 01:44:08.768375 systemd[1]: session-11.scope: Consumed 8.005s CPU time, 157.0M memory peak, 0B memory swap peak.
Jul 7 01:44:08.770092 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit.
Jul 7 01:44:08.775850 systemd-logind[1449]: Removed session 11.
Jul 7 01:44:12.963791 systemd[1]: Created slice kubepods-besteffort-pod0f0c8155_52f4_45a5_a5ec_d084783e61eb.slice - libcontainer container kubepods-besteffort-pod0f0c8155_52f4_45a5_a5ec_d084783e61eb.slice.
Jul 7 01:44:13.030471 kubelet[2614]: I0707 01:44:13.030387 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px7dw\" (UniqueName: \"kubernetes.io/projected/0f0c8155-52f4-45a5-a5ec-d084783e61eb-kube-api-access-px7dw\") pod \"calico-typha-5db5b8c698-t28fw\" (UID: \"0f0c8155-52f4-45a5-a5ec-d084783e61eb\") " pod="calico-system/calico-typha-5db5b8c698-t28fw" Jul 7 01:44:13.030471 kubelet[2614]: I0707 01:44:13.030473 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0f0c8155-52f4-45a5-a5ec-d084783e61eb-typha-certs\") pod \"calico-typha-5db5b8c698-t28fw\" (UID: \"0f0c8155-52f4-45a5-a5ec-d084783e61eb\") " pod="calico-system/calico-typha-5db5b8c698-t28fw" Jul 7 01:44:13.032627 kubelet[2614]: I0707 01:44:13.030509 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f0c8155-52f4-45a5-a5ec-d084783e61eb-tigera-ca-bundle\") pod \"calico-typha-5db5b8c698-t28fw\" (UID: \"0f0c8155-52f4-45a5-a5ec-d084783e61eb\") " pod="calico-system/calico-typha-5db5b8c698-t28fw" Jul 7 01:44:13.273033 containerd[1463]: time="2025-07-07T01:44:13.272897608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5db5b8c698-t28fw,Uid:0f0c8155-52f4-45a5-a5ec-d084783e61eb,Namespace:calico-system,Attempt:0,}" Jul 7 01:44:13.286915 systemd[1]: Created slice kubepods-besteffort-pod95af676c_45e7_4111_83a2_4b71bd32b6ab.slice - libcontainer container kubepods-besteffort-pod95af676c_45e7_4111_83a2_4b71bd32b6ab.slice. Jul 7 01:44:13.335346 kubelet[2614]: I0707 01:44:13.335262 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/95af676c-45e7-4111-83a2-4b71bd32b6ab-var-lib-calico\") pod \"calico-node-t72qh\" (UID: \"95af676c-45e7-4111-83a2-4b71bd32b6ab\") " pod="calico-system/calico-node-t72qh" Jul 7 01:44:13.335346 kubelet[2614]: I0707 01:44:13.335344 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/95af676c-45e7-4111-83a2-4b71bd32b6ab-var-run-calico\") pod \"calico-node-t72qh\" (UID: \"95af676c-45e7-4111-83a2-4b71bd32b6ab\") " pod="calico-system/calico-node-t72qh" Jul 7 01:44:13.335561 kubelet[2614]: I0707 01:44:13.335369 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/95af676c-45e7-4111-83a2-4b71bd32b6ab-xtables-lock\") pod \"calico-node-t72qh\" (UID: \"95af676c-45e7-4111-83a2-4b71bd32b6ab\") " pod="calico-system/calico-node-t72qh" Jul 7 01:44:13.335561 kubelet[2614]: I0707 01:44:13.335393 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95af676c-45e7-4111-83a2-4b71bd32b6ab-tigera-ca-bundle\") pod \"calico-node-t72qh\" (UID: \"95af676c-45e7-4111-83a2-4b71bd32b6ab\") " pod="calico-system/calico-node-t72qh" Jul 7 01:44:13.335561 kubelet[2614]: I0707 01:44:13.335429 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/95af676c-45e7-4111-83a2-4b71bd32b6ab-flexvol-driver-host\") pod \"calico-node-t72qh\" (UID: 
\"95af676c-45e7-4111-83a2-4b71bd32b6ab\") " pod="calico-system/calico-node-t72qh" Jul 7 01:44:13.335561 kubelet[2614]: I0707 01:44:13.335464 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njhcc\" (UniqueName: \"kubernetes.io/projected/95af676c-45e7-4111-83a2-4b71bd32b6ab-kube-api-access-njhcc\") pod \"calico-node-t72qh\" (UID: \"95af676c-45e7-4111-83a2-4b71bd32b6ab\") " pod="calico-system/calico-node-t72qh" Jul 7 01:44:13.335561 kubelet[2614]: I0707 01:44:13.335487 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/95af676c-45e7-4111-83a2-4b71bd32b6ab-cni-bin-dir\") pod \"calico-node-t72qh\" (UID: \"95af676c-45e7-4111-83a2-4b71bd32b6ab\") " pod="calico-system/calico-node-t72qh" Jul 7 01:44:13.335751 kubelet[2614]: I0707 01:44:13.335511 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/95af676c-45e7-4111-83a2-4b71bd32b6ab-cni-net-dir\") pod \"calico-node-t72qh\" (UID: \"95af676c-45e7-4111-83a2-4b71bd32b6ab\") " pod="calico-system/calico-node-t72qh" Jul 7 01:44:13.335751 kubelet[2614]: I0707 01:44:13.335542 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/95af676c-45e7-4111-83a2-4b71bd32b6ab-node-certs\") pod \"calico-node-t72qh\" (UID: \"95af676c-45e7-4111-83a2-4b71bd32b6ab\") " pod="calico-system/calico-node-t72qh" Jul 7 01:44:13.335751 kubelet[2614]: I0707 01:44:13.335562 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/95af676c-45e7-4111-83a2-4b71bd32b6ab-cni-log-dir\") pod \"calico-node-t72qh\" (UID: \"95af676c-45e7-4111-83a2-4b71bd32b6ab\") " pod="calico-system/calico-node-t72qh" Jul 7 01:44:13.335751 kubelet[2614]: I0707 01:44:13.335580 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/95af676c-45e7-4111-83a2-4b71bd32b6ab-lib-modules\") pod \"calico-node-t72qh\" (UID: \"95af676c-45e7-4111-83a2-4b71bd32b6ab\") " pod="calico-system/calico-node-t72qh" Jul 7 01:44:13.335751 kubelet[2614]: I0707 01:44:13.335598 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/95af676c-45e7-4111-83a2-4b71bd32b6ab-policysync\") pod \"calico-node-t72qh\" (UID: \"95af676c-45e7-4111-83a2-4b71bd32b6ab\") " pod="calico-system/calico-node-t72qh" Jul 7 01:44:13.346437 containerd[1463]: time="2025-07-07T01:44:13.345657266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:44:13.346437 containerd[1463]: time="2025-07-07T01:44:13.345764547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:44:13.346437 containerd[1463]: time="2025-07-07T01:44:13.345789254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:44:13.347664 containerd[1463]: time="2025-07-07T01:44:13.347436815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:44:13.395538 systemd[1]: Started cri-containerd-068fefd4aac63ce73155154c2124f84f84106eb5fbb02f4229fbaf1045313b58.scope - libcontainer container 068fefd4aac63ce73155154c2124f84f84106eb5fbb02f4229fbaf1045313b58. Jul 7 01:44:13.435849 kubelet[2614]: E0707 01:44:13.435592 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lcqqq" podUID="9914e783-0422-4ffd-98e7-e3799124405f" Jul 7 01:44:13.443568 kubelet[2614]: E0707 01:44:13.443202 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.443568 kubelet[2614]: W0707 01:44:13.443255 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.444126 kubelet[2614]: E0707 01:44:13.443979 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.448700 kubelet[2614]: E0707 01:44:13.448538 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.449037 kubelet[2614]: W0707 01:44:13.448787 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.449037 kubelet[2614]: E0707 01:44:13.448813 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.464140 kubelet[2614]: E0707 01:44:13.463836 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.464140 kubelet[2614]: W0707 01:44:13.463867 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.464140 kubelet[2614]: E0707 01:44:13.463887 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:44:13.517787 containerd[1463]: time="2025-07-07T01:44:13.517710464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5db5b8c698-t28fw,Uid:0f0c8155-52f4-45a5-a5ec-d084783e61eb,Namespace:calico-system,Attempt:0,} returns sandbox id \"068fefd4aac63ce73155154c2124f84f84106eb5fbb02f4229fbaf1045313b58\"" Jul 7 01:44:13.520067 kubelet[2614]: E0707 01:44:13.520041 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.523564 kubelet[2614]: W0707 01:44:13.523467 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.523789 kubelet[2614]: E0707 01:44:13.523694 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.524334 kubelet[2614]: E0707 01:44:13.524156 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.524334 kubelet[2614]: W0707 01:44:13.524171 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.524334 kubelet[2614]: E0707 01:44:13.524198 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.524689 containerd[1463]: time="2025-07-07T01:44:13.524652205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 7 01:44:13.525214 kubelet[2614]: E0707 01:44:13.524964 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.525214 kubelet[2614]: W0707 01:44:13.524978 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.525214 kubelet[2614]: E0707 01:44:13.524990 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.525697 kubelet[2614]: E0707 01:44:13.525673 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.526706 kubelet[2614]: W0707 01:44:13.525851 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.526706 kubelet[2614]: E0707 01:44:13.525868 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:44:13.528218 kubelet[2614]: E0707 01:44:13.527756 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.528218 kubelet[2614]: W0707 01:44:13.527795 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.528218 kubelet[2614]: E0707 01:44:13.527813 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.530584 kubelet[2614]: E0707 01:44:13.529417 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.530584 kubelet[2614]: W0707 01:44:13.529458 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.530584 kubelet[2614]: E0707 01:44:13.529474 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.530584 kubelet[2614]: E0707 01:44:13.530431 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.530584 kubelet[2614]: W0707 01:44:13.530446 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.530584 kubelet[2614]: E0707 01:44:13.530470 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.532947 kubelet[2614]: E0707 01:44:13.531336 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.532947 kubelet[2614]: W0707 01:44:13.531727 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.532947 kubelet[2614]: E0707 01:44:13.531740 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.533614 kubelet[2614]: E0707 01:44:13.533351 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.533614 kubelet[2614]: W0707 01:44:13.533380 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.533614 kubelet[2614]: E0707 01:44:13.533398 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:44:13.533953 kubelet[2614]: E0707 01:44:13.533684 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.533953 kubelet[2614]: W0707 01:44:13.533695 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.533953 kubelet[2614]: E0707 01:44:13.533706 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.534410 kubelet[2614]: E0707 01:44:13.534189 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.534410 kubelet[2614]: W0707 01:44:13.534203 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.534410 kubelet[2614]: E0707 01:44:13.534213 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.534762 kubelet[2614]: E0707 01:44:13.534700 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.534762 kubelet[2614]: W0707 01:44:13.534712 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.535584 kubelet[2614]: E0707 01:44:13.535026 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.537150 kubelet[2614]: E0707 01:44:13.537117 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.537301 kubelet[2614]: W0707 01:44:13.537261 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.537456 kubelet[2614]: E0707 01:44:13.537359 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.537769 kubelet[2614]: E0707 01:44:13.537721 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.537769 kubelet[2614]: W0707 01:44:13.537735 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.538106 kubelet[2614]: E0707 01:44:13.537747 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:44:13.538671 kubelet[2614]: E0707 01:44:13.538374 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.538671 kubelet[2614]: W0707 01:44:13.538387 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.538671 kubelet[2614]: E0707 01:44:13.538398 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.539090 kubelet[2614]: E0707 01:44:13.539053 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.539090 kubelet[2614]: W0707 01:44:13.539066 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.539369 kubelet[2614]: E0707 01:44:13.539207 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.539591 kubelet[2614]: E0707 01:44:13.539554 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.539591 kubelet[2614]: W0707 01:44:13.539566 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.539822 kubelet[2614]: E0707 01:44:13.539695 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.540146 kubelet[2614]: E0707 01:44:13.540027 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.540146 kubelet[2614]: W0707 01:44:13.540039 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.540146 kubelet[2614]: E0707 01:44:13.540049 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.540834 kubelet[2614]: E0707 01:44:13.540820 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.540974 kubelet[2614]: W0707 01:44:13.540912 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.540974 kubelet[2614]: E0707 01:44:13.540929 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:44:13.541618 kubelet[2614]: E0707 01:44:13.541333 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.541618 kubelet[2614]: W0707 01:44:13.541347 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.541618 kubelet[2614]: E0707 01:44:13.541358 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.542180 kubelet[2614]: E0707 01:44:13.542165 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.542901 kubelet[2614]: W0707 01:44:13.542738 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.542901 kubelet[2614]: E0707 01:44:13.542766 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.542901 kubelet[2614]: I0707 01:44:13.542805 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9914e783-0422-4ffd-98e7-e3799124405f-kubelet-dir\") pod \"csi-node-driver-lcqqq\" (UID: \"9914e783-0422-4ffd-98e7-e3799124405f\") " pod="calico-system/csi-node-driver-lcqqq" Jul 7 01:44:13.543427 kubelet[2614]: E0707 01:44:13.543222 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.543427 kubelet[2614]: W0707 01:44:13.543240 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.543427 kubelet[2614]: E0707 01:44:13.543259 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.543427 kubelet[2614]: I0707 01:44:13.543280 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9914e783-0422-4ffd-98e7-e3799124405f-registration-dir\") pod \"csi-node-driver-lcqqq\" (UID: \"9914e783-0422-4ffd-98e7-e3799124405f\") " pod="calico-system/csi-node-driver-lcqqq" Jul 7 01:44:13.543942 kubelet[2614]: E0707 01:44:13.543729 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.543942 kubelet[2614]: W0707 01:44:13.543745 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.543942 kubelet[2614]: E0707 01:44:13.543772 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:44:13.543942 kubelet[2614]: I0707 01:44:13.543794 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9914e783-0422-4ffd-98e7-e3799124405f-socket-dir\") pod \"csi-node-driver-lcqqq\" (UID: \"9914e783-0422-4ffd-98e7-e3799124405f\") " pod="calico-system/csi-node-driver-lcqqq" Jul 7 01:44:13.544692 kubelet[2614]: E0707 01:44:13.544514 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.544692 kubelet[2614]: W0707 01:44:13.544530 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.545040 kubelet[2614]: E0707 01:44:13.544832 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.545111 kubelet[2614]: I0707 01:44:13.545066 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9914e783-0422-4ffd-98e7-e3799124405f-varrun\") pod \"csi-node-driver-lcqqq\" (UID: \"9914e783-0422-4ffd-98e7-e3799124405f\") " pod="calico-system/csi-node-driver-lcqqq" Jul 7 01:44:13.545185 kubelet[2614]: E0707 01:44:13.545010 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.545221 kubelet[2614]: W0707 01:44:13.545191 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.545427 kubelet[2614]: E0707 01:44:13.545386 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.545986 kubelet[2614]: E0707 01:44:13.545962 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.546149 kubelet[2614]: W0707 01:44:13.545979 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.546149 kubelet[2614]: E0707 01:44:13.546035 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.546939 kubelet[2614]: E0707 01:44:13.546823 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.546939 kubelet[2614]: W0707 01:44:13.546838 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.547214 kubelet[2614]: E0707 01:44:13.547057 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:44:13.547388 kubelet[2614]: E0707 01:44:13.547324 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.547388 kubelet[2614]: W0707 01:44:13.547338 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.547580 kubelet[2614]: E0707 01:44:13.547492 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.547580 kubelet[2614]: I0707 01:44:13.547526 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wbpl\" (UniqueName: \"kubernetes.io/projected/9914e783-0422-4ffd-98e7-e3799124405f-kube-api-access-5wbpl\") pod \"csi-node-driver-lcqqq\" (UID: \"9914e783-0422-4ffd-98e7-e3799124405f\") " pod="calico-system/csi-node-driver-lcqqq" Jul 7 01:44:13.547900 kubelet[2614]: E0707 01:44:13.547831 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.547900 kubelet[2614]: W0707 01:44:13.547843 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.548232 kubelet[2614]: E0707 01:44:13.548072 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.548459 kubelet[2614]: E0707 01:44:13.548366 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.548459 kubelet[2614]: W0707 01:44:13.548380 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.548459 kubelet[2614]: E0707 01:44:13.548391 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.549615 kubelet[2614]: E0707 01:44:13.549345 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.549615 kubelet[2614]: W0707 01:44:13.549359 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.549615 kubelet[2614]: E0707 01:44:13.549378 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:44:13.550091 kubelet[2614]: E0707 01:44:13.549934 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.550091 kubelet[2614]: W0707 01:44:13.549946 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.550091 kubelet[2614]: E0707 01:44:13.549957 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.550609 kubelet[2614]: E0707 01:44:13.550515 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.550609 kubelet[2614]: W0707 01:44:13.550531 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.550609 kubelet[2614]: E0707 01:44:13.550551 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.551059 kubelet[2614]: E0707 01:44:13.550955 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.551059 kubelet[2614]: W0707 01:44:13.550967 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.551059 kubelet[2614]: E0707 01:44:13.550978 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.551530 kubelet[2614]: E0707 01:44:13.551470 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.551530 kubelet[2614]: W0707 01:44:13.551483 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.551530 kubelet[2614]: E0707 01:44:13.551493 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:44:13.596002 containerd[1463]: time="2025-07-07T01:44:13.595546524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t72qh,Uid:95af676c-45e7-4111-83a2-4b71bd32b6ab,Namespace:calico-system,Attempt:0,}" Jul 7 01:44:13.651119 kubelet[2614]: E0707 01:44:13.651088 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.651554 kubelet[2614]: W0707 01:44:13.651390 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.651554 kubelet[2614]: E0707 01:44:13.651420 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.651954 kubelet[2614]: E0707 01:44:13.651861 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.651954 kubelet[2614]: W0707 01:44:13.651875 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.651954 kubelet[2614]: E0707 01:44:13.651896 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.652522 kubelet[2614]: E0707 01:44:13.652376 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.652522 kubelet[2614]: W0707 01:44:13.652390 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.652522 kubelet[2614]: E0707 01:44:13.652407 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.652945 kubelet[2614]: E0707 01:44:13.652843 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.652945 kubelet[2614]: W0707 01:44:13.652857 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.652945 kubelet[2614]: E0707 01:44:13.652879 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:44:13.653490 kubelet[2614]: E0707 01:44:13.653420 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.653490 kubelet[2614]: W0707 01:44:13.653434 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.653609 kubelet[2614]: E0707 01:44:13.653503 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.654015 kubelet[2614]: E0707 01:44:13.653858 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.654015 kubelet[2614]: W0707 01:44:13.653871 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.654015 kubelet[2614]: E0707 01:44:13.653985 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.654420 kubelet[2614]: E0707 01:44:13.654165 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.654420 kubelet[2614]: W0707 01:44:13.654176 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.654420 kubelet[2614]: E0707 01:44:13.654227 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.654694 kubelet[2614]: E0707 01:44:13.654625 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.654694 kubelet[2614]: W0707 01:44:13.654638 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.654834 kubelet[2614]: E0707 01:44:13.654688 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:44:13.655413 kubelet[2614]: E0707 01:44:13.655212 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:44:13.655413 kubelet[2614]: W0707 01:44:13.655226 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:44:13.655413 kubelet[2614]: E0707 01:44:13.655358 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 7 01:44:13.655685 kubelet[2614]: E0707 01:44:13.655670 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 01:44:13.655780 kubelet[2614]: W0707 01:44:13.655758 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 01:44:13.656025 kubelet[2614]: E0707 01:44:13.655905 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 01:44:13.709002 containerd[1463]: time="2025-07-07T01:44:13.708885789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 01:44:13.709303 containerd[1463]: time="2025-07-07T01:44:13.709235144Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 01:44:13.709466 containerd[1463]: time="2025-07-07T01:44:13.709436301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 01:44:13.712381 containerd[1463]: time="2025-07-07T01:44:13.711724905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 01:44:13.744584 systemd[1]: Started cri-containerd-d809bcc37edaa2bcdc1710c45821be1f7dae6da90e708e485d810a13e98da3ad.scope - libcontainer container d809bcc37edaa2bcdc1710c45821be1f7dae6da90e708e485d810a13e98da3ad.
Jul 7 01:44:13.800625 containerd[1463]: time="2025-07-07T01:44:13.800409304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t72qh,Uid:95af676c-45e7-4111-83a2-4b71bd32b6ab,Namespace:calico-system,Attempt:0,} returns sandbox id \"d809bcc37edaa2bcdc1710c45821be1f7dae6da90e708e485d810a13e98da3ad\""
Jul 7 01:44:14.162939 systemd[1]: run-containerd-runc-k8s.io-068fefd4aac63ce73155154c2124f84f84106eb5fbb02f4229fbaf1045313b58-runc.Bdhmna.mount: Deactivated successfully.
Jul 7 01:44:14.821459 kubelet[2614]: E0707 01:44:14.819903 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lcqqq" podUID="9914e783-0422-4ffd-98e7-e3799124405f"
Jul 7 01:44:15.625343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2514842369.mount: Deactivated successfully.
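The repeating kubelet burst above is a FlexVolume probe failure: the driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist, so the `init` call produces no stdout, and decoding that empty output is what yields the driver-call.go:262 error. A minimal Go sketch of the decode step (illustrative only, not the kubelet's actual code) reproduces the exact error string:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// The missing driver produced no stdout, so the kubelet effectively
	// decodes an empty byte slice; encoding/json reports the same
	// "unexpected end of JSON input" seen in the driver-call.go:262 lines.
	var status map[string]interface{}
	err := json.Unmarshal([]byte(""), &status)
	fmt.Println(err) // unexpected end of JSON input
}
```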
Jul 7 01:44:16.821534 kubelet[2614]: E0707 01:44:16.820888 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lcqqq" podUID="9914e783-0422-4ffd-98e7-e3799124405f"
Jul 7 01:44:17.093234 containerd[1463]: time="2025-07-07T01:44:17.092898521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:44:17.095189 containerd[1463]: time="2025-07-07T01:44:17.094932866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Jul 7 01:44:17.096855 containerd[1463]: time="2025-07-07T01:44:17.096808705Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:44:17.102716 containerd[1463]: time="2025-07-07T01:44:17.102627690Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:44:17.103961 containerd[1463]: time="2025-07-07T01:44:17.103920274Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 3.578979448s"
Jul 7 01:44:17.104015 containerd[1463]: time="2025-07-07T01:44:17.103974906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Jul 7 01:44:17.108154 containerd[1463]: time="2025-07-07T01:44:17.106900464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 7 01:44:17.129846 containerd[1463]: time="2025-07-07T01:44:17.129768387Z" level=info msg="CreateContainer within sandbox \"068fefd4aac63ce73155154c2124f84f84106eb5fbb02f4229fbaf1045313b58\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 7 01:44:17.160961 containerd[1463]: time="2025-07-07T01:44:17.160891765Z" level=info msg="CreateContainer within sandbox \"068fefd4aac63ce73155154c2124f84f84106eb5fbb02f4229fbaf1045313b58\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7b4ff38115a52c33490aa0325a2a3e562726294735b7b0725cd5309d4000e41e\""
Jul 7 01:44:17.161941 containerd[1463]: time="2025-07-07T01:44:17.161912509Z" level=info msg="StartContainer for \"7b4ff38115a52c33490aa0325a2a3e562726294735b7b0725cd5309d4000e41e\""
Jul 7 01:44:17.203325 systemd[1]: Started cri-containerd-7b4ff38115a52c33490aa0325a2a3e562726294735b7b0725cd5309d4000e41e.scope - libcontainer container 7b4ff38115a52c33490aa0325a2a3e562726294735b7b0725cd5309d4000e41e.
Jul 7 01:44:17.265039 containerd[1463]: time="2025-07-07T01:44:17.264971054Z" level=info msg="StartContainer for \"7b4ff38115a52c33490aa0325a2a3e562726294735b7b0725cd5309d4000e41e\" returns successfully"
Jul 7 01:44:17.977137 kubelet[2614]: E0707 01:44:17.977069 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 01:44:17.977137 kubelet[2614]: W0707 01:44:17.977105 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 01:44:17.977137 kubelet[2614]: E0707 01:44:17.977154 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 01:44:17.994314 kubelet[2614]: I0707 01:44:17.993529 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5db5b8c698-t28fw" podStartSLOduration=2.411007739 podStartE2EDuration="5.993484026s" podCreationTimestamp="2025-07-07 01:44:12 +0000 UTC" firstStartedPulling="2025-07-07 01:44:13.523026214 +0000 UTC m=+19.860017448" lastFinishedPulling="2025-07-07 01:44:17.105502501 +0000 UTC m=+23.442493735" observedRunningTime="2025-07-07 01:44:17.992753977 +0000 UTC m=+24.329745241" watchObservedRunningTime="2025-07-07 01:44:17.993484026 +0000 UTC m=+24.330475300"
Jul 7 01:44:18.820851 kubelet[2614]: E0707 01:44:18.820590 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lcqqq" podUID="9914e783-0422-4ffd-98e7-e3799124405f"
Jul 7 01:44:18.968839 kubelet[2614]: I0707 01:44:18.967866 2614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 01:44:18.993392 kubelet[2614]: E0707 01:44:18.993006 2614 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 01:44:18.993392 kubelet[2614]: W0707 01:44:18.993048 2614 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 01:44:18.993392 kubelet[2614]: E0707 01:44:18.993083 2614 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 01:44:19.258063 containerd[1463]: time="2025-07-07T01:44:19.257992962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:44:19.259846 containerd[1463]: time="2025-07-07T01:44:19.259803868Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956"
Jul 7 01:44:19.261138 containerd[1463]: time="2025-07-07T01:44:19.261104047Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:44:19.264111 containerd[1463]: time="2025-07-07T01:44:19.264041086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:44:19.264915 containerd[1463]: time="2025-07-07T01:44:19.264868087Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 2.15791304s"
Jul 7 01:44:19.264915 containerd[1463]: time="2025-07-07T01:44:19.264909695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\""
Jul 7 01:44:19.268120 containerd[1463]: time="2025-07-07T01:44:19.268073981Z" level=info msg="CreateContainer within sandbox \"d809bcc37edaa2bcdc1710c45821be1f7dae6da90e708e485d810a13e98da3ad\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jul 7 01:44:19.290828 containerd[1463]: time="2025-07-07T01:44:19.290758699Z" level=info msg="CreateContainer within sandbox \"d809bcc37edaa2bcdc1710c45821be1f7dae6da90e708e485d810a13e98da3ad\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c9b8debff8fa7c6fac2020f037f4271cc1d3a2715d0c33da2cf60de34b7551f1\""
Jul 7 01:44:19.294015 containerd[1463]: time="2025-07-07T01:44:19.292996287Z" level=info msg="StartContainer for \"c9b8debff8fa7c6fac2020f037f4271cc1d3a2715d0c33da2cf60de34b7551f1\""
Jul 7 01:44:19.342581 systemd[1]: Started cri-containerd-c9b8debff8fa7c6fac2020f037f4271cc1d3a2715d0c33da2cf60de34b7551f1.scope - libcontainer container c9b8debff8fa7c6fac2020f037f4271cc1d3a2715d0c33da2cf60de34b7551f1.
Jul 7 01:44:19.385105 containerd[1463]: time="2025-07-07T01:44:19.385040581Z" level=info msg="StartContainer for \"c9b8debff8fa7c6fac2020f037f4271cc1d3a2715d0c33da2cf60de34b7551f1\" returns successfully"
Jul 7 01:44:19.402987 systemd[1]: cri-containerd-c9b8debff8fa7c6fac2020f037f4271cc1d3a2715d0c33da2cf60de34b7551f1.scope: Deactivated successfully.
Jul 7 01:44:19.430798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9b8debff8fa7c6fac2020f037f4271cc1d3a2715d0c33da2cf60de34b7551f1-rootfs.mount: Deactivated successfully.
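The flexvol-driver init container started above (from the pod2daemon-flexvol image) is what normally installs the missing driver. For context, a FlexVolume driver is just an executable invoked as `<driver> <operation> [args...]` that answers `init` with a JSON status on stdout; the sketch below is a hypothetical stand-in following that convention, not Calico's actual driver:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// "init" is the call the kubelet's probe above kept failing on;
	// a conforming driver prints a status object and exits zero.
	if len(os.Args) > 1 && os.Args[1] == "init" {
		fmt.Println(`{"status": "Success", "capabilities": {"attach": false}}`)
		return
	}
	fmt.Println(`{"status": "Not supported"}`)
	os.Exit(1)
}
```

Once a binary like this exists at the probed path, the "unexpected end of JSON input" probe errors should stop.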
Jul 7 01:44:20.117450 containerd[1463]: time="2025-07-07T01:44:20.116492998Z" level=info msg="shim disconnected" id=c9b8debff8fa7c6fac2020f037f4271cc1d3a2715d0c33da2cf60de34b7551f1 namespace=k8s.io
Jul 7 01:44:20.117450 containerd[1463]: time="2025-07-07T01:44:20.116767433Z" level=warning msg="cleaning up after shim disconnected" id=c9b8debff8fa7c6fac2020f037f4271cc1d3a2715d0c33da2cf60de34b7551f1 namespace=k8s.io
Jul 7 01:44:20.117450 containerd[1463]: time="2025-07-07T01:44:20.116848335Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 01:44:20.820415 kubelet[2614]: E0707 01:44:20.820259 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lcqqq" podUID="9914e783-0422-4ffd-98e7-e3799124405f"
Jul 7 01:44:20.991563 containerd[1463]: time="2025-07-07T01:44:20.991436490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Jul 7 01:44:22.822577 kubelet[2614]: E0707 01:44:22.821098 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lcqqq" podUID="9914e783-0422-4ffd-98e7-e3799124405f"
Jul 7 01:44:24.821818 kubelet[2614]: E0707 01:44:24.821642 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lcqqq" podUID="9914e783-0422-4ffd-98e7-e3799124405f"
Jul 7 01:44:26.378720 containerd[1463]: time="2025-07-07T01:44:26.378513349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:44:26.380250 containerd[1463]: time="2025-07-07T01:44:26.379923624Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221"
Jul 7 01:44:26.382142 containerd[1463]: time="2025-07-07T01:44:26.381728670Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:44:26.385141 containerd[1463]: time="2025-07-07T01:44:26.385080967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:44:26.386315 containerd[1463]: time="2025-07-07T01:44:26.386258426Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 5.394741595s"
Jul 7 01:44:26.386375 containerd[1463]: time="2025-07-07T01:44:26.386330571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\""
Jul 7 01:44:26.390896 containerd[1463]: time="2025-07-07T01:44:26.390721316Z" level=info msg="CreateContainer within sandbox \"d809bcc37edaa2bcdc1710c45821be1f7dae6da90e708e485d810a13e98da3ad\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jul 7 01:44:26.420678 containerd[1463]: time="2025-07-07T01:44:26.420612871Z" level=info msg="CreateContainer within sandbox \"d809bcc37edaa2bcdc1710c45821be1f7dae6da90e708e485d810a13e98da3ad\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"361985f191af8e85d98de2d87b85f2ee22c6a6fd050e51375e122af573998c3d\""
Jul 7 01:44:26.422463 containerd[1463]: time="2025-07-07T01:44:26.421602377Z" level=info msg="StartContainer for \"361985f191af8e85d98de2d87b85f2ee22c6a6fd050e51375e122af573998c3d\""
Jul 7 01:44:26.491154 systemd[1]: run-containerd-runc-k8s.io-361985f191af8e85d98de2d87b85f2ee22c6a6fd050e51375e122af573998c3d-runc.H1HChs.mount: Deactivated successfully.
Jul 7 01:44:26.504724 systemd[1]: Started cri-containerd-361985f191af8e85d98de2d87b85f2ee22c6a6fd050e51375e122af573998c3d.scope - libcontainer container 361985f191af8e85d98de2d87b85f2ee22c6a6fd050e51375e122af573998c3d.
Jul 7 01:44:26.549336 containerd[1463]: time="2025-07-07T01:44:26.549156989Z" level=info msg="StartContainer for \"361985f191af8e85d98de2d87b85f2ee22c6a6fd050e51375e122af573998c3d\" returns successfully"
Jul 7 01:44:26.822763 kubelet[2614]: E0707 01:44:26.822608 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lcqqq" podUID="9914e783-0422-4ffd-98e7-e3799124405f"
Jul 7 01:44:28.225078 systemd[1]: cri-containerd-361985f191af8e85d98de2d87b85f2ee22c6a6fd050e51375e122af573998c3d.scope: Deactivated successfully.
Jul 7 01:44:28.226476 systemd[1]: cri-containerd-361985f191af8e85d98de2d87b85f2ee22c6a6fd050e51375e122af573998c3d.scope: Consumed 1.107s CPU time.
Jul 7 01:44:28.268315 kubelet[2614]: I0707 01:44:28.265771 2614 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 7 01:44:28.334067 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-361985f191af8e85d98de2d87b85f2ee22c6a6fd050e51375e122af573998c3d-rootfs.mount: Deactivated successfully.
Jul 7 01:44:28.360130 systemd[1]: Created slice kubepods-burstable-pod67505d5f_ec1f_4b25_9868_2da79cc2efec.slice - libcontainer container kubepods-burstable-pod67505d5f_ec1f_4b25_9868_2da79cc2efec.slice.
Jul 7 01:44:28.375043 systemd[1]: Created slice kubepods-burstable-podf8bea45c_1889_4c85_82bc_48df27c16ca2.slice - libcontainer container kubepods-burstable-podf8bea45c_1889_4c85_82bc_48df27c16ca2.slice.
Jul 7 01:44:28.382716 systemd[1]: Created slice kubepods-besteffort-pod6e0044f3_eddc_404a_a8a5_e4a322e633c4.slice - libcontainer container kubepods-besteffort-pod6e0044f3_eddc_404a_a8a5_e4a322e633c4.slice.
Jul 7 01:44:28.389865 systemd[1]: Created slice kubepods-besteffort-pod32440117_c574_4788_812a_5d3b5496a9ed.slice - libcontainer container kubepods-besteffort-pod32440117_c574_4788_812a_5d3b5496a9ed.slice.
Jul 7 01:44:28.402738 systemd[1]: Created slice kubepods-besteffort-pod687776ff_b90b_4baa_af46_4023f495fb97.slice - libcontainer container kubepods-besteffort-pod687776ff_b90b_4baa_af46_4023f495fb97.slice.
Jul 7 01:44:28.409852 systemd[1]: Created slice kubepods-besteffort-podb7b0113c_461b_4b97_957e_70b2d17f2275.slice - libcontainer container kubepods-besteffort-podb7b0113c_461b_4b97_957e_70b2d17f2275.slice.
Jul 7 01:44:28.416590 systemd[1]: Created slice kubepods-besteffort-pod6e744049_be52_495d_b225_079659d54e9f.slice - libcontainer container kubepods-besteffort-pod6e744049_be52_495d_b225_079659d54e9f.slice.
Jul 7 01:44:28.663836 kubelet[2614]: I0707 01:44:28.403891 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/687776ff-b90b-4baa-af46-4023f495fb97-calico-apiserver-certs\") pod \"calico-apiserver-6fc4bb86bc-jljn9\" (UID: \"687776ff-b90b-4baa-af46-4023f495fb97\") " pod="calico-apiserver/calico-apiserver-6fc4bb86bc-jljn9"
Jul 7 01:44:28.663836 kubelet[2614]: I0707 01:44:28.404038 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/32440117-c574-4788-812a-5d3b5496a9ed-calico-apiserver-certs\") pod \"calico-apiserver-6fc4bb86bc-5q8k9\" (UID: \"32440117-c574-4788-812a-5d3b5496a9ed\") " pod="calico-apiserver/calico-apiserver-6fc4bb86bc-5q8k9"
Jul 7 01:44:28.663836 kubelet[2614]: I0707 01:44:28.404198 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8tlk\" (UniqueName: \"kubernetes.io/projected/687776ff-b90b-4baa-af46-4023f495fb97-kube-api-access-j8tlk\") pod \"calico-apiserver-6fc4bb86bc-jljn9\" (UID: \"687776ff-b90b-4baa-af46-4023f495fb97\") " pod="calico-apiserver/calico-apiserver-6fc4bb86bc-jljn9"
Jul 7 01:44:28.663836 kubelet[2614]: I0707 01:44:28.404236 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7b0113c-461b-4b97-957e-70b2d17f2275-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-q2hdx\" (UID: \"b7b0113c-461b-4b97-957e-70b2d17f2275\") " pod="calico-system/goldmane-768f4c5c69-q2hdx"
Jul 7 01:44:28.663836 kubelet[2614]: I0707 01:44:28.404427 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b7b0113c-461b-4b97-957e-70b2d17f2275-goldmane-key-pair\") pod \"goldmane-768f4c5c69-q2hdx\" (UID: \"b7b0113c-461b-4b97-957e-70b2d17f2275\") " pod="calico-system/goldmane-768f4c5c69-q2hdx"
Jul 7 01:44:28.671695 kubelet[2614]: I0707 01:44:28.404496 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67505d5f-ec1f-4b25-9868-2da79cc2efec-config-volume\") pod \"coredns-668d6bf9bc-ns8xt\" (UID: \"67505d5f-ec1f-4b25-9868-2da79cc2efec\") " pod="kube-system/coredns-668d6bf9bc-ns8xt"
Jul 7 01:44:28.671695 kubelet[2614]: I0707 01:44:28.404536 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e0044f3-eddc-404a-a8a5-e4a322e633c4-tigera-ca-bundle\") pod \"calico-kube-controllers-5677bcf49d-d72km\" (UID: \"6e0044f3-eddc-404a-a8a5-e4a322e633c4\") " pod="calico-system/calico-kube-controllers-5677bcf49d-d72km"
Jul 7 01:44:28.671695 kubelet[2614]: I0707 01:44:28.404580 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6e744049-be52-495d-b225-079659d54e9f-whisker-backend-key-pair\") pod \"whisker-7f84455d77-m9nc9\" (UID: \"6e744049-be52-495d-b225-079659d54e9f\") " pod="calico-system/whisker-7f84455d77-m9nc9"
Jul 7 01:44:28.671695 kubelet[2614]: I0707 01:44:28.404601 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8bea45c-1889-4c85-82bc-48df27c16ca2-config-volume\") pod \"coredns-668d6bf9bc-7th4f\" (UID: \"f8bea45c-1889-4c85-82bc-48df27c16ca2\") " pod="kube-system/coredns-668d6bf9bc-7th4f"
Jul 7 01:44:28.671695 kubelet[2614]: I0707 01:44:28.404649 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcp56\" (UniqueName: \"kubernetes.io/projected/32440117-c574-4788-812a-5d3b5496a9ed-kube-api-access-kcp56\") pod \"calico-apiserver-6fc4bb86bc-5q8k9\" (UID: \"32440117-c574-4788-812a-5d3b5496a9ed\") " pod="calico-apiserver/calico-apiserver-6fc4bb86bc-5q8k9"
Jul 7 01:44:28.672125 kubelet[2614]: I0707 01:44:28.404693 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq6pm\" (UniqueName: \"kubernetes.io/projected/6e0044f3-eddc-404a-a8a5-e4a322e633c4-kube-api-access-mq6pm\") pod \"calico-kube-controllers-5677bcf49d-d72km\" (UID: \"6e0044f3-eddc-404a-a8a5-e4a322e633c4\") " pod="calico-system/calico-kube-controllers-5677bcf49d-d72km"
Jul 7 01:44:28.672125 kubelet[2614]: I0707 01:44:28.404738 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7wv7\" (UniqueName: \"kubernetes.io/projected/67505d5f-ec1f-4b25-9868-2da79cc2efec-kube-api-access-g7wv7\") pod \"coredns-668d6bf9bc-ns8xt\" (UID: \"67505d5f-ec1f-4b25-9868-2da79cc2efec\") " pod="kube-system/coredns-668d6bf9bc-ns8xt"
Jul 7 01:44:28.672125 kubelet[2614]: I0707 01:44:28.404768 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e744049-be52-495d-b225-079659d54e9f-whisker-ca-bundle\") pod \"whisker-7f84455d77-m9nc9\" (UID: \"6e744049-be52-495d-b225-079659d54e9f\") " pod="calico-system/whisker-7f84455d77-m9nc9"
Jul 7 01:44:28.672125 kubelet[2614]: I0707 01:44:28.404790 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4wrx\" (UniqueName: \"kubernetes.io/projected/f8bea45c-1889-4c85-82bc-48df27c16ca2-kube-api-access-c4wrx\") pod \"coredns-668d6bf9bc-7th4f\" (UID: \"f8bea45c-1889-4c85-82bc-48df27c16ca2\") " pod="kube-system/coredns-668d6bf9bc-7th4f"
Jul 7 01:44:28.672125 kubelet[2614]: I0707 01:44:28.404843 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxw4m\" (UniqueName: \"kubernetes.io/projected/6e744049-be52-495d-b225-079659d54e9f-kube-api-access-qxw4m\") pod \"whisker-7f84455d77-m9nc9\" (UID: \"6e744049-be52-495d-b225-079659d54e9f\") " pod="calico-system/whisker-7f84455d77-m9nc9"
Jul 7 01:44:28.676311 kubelet[2614]: I0707 01:44:28.404876 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7b0113c-461b-4b97-957e-70b2d17f2275-config\") pod \"goldmane-768f4c5c69-q2hdx\" (UID: \"b7b0113c-461b-4b97-957e-70b2d17f2275\") " pod="calico-system/goldmane-768f4c5c69-q2hdx"
Jul 7 01:44:28.676311 kubelet[2614]: I0707 01:44:28.404922 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mvk2\" (UniqueName: \"kubernetes.io/projected/b7b0113c-461b-4b97-957e-70b2d17f2275-kube-api-access-4mvk2\") pod \"goldmane-768f4c5c69-q2hdx\" (UID: \"b7b0113c-461b-4b97-957e-70b2d17f2275\") " pod="calico-system/goldmane-768f4c5c69-q2hdx"
Jul 7 01:44:28.842955 systemd[1]: Created slice kubepods-besteffort-pod9914e783_0422_4ffd_98e7_e3799124405f.slice - libcontainer container kubepods-besteffort-pod9914e783_0422_4ffd_98e7_e3799124405f.slice.
Jul 7 01:44:28.855990 containerd[1463]: time="2025-07-07T01:44:28.855509353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lcqqq,Uid:9914e783-0422-4ffd-98e7-e3799124405f,Namespace:calico-system,Attempt:0,}"
Jul 7 01:44:28.964308 containerd[1463]: time="2025-07-07T01:44:28.963988029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ns8xt,Uid:67505d5f-ec1f-4b25-9868-2da79cc2efec,Namespace:kube-system,Attempt:0,}"
Jul 7 01:44:28.993071 containerd[1463]: time="2025-07-07T01:44:28.992727299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc4bb86bc-5q8k9,Uid:32440117-c574-4788-812a-5d3b5496a9ed,Namespace:calico-apiserver,Attempt:0,}"
Jul 7 01:44:28.995742 containerd[1463]: time="2025-07-07T01:44:28.994920390Z" level=info msg="shim disconnected" id=361985f191af8e85d98de2d87b85f2ee22c6a6fd050e51375e122af573998c3d namespace=k8s.io
Jul 7 01:44:28.995742 containerd[1463]: time="2025-07-07T01:44:28.995359173Z" level=warning msg="cleaning up after shim disconnected" id=361985f191af8e85d98de2d87b85f2ee22c6a6fd050e51375e122af573998c3d namespace=k8s.io
Jul 7 01:44:28.995742 containerd[1463]: time="2025-07-07T01:44:28.995375033Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 01:44:29.003199 containerd[1463]: time="2025-07-07T01:44:29.003134875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5677bcf49d-d72km,Uid:6e0044f3-eddc-404a-a8a5-e4a322e633c4,Namespace:calico-system,Attempt:0,}"
Jul 7 01:44:29.003945 containerd[1463]: time="2025-07-07T01:44:29.003753138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc4bb86bc-jljn9,Uid:687776ff-b90b-4baa-af46-4023f495fb97,Namespace:calico-apiserver,Attempt:0,}"
Jul 7 01:44:29.005185 containerd[1463]: time="2025-07-07T01:44:29.004519120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7th4f,Uid:f8bea45c-1889-4c85-82bc-48df27c16ca2,Namespace:kube-system,Attempt:0,}"
Jul 7 01:44:29.009311 containerd[1463]: time="2025-07-07T01:44:29.006739813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-q2hdx,Uid:b7b0113c-461b-4b97-957e-70b2d17f2275,Namespace:calico-system,Attempt:0,}"
Jul 7 01:44:29.010472 containerd[1463]: time="2025-07-07T01:44:29.010422478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f84455d77-m9nc9,Uid:6e744049-be52-495d-b225-079659d54e9f,Namespace:calico-system,Attempt:0,}"
Jul 7 01:44:29.040359 containerd[1463]: time="2025-07-07T01:44:29.040270814Z" level=warning msg="cleanup warnings time=\"2025-07-07T01:44:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 7 01:44:29.242527 containerd[1463]: time="2025-07-07T01:44:29.241818878Z" level=error msg="Failed to destroy network for sandbox \"185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.242527 containerd[1463]: time="2025-07-07T01:44:29.242302124Z" level=error msg="encountered an error cleaning up failed sandbox \"185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.242527 containerd[1463]: time="2025-07-07T01:44:29.242427482Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lcqqq,Uid:9914e783-0422-4ffd-98e7-e3799124405f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.248026 kubelet[2614]: E0707 01:44:29.246514 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.248026 kubelet[2614]: E0707 01:44:29.246710 2614 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lcqqq"
Jul 7 01:44:29.248026 kubelet[2614]: E0707 01:44:29.246798 2614 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lcqqq"
Jul 7 01:44:29.248272 kubelet[2614]: E0707 01:44:29.246893 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lcqqq_calico-system(9914e783-0422-4ffd-98e7-e3799124405f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lcqqq_calico-system(9914e783-0422-4ffd-98e7-e3799124405f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lcqqq" podUID="9914e783-0422-4ffd-98e7-e3799124405f"
Jul 7 01:44:29.293122 containerd[1463]: time="2025-07-07T01:44:29.293033861Z" level=error msg="Failed to destroy network for sandbox \"81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.294583 containerd[1463]: time="2025-07-07T01:44:29.294525691Z" level=error msg="encountered an error cleaning up failed sandbox \"81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.294707 containerd[1463]: time="2025-07-07T01:44:29.294613718Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc4bb86bc-5q8k9,Uid:32440117-c574-4788-812a-5d3b5496a9ed,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.295736 kubelet[2614]: E0707 01:44:29.294958 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.295736 kubelet[2614]: E0707 01:44:29.295063 2614 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fc4bb86bc-5q8k9"
Jul 7 01:44:29.295736 kubelet[2614]: E0707 01:44:29.295109 2614 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fc4bb86bc-5q8k9"
Jul 7 01:44:29.296139 kubelet[2614]: E0707 01:44:29.295527 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fc4bb86bc-5q8k9_calico-apiserver(32440117-c574-4788-812a-5d3b5496a9ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fc4bb86bc-5q8k9_calico-apiserver(32440117-c574-4788-812a-5d3b5496a9ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fc4bb86bc-5q8k9" podUID="32440117-c574-4788-812a-5d3b5496a9ed"
Jul 7 01:44:29.299406 containerd[1463]: time="2025-07-07T01:44:29.299156836Z" level=error msg="Failed to destroy network for sandbox \"b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.300193 containerd[1463]: time="2025-07-07T01:44:29.300164256Z" level=error msg="encountered an error cleaning up failed sandbox \"b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.300475 containerd[1463]: time="2025-07-07T01:44:29.300402999Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ns8xt,Uid:67505d5f-ec1f-4b25-9868-2da79cc2efec,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.300825 kubelet[2614]: E0707 01:44:29.300782 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.301248 kubelet[2614]: E0707 01:44:29.300957 2614 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ns8xt"
Jul 7 01:44:29.301248 kubelet[2614]: E0707 01:44:29.301122 2614 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ns8xt"
Jul 7 01:44:29.301248 kubelet[2614]: E0707 01:44:29.301195 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ns8xt_kube-system(67505d5f-ec1f-4b25-9868-2da79cc2efec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ns8xt_kube-system(67505d5f-ec1f-4b25-9868-2da79cc2efec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ns8xt" podUID="67505d5f-ec1f-4b25-9868-2da79cc2efec"
Jul 7 01:44:29.339658 containerd[1463]: time="2025-07-07T01:44:29.339391151Z" level=error msg="Failed to destroy network for sandbox \"1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.342997 containerd[1463]: time="2025-07-07T01:44:29.340027257Z" level=error msg="encountered an error cleaning up failed sandbox \"1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.342997 containerd[1463]: time="2025-07-07T01:44:29.340090578Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7th4f,Uid:f8bea45c-1889-4c85-82bc-48df27c16ca2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.343101 kubelet[2614]: E0707 01:44:29.342519 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.343101 kubelet[2614]: E0707 01:44:29.342615 2614 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7th4f"
Jul 7 01:44:29.343101 kubelet[2614]: E0707 01:44:29.342663 2614 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7th4f"
Jul 7 01:44:29.343473 kubelet[2614]: E0707 01:44:29.343394 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7th4f_kube-system(f8bea45c-1889-4c85-82bc-48df27c16ca2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7th4f_kube-system(f8bea45c-1889-4c85-82bc-48df27c16ca2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7th4f" podUID="f8bea45c-1889-4c85-82bc-48df27c16ca2"
Jul 7 01:44:29.350779 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e-shm.mount: Deactivated successfully.
Jul 7 01:44:29.357181 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb-shm.mount: Deactivated successfully.
Jul 7 01:44:29.423549 containerd[1463]: time="2025-07-07T01:44:29.423474309Z" level=error msg="Failed to destroy network for sandbox \"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.426414 containerd[1463]: time="2025-07-07T01:44:29.423883335Z" level=error msg="encountered an error cleaning up failed sandbox \"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.426414 containerd[1463]: time="2025-07-07T01:44:29.423971943Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5677bcf49d-d72km,Uid:6e0044f3-eddc-404a-a8a5-e4a322e633c4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.426547 kubelet[2614]: E0707 01:44:29.424367 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.427048 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c-shm.mount: Deactivated successfully.
Jul 7 01:44:29.427721 kubelet[2614]: E0707 01:44:29.426648 2614 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5677bcf49d-d72km"
Jul 7 01:44:29.427721 kubelet[2614]: E0707 01:44:29.426681 2614 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5677bcf49d-d72km"
Jul 7 01:44:29.428597 kubelet[2614]: E0707 01:44:29.426769 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5677bcf49d-d72km_calico-system(6e0044f3-eddc-404a-a8a5-e4a322e633c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5677bcf49d-d72km_calico-system(6e0044f3-eddc-404a-a8a5-e4a322e633c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5677bcf49d-d72km" podUID="6e0044f3-eddc-404a-a8a5-e4a322e633c4"
Jul 7 01:44:29.431044 containerd[1463]: time="2025-07-07T01:44:29.430980466Z" level=error msg="Failed to destroy network for sandbox \"66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.434424 containerd[1463]: time="2025-07-07T01:44:29.433426837Z" level=error msg="Failed to destroy network for sandbox \"9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.435657 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d-shm.mount: Deactivated successfully.
Jul 7 01:44:29.436036 containerd[1463]: time="2025-07-07T01:44:29.435910017Z" level=error msg="Failed to destroy network for sandbox \"17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.437033 containerd[1463]: time="2025-07-07T01:44:29.436787091Z" level=error msg="encountered an error cleaning up failed sandbox \"9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.437033 containerd[1463]: time="2025-07-07T01:44:29.436924461Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc4bb86bc-jljn9,Uid:687776ff-b90b-4baa-af46-4023f495fb97,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.440316 containerd[1463]: time="2025-07-07T01:44:29.439962042Z" level=error msg="encountered an error cleaning up failed sandbox \"17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.440316 containerd[1463]: time="2025-07-07T01:44:29.440105215Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f84455d77-m9nc9,Uid:6e744049-be52-495d-b225-079659d54e9f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.440421 kubelet[2614]: E0707 01:44:29.439366 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.440421 kubelet[2614]: E0707 01:44:29.439492 2614 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fc4bb86bc-jljn9"
Jul 7 01:44:29.440421 kubelet[2614]: E0707 01:44:29.439516 2614 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fc4bb86bc-jljn9"
Jul 7 01:44:29.440521 kubelet[2614]: E0707 01:44:29.439575 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fc4bb86bc-jljn9_calico-apiserver(687776ff-b90b-4baa-af46-4023f495fb97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fc4bb86bc-jljn9_calico-apiserver(687776ff-b90b-4baa-af46-4023f495fb97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fc4bb86bc-jljn9" podUID="687776ff-b90b-4baa-af46-4023f495fb97"
Jul 7 01:44:29.441790 containerd[1463]: time="2025-07-07T01:44:29.441154865Z" level=error msg="encountered an error cleaning up failed sandbox \"66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.441790 containerd[1463]: time="2025-07-07T01:44:29.441443653Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-q2hdx,Uid:b7b0113c-461b-4b97-957e-70b2d17f2275,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.442544 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1-shm.mount: Deactivated successfully.
Jul 7 01:44:29.442685 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16-shm.mount: Deactivated successfully.
Jul 7 01:44:29.443018 kubelet[2614]: E0707 01:44:29.441240 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.443018 kubelet[2614]: E0707 01:44:29.441621 2614 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f84455d77-m9nc9"
Jul 7 01:44:29.443018 kubelet[2614]: E0707 01:44:29.442444 2614 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f84455d77-m9nc9"
Jul 7 01:44:29.444853 kubelet[2614]: E0707 01:44:29.443422 2614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:29.444853 kubelet[2614]: E0707 01:44:29.443478 2614 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-q2hdx"
Jul 7 01:44:29.444853 kubelet[2614]: E0707 01:44:29.443512 2614 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-q2hdx"
Jul 7 01:44:29.446339 kubelet[2614]: E0707 01:44:29.445744 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-q2hdx_calico-system(b7b0113c-461b-4b97-957e-70b2d17f2275)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-q2hdx_calico-system(b7b0113c-461b-4b97-957e-70b2d17f2275)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-q2hdx" podUID="b7b0113c-461b-4b97-957e-70b2d17f2275"
Jul 7 01:44:29.448188 kubelet[2614]: E0707 01:44:29.446643 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7f84455d77-m9nc9_calico-system(6e744049-be52-495d-b225-079659d54e9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7f84455d77-m9nc9_calico-system(6e744049-be52-495d-b225-079659d54e9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f84455d77-m9nc9" podUID="6e744049-be52-495d-b225-079659d54e9f"
Jul 7 01:44:29.729988 kubelet[2614]: I0707 01:44:29.729850 2614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 01:44:30.032457 kubelet[2614]: I0707 01:44:30.030530 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c"
Jul 7 01:44:30.037632 containerd[1463]: time="2025-07-07T01:44:30.036349340Z" level=info msg="StopPodSandbox for \"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c\""
Jul 7 01:44:30.043227 containerd[1463]: time="2025-07-07T01:44:30.042161822Z" level=info msg="Ensure that sandbox f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c in task-service has been cleanup successfully"
Jul 7 01:44:30.053373 kubelet[2614]: I0707 01:44:30.052356 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1"
Jul 7 01:44:30.054619 containerd[1463]: time="2025-07-07T01:44:30.054537138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Jul 7 01:44:30.064337 containerd[1463]: time="2025-07-07T01:44:30.064119989Z" level=info msg="StopPodSandbox for \"17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1\""
Jul 7 01:44:30.067371 containerd[1463]: time="2025-07-07T01:44:30.066212907Z" level=info msg="Ensure that sandbox 17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1 in task-service has been cleanup successfully"
Jul 7 01:44:30.098544 kubelet[2614]: I0707 01:44:30.098490 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb"
Jul 7 01:44:30.100035 containerd[1463]: time="2025-07-07T01:44:30.099993292Z" level=info msg="StopPodSandbox for \"1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb\""
Jul 7 01:44:30.100841 containerd[1463]: time="2025-07-07T01:44:30.100793759Z" level=info msg="Ensure that sandbox 1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb in task-service has been cleanup successfully"
Jul 7 01:44:30.116460 kubelet[2614]: I0707 01:44:30.116259 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290"
Jul 7 01:44:30.120762 containerd[1463]: time="2025-07-07T01:44:30.120693093Z" level=info msg="StopPodSandbox for \"81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290\""
Jul 7 01:44:30.121245 containerd[1463]: time="2025-07-07T01:44:30.121216045Z" level=info msg="Ensure that sandbox 81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290 in task-service has been cleanup successfully"
Jul 7 01:44:30.126106 kubelet[2614]: I0707 01:44:30.126079 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d"
Jul 7 01:44:30.131391 containerd[1463]: time="2025-07-07T01:44:30.131205837Z" level=info msg="StopPodSandbox for \"66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d\""
Jul 7 01:44:30.132608 containerd[1463]: time="2025-07-07T01:44:30.132570875Z" level=info msg="Ensure that sandbox 66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d in task-service has been cleanup successfully"
Jul 7 01:44:30.143819 kubelet[2614]: I0707 01:44:30.142529 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16"
Jul 7 01:44:30.144245 containerd[1463]: time="2025-07-07T01:44:30.144210426Z" level=info msg="StopPodSandbox for \"9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16\""
Jul 7 01:44:30.145576 containerd[1463]: time="2025-07-07T01:44:30.145552641Z" level=info msg="Ensure that sandbox 9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16 in task-service has been cleanup successfully"
Jul 7 01:44:30.148148 kubelet[2614]: I0707 01:44:30.148120 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460"
Jul 7 01:44:30.151478 containerd[1463]: time="2025-07-07T01:44:30.151424005Z" level=info msg="StopPodSandbox for \"b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460\""
Jul 7 01:44:30.152917 containerd[1463]: time="2025-07-07T01:44:30.152888972Z" level=info msg="Ensure that sandbox b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460 in task-service has been cleanup successfully"
Jul 7 01:44:30.154926 kubelet[2614]: I0707 01:44:30.154345 2614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e"
Jul 7 01:44:30.158007 containerd[1463]: time="2025-07-07T01:44:30.157956712Z" level=info msg="StopPodSandbox for \"185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e\""
Jul 7 01:44:30.160097 containerd[1463]: time="2025-07-07T01:44:30.160062966Z" level=info msg="Ensure that sandbox 185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e in task-service has been cleanup successfully"
Jul 7 01:44:30.214824 containerd[1463]: time="2025-07-07T01:44:30.214740465Z" level=error msg="StopPodSandbox for \"17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1\" failed" error="failed to destroy network for sandbox \"17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:30.215445 kubelet[2614]: E0707 01:44:30.215356 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1"
Jul 7 01:44:30.215932 kubelet[2614]: E0707 01:44:30.215611 2614 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1"}
Jul 7 01:44:30.215932 kubelet[2614]: E0707 01:44:30.215762 2614 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6e744049-be52-495d-b225-079659d54e9f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul 7 01:44:30.215932 kubelet[2614]: E0707 01:44:30.215796 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6e744049-be52-495d-b225-079659d54e9f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f84455d77-m9nc9" podUID="6e744049-be52-495d-b225-079659d54e9f"
Jul 7 01:44:30.222630 containerd[1463]: time="2025-07-07T01:44:30.222507864Z" level=error msg="StopPodSandbox for \"1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb\" failed" error="failed to destroy network for sandbox \"1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:30.223317 kubelet[2614]: E0707 01:44:30.222855 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb"
Jul 7 01:44:30.223317 kubelet[2614]: E0707 01:44:30.222904 2614 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb"}
Jul 7 01:44:30.223317 kubelet[2614]: E0707 01:44:30.222945 2614 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f8bea45c-1889-4c85-82bc-48df27c16ca2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul 7 01:44:30.223317 kubelet[2614]: E0707 01:44:30.222972 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f8bea45c-1889-4c85-82bc-48df27c16ca2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7th4f" podUID="f8bea45c-1889-4c85-82bc-48df27c16ca2"
Jul 7 01:44:30.236239 containerd[1463]: time="2025-07-07T01:44:30.236120465Z" level=error msg="StopPodSandbox for \"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c\" failed" error="failed to destroy network for sandbox \"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:30.236943 kubelet[2614]: E0707 01:44:30.236656 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c"
Jul 7 01:44:30.237199 kubelet[2614]: E0707 01:44:30.236730 2614 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c"}
Jul 7 01:44:30.237199 kubelet[2614]: E0707 01:44:30.237122 2614 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6e0044f3-eddc-404a-a8a5-e4a322e633c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul 7 01:44:30.237199 kubelet[2614]: E0707 01:44:30.237159 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6e0044f3-eddc-404a-a8a5-e4a322e633c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5677bcf49d-d72km" podUID="6e0044f3-eddc-404a-a8a5-e4a322e633c4"
Jul 7 01:44:30.272860 containerd[1463]: time="2025-07-07T01:44:30.272753409Z" level=error msg="StopPodSandbox for \"81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290\" failed" error="failed to destroy network for sandbox \"81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:30.273138 containerd[1463]: time="2025-07-07T01:44:30.272949260Z" level=error msg="StopPodSandbox for \"66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d\" failed" error="failed to destroy network for sandbox \"66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:30.273183 kubelet[2614]: E0707 01:44:30.273090 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290"
Jul 7 01:44:30.273183 kubelet[2614]: E0707 01:44:30.273156 2614 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290"}
Jul 7 01:44:30.273299 kubelet[2614]: E0707 01:44:30.273211 2614 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"32440117-c574-4788-812a-5d3b5496a9ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul 7 01:44:30.274210 kubelet[2614]: E0707 01:44:30.273253 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"32440117-c574-4788-812a-5d3b5496a9ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fc4bb86bc-5q8k9" podUID="32440117-c574-4788-812a-5d3b5496a9ed"
Jul 7 01:44:30.275524 kubelet[2614]: E0707 01:44:30.275482 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d"
Jul 7 01:44:30.275850 kubelet[2614]: E0707 01:44:30.275804 2614 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d"}
Jul 7 01:44:30.275989 kubelet[2614]: E0707 01:44:30.275963 2614 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b7b0113c-461b-4b97-957e-70b2d17f2275\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul 7 01:44:30.276130 kubelet[2614]: E0707 01:44:30.276104 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b7b0113c-461b-4b97-957e-70b2d17f2275\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-q2hdx" podUID="b7b0113c-461b-4b97-957e-70b2d17f2275"
Jul 7 01:44:30.285643 containerd[1463]: time="2025-07-07T01:44:30.285440206Z" level=error msg="StopPodSandbox for \"b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460\" failed" error="failed to destroy network for sandbox \"b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 01:44:30.286014 kubelet[2614]: E0707 01:44:30.285965 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460"
Jul 7 01:44:30.286096 kubelet[2614]: E0707 01:44:30.286034 2614 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460"}
Jul 7 01:44:30.286096 kubelet[2614]: E0707 01:44:30.286087 2614 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"67505d5f-ec1f-4b25-9868-2da79cc2efec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul 7 01:44:30.286201 kubelet[2614]: E0707 01:44:30.286120 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"67505d5f-ec1f-4b25-9868-2da79cc2efec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox
\\\"b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ns8xt" podUID="67505d5f-ec1f-4b25-9868-2da79cc2efec" Jul 7 01:44:30.290137 containerd[1463]: time="2025-07-07T01:44:30.290011404Z" level=error msg="StopPodSandbox for \"185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e\" failed" error="failed to destroy network for sandbox \"185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:44:30.290612 kubelet[2614]: E0707 01:44:30.290352 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" Jul 7 01:44:30.290612 kubelet[2614]: E0707 01:44:30.290412 2614 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e"} Jul 7 01:44:30.290612 kubelet[2614]: E0707 01:44:30.290452 2614 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9914e783-0422-4ffd-98e7-e3799124405f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 01:44:30.290612 kubelet[2614]: E0707 01:44:30.290483 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9914e783-0422-4ffd-98e7-e3799124405f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lcqqq" podUID="9914e783-0422-4ffd-98e7-e3799124405f" Jul 7 01:44:30.296132 containerd[1463]: time="2025-07-07T01:44:30.296078649Z" level=error msg="StopPodSandbox for \"9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16\" failed" error="failed to destroy network for sandbox \"9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:44:30.296614 kubelet[2614]: E0707 01:44:30.296368 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" Jul 7 01:44:30.296614 kubelet[2614]: E0707 01:44:30.296437 2614 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16"} Jul 7 01:44:30.296614 kubelet[2614]: E0707 01:44:30.296486 2614 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"687776ff-b90b-4baa-af46-4023f495fb97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 01:44:30.296614 kubelet[2614]: E0707 01:44:30.296516 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"687776ff-b90b-4baa-af46-4023f495fb97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fc4bb86bc-jljn9" podUID="687776ff-b90b-4baa-af46-4023f495fb97" Jul 7 01:44:40.827128 containerd[1463]: time="2025-07-07T01:44:40.825640165Z" level=info msg="StopPodSandbox for \"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c\"" Jul 7 01:44:40.878885 containerd[1463]: time="2025-07-07T01:44:40.878815492Z" level=error msg="StopPodSandbox for \"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c\" failed" error="failed to destroy network for sandbox \"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:44:40.879944 kubelet[2614]: E0707 01:44:40.879098 2614 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c" Jul 7 01:44:40.879944 kubelet[2614]: E0707 01:44:40.879257 2614 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c"} Jul 7 01:44:40.879944 kubelet[2614]: E0707 01:44:40.879392 2614 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6e0044f3-eddc-404a-a8a5-e4a322e633c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 01:44:40.879944 kubelet[2614]: E0707 01:44:40.879426 2614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6e0044f3-eddc-404a-a8a5-e4a322e633c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5677bcf49d-d72km" podUID="6e0044f3-eddc-404a-a8a5-e4a322e633c4" Jul 7 01:44:41.241065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1831840614.mount: Deactivated successfully. Jul 7 01:44:41.276884 containerd[1463]: time="2025-07-07T01:44:41.275980006Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:44:41.278720 containerd[1463]: time="2025-07-07T01:44:41.278683337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 7 01:44:41.280831 containerd[1463]: time="2025-07-07T01:44:41.280801633Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:44:41.284028 containerd[1463]: time="2025-07-07T01:44:41.284002636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:44:41.285478 containerd[1463]: time="2025-07-07T01:44:41.285449371Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 11.230831791s" Jul 7 01:44:41.285711 containerd[1463]: time="2025-07-07T01:44:41.285567354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 7 01:44:41.335099 containerd[1463]: time="2025-07-07T01:44:41.334804352Z" level=info msg="CreateContainer within sandbox \"d809bcc37edaa2bcdc1710c45821be1f7dae6da90e708e485d810a13e98da3ad\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 01:44:41.376708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount416309107.mount: Deactivated successfully. 
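[Annotation] The burst of StopPodSandbox failures above all trace to one missing file: the Calico CNI plugin cannot resolve its node name until calico/node has started and written /var/lib/calico/nodename into the hostpath it shares with the plugin. Every delete at 01:44:30 and 01:44:40 therefore fails with the same stat error, kubelet records a KillPodSandboxError and retries on its sync loop ("Error syncing pod, skipping"), and once the calico-node container starts below (after the ~11.2 s image pull), teardowns begin succeeding at 01:44:42. A minimal Go sketch of that gate, with illustrative names rather than Calico's actual source:

// Sketch of the check behind the repeated failures above: calico/node
// writes the node's name to /var/lib/calico/nodename when it starts;
// until that file exists, CNI ADD/DEL on this host fails with the stat
// error seen in the log. Names here are illustrative.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

// detectNodename reads the file calico/node is expected to have written
// into the shared hostpath, and fails the way the log above shows.
func detectNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	if name, err := detectNodename(); err != nil {
		fmt.Println("CNI DEL fails:", err) // kubelet surfaces this as KillPodSandboxError and retries
	} else {
		fmt.Println("node name:", name) // after calico-node starts, teardown proceeds
	}
}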
Jul 7 01:44:41.385794 containerd[1463]: time="2025-07-07T01:44:41.385724933Z" level=info msg="CreateContainer within sandbox \"d809bcc37edaa2bcdc1710c45821be1f7dae6da90e708e485d810a13e98da3ad\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c27e75149cb408988323c668e00669d071634263d252ea2b886f738f98c1bfcb\"" Jul 7 01:44:41.387422 containerd[1463]: time="2025-07-07T01:44:41.387374773Z" level=info msg="StartContainer for \"c27e75149cb408988323c668e00669d071634263d252ea2b886f738f98c1bfcb\"" Jul 7 01:44:41.454571 systemd[1]: Started cri-containerd-c27e75149cb408988323c668e00669d071634263d252ea2b886f738f98c1bfcb.scope - libcontainer container c27e75149cb408988323c668e00669d071634263d252ea2b886f738f98c1bfcb. Jul 7 01:44:41.516127 containerd[1463]: time="2025-07-07T01:44:41.515637203Z" level=info msg="StartContainer for \"c27e75149cb408988323c668e00669d071634263d252ea2b886f738f98c1bfcb\" returns successfully" Jul 7 01:44:41.639440 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 01:44:41.640735 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jul 7 01:44:41.826426 containerd[1463]: time="2025-07-07T01:44:41.824704151Z" level=info msg="StopPodSandbox for \"66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d\"" Jul 7 01:44:41.842988 containerd[1463]: time="2025-07-07T01:44:41.842937146Z" level=info msg="StopPodSandbox for \"17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1\"" Jul 7 01:44:41.844427 containerd[1463]: time="2025-07-07T01:44:41.844385224Z" level=info msg="StopPodSandbox for \"185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e\"" Jul 7 01:44:42.217336 containerd[1463]: 2025-07-07 01:44:42.091 [INFO][3886] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" Jul 7 01:44:42.217336 containerd[1463]: 2025-07-07 01:44:42.091 [INFO][3886] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" iface="eth0" netns="/var/run/netns/cni-5037f52c-3dcd-cc64-8c8a-d595707278ce" Jul 7 01:44:42.217336 containerd[1463]: 2025-07-07 01:44:42.093 [INFO][3886] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" iface="eth0" netns="/var/run/netns/cni-5037f52c-3dcd-cc64-8c8a-d595707278ce" Jul 7 01:44:42.217336 containerd[1463]: 2025-07-07 01:44:42.096 [INFO][3886] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" iface="eth0" netns="/var/run/netns/cni-5037f52c-3dcd-cc64-8c8a-d595707278ce" Jul 7 01:44:42.217336 containerd[1463]: 2025-07-07 01:44:42.096 [INFO][3886] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" Jul 7 01:44:42.217336 containerd[1463]: 2025-07-07 01:44:42.096 [INFO][3886] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" Jul 7 01:44:42.217336 containerd[1463]: 2025-07-07 01:44:42.165 [INFO][3915] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" HandleID="k8s-pod-network.66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-goldmane--768f4c5c69--q2hdx-eth0" Jul 7 01:44:42.217336 containerd[1463]: 2025-07-07 01:44:42.168 [INFO][3915] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:44:42.217336 containerd[1463]: 2025-07-07 01:44:42.170 [INFO][3915] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:44:42.217336 containerd[1463]: 2025-07-07 01:44:42.187 [WARNING][3915] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" HandleID="k8s-pod-network.66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-goldmane--768f4c5c69--q2hdx-eth0" Jul 7 01:44:42.217336 containerd[1463]: 2025-07-07 01:44:42.191 [INFO][3915] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" HandleID="k8s-pod-network.66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-goldmane--768f4c5c69--q2hdx-eth0" Jul 7 01:44:42.217336 containerd[1463]: 2025-07-07 01:44:42.206 [INFO][3915] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:44:42.217336 containerd[1463]: 2025-07-07 01:44:42.213 [INFO][3886] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" Jul 7 01:44:42.219906 containerd[1463]: time="2025-07-07T01:44:42.219403977Z" level=info msg="TearDown network for sandbox \"66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d\" successfully" Jul 7 01:44:42.219906 containerd[1463]: time="2025-07-07T01:44:42.219447058Z" level=info msg="StopPodSandbox for \"66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d\" returns successfully" Jul 7 01:44:42.222115 containerd[1463]: time="2025-07-07T01:44:42.222051892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-q2hdx,Uid:b7b0113c-461b-4b97-957e-70b2d17f2275,Namespace:calico-system,Attempt:1,}" Jul 7 01:44:42.236351 containerd[1463]: 2025-07-07 01:44:42.082 [INFO][3888] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" Jul 7 01:44:42.236351 containerd[1463]: 2025-07-07 01:44:42.082 [INFO][3888] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" iface="eth0" netns="/var/run/netns/cni-59e9543e-cb76-46b8-06e2-4a063a0c522a" Jul 7 01:44:42.236351 containerd[1463]: 2025-07-07 01:44:42.083 [INFO][3888] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" iface="eth0" netns="/var/run/netns/cni-59e9543e-cb76-46b8-06e2-4a063a0c522a" Jul 7 01:44:42.236351 containerd[1463]: 2025-07-07 01:44:42.085 [INFO][3888] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" iface="eth0" netns="/var/run/netns/cni-59e9543e-cb76-46b8-06e2-4a063a0c522a" Jul 7 01:44:42.236351 containerd[1463]: 2025-07-07 01:44:42.085 [INFO][3888] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" Jul 7 01:44:42.236351 containerd[1463]: 2025-07-07 01:44:42.085 [INFO][3888] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" Jul 7 01:44:42.236351 containerd[1463]: 2025-07-07 01:44:42.171 [INFO][3911] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" HandleID="k8s-pod-network.185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-csi--node--driver--lcqqq-eth0" Jul 7 01:44:42.236351 containerd[1463]: 2025-07-07 01:44:42.172 [INFO][3911] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:44:42.236351 containerd[1463]: 2025-07-07 01:44:42.206 [INFO][3911] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:44:42.236351 containerd[1463]: 2025-07-07 01:44:42.226 [WARNING][3911] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" HandleID="k8s-pod-network.185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-csi--node--driver--lcqqq-eth0" Jul 7 01:44:42.236351 containerd[1463]: 2025-07-07 01:44:42.226 [INFO][3911] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" HandleID="k8s-pod-network.185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-csi--node--driver--lcqqq-eth0" Jul 7 01:44:42.236351 containerd[1463]: 2025-07-07 01:44:42.228 [INFO][3911] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:44:42.236351 containerd[1463]: 2025-07-07 01:44:42.231 [INFO][3888] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" Jul 7 01:44:42.239250 containerd[1463]: time="2025-07-07T01:44:42.237614352Z" level=info msg="TearDown network for sandbox \"185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e\" successfully" Jul 7 01:44:42.239250 containerd[1463]: time="2025-07-07T01:44:42.238361063Z" level=info msg="StopPodSandbox for \"185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e\" returns successfully" Jul 7 01:44:42.243383 containerd[1463]: time="2025-07-07T01:44:42.242686680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lcqqq,Uid:9914e783-0422-4ffd-98e7-e3799124405f,Namespace:calico-system,Attempt:1,}" Jul 7 01:44:42.247893 systemd[1]: run-netns-cni\x2d5037f52c\x2d3dcd\x2dcc64\x2d8c8a\x2dd595707278ce.mount: Deactivated successfully. Jul 7 01:44:42.250443 systemd[1]: run-netns-cni\x2d59e9543e\x2dcb76\x2d46b8\x2d06e2\x2d4a063a0c522a.mount: Deactivated successfully. Jul 7 01:44:42.267785 containerd[1463]: 2025-07-07 01:44:42.089 [INFO][3887] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" Jul 7 01:44:42.267785 containerd[1463]: 2025-07-07 01:44:42.090 [INFO][3887] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" iface="eth0" netns="/var/run/netns/cni-28f38128-28d0-1a5e-6893-40adebe73f14" Jul 7 01:44:42.267785 containerd[1463]: 2025-07-07 01:44:42.091 [INFO][3887] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" iface="eth0" netns="/var/run/netns/cni-28f38128-28d0-1a5e-6893-40adebe73f14" Jul 7 01:44:42.267785 containerd[1463]: 2025-07-07 01:44:42.093 [INFO][3887] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" iface="eth0" netns="/var/run/netns/cni-28f38128-28d0-1a5e-6893-40adebe73f14" Jul 7 01:44:42.267785 containerd[1463]: 2025-07-07 01:44:42.093 [INFO][3887] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" Jul 7 01:44:42.267785 containerd[1463]: 2025-07-07 01:44:42.093 [INFO][3887] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" Jul 7 01:44:42.267785 containerd[1463]: 2025-07-07 01:44:42.203 [INFO][3913] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" HandleID="k8s-pod-network.17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0" Jul 7 01:44:42.267785 containerd[1463]: 2025-07-07 01:44:42.205 [INFO][3913] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:44:42.267785 containerd[1463]: 2025-07-07 01:44:42.229 [INFO][3913] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:44:42.267785 containerd[1463]: 2025-07-07 01:44:42.258 [WARNING][3913] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" HandleID="k8s-pod-network.17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0" Jul 7 01:44:42.267785 containerd[1463]: 2025-07-07 01:44:42.258 [INFO][3913] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" HandleID="k8s-pod-network.17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0" Jul 7 01:44:42.267785 containerd[1463]: 2025-07-07 01:44:42.261 [INFO][3913] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:44:42.267785 containerd[1463]: 2025-07-07 01:44:42.263 [INFO][3887] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" Jul 7 01:44:42.270955 containerd[1463]: time="2025-07-07T01:44:42.268125299Z" level=info msg="TearDown network for sandbox \"17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1\" successfully" Jul 7 01:44:42.270955 containerd[1463]: time="2025-07-07T01:44:42.268188359Z" level=info msg="StopPodSandbox for \"17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1\" returns successfully" Jul 7 01:44:42.272011 systemd[1]: run-netns-cni\x2d28f38128\x2d28d0\x2d1a5e\x2d6893\x2d40adebe73f14.mount: Deactivated successfully. Jul 7 01:44:42.597517 containerd[1463]: time="2025-07-07T01:44:42.597465898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f84455d77-m9nc9,Uid:6e744049-be52-495d-b225-079659d54e9f,Namespace:calico-system,Attempt:1,}" Jul 7 01:44:42.641440 kubelet[2614]: I0707 01:44:42.640028 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-t72qh" podStartSLOduration=2.156284835 podStartE2EDuration="29.639974747s" podCreationTimestamp="2025-07-07 01:44:13 +0000 UTC" firstStartedPulling="2025-07-07 01:44:13.803543392 +0000 UTC m=+20.140534636" lastFinishedPulling="2025-07-07 01:44:41.287233294 +0000 UTC m=+47.624224548" observedRunningTime="2025-07-07 01:44:42.632603433 +0000 UTC m=+48.969594707" watchObservedRunningTime="2025-07-07 01:44:42.639974747 +0000 UTC m=+48.976965991" Jul 7 01:44:42.822023 containerd[1463]: time="2025-07-07T01:44:42.821957760Z" level=info msg="StopPodSandbox for \"9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16\"" Jul 7 01:44:43.074862 systemd-networkd[1378]: cali3190f25df87: Link UP Jul 7 01:44:43.075801 systemd-networkd[1378]: cali3190f25df87: Gained carrier Jul 7 01:44:43.130699 containerd[1463]: 2025-07-07 01:44:42.739 [INFO][3966] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 01:44:43.130699 containerd[1463]: 2025-07-07 01:44:42.790 [INFO][3966] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0 whisker-7f84455d77- calico-system 6e744049-be52-495d-b225-079659d54e9f 911 0 2025-07-07 01:44:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7f84455d77 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-4-7-c803550fde.novalocal whisker-7f84455d77-m9nc9 eth0 whisker [] [] [kns.calico-system 
ksa.calico-system.whisker] cali3190f25df87 [] [] }} ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" Namespace="calico-system" Pod="whisker-7f84455d77-m9nc9" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-" Jul 7 01:44:43.130699 containerd[1463]: 2025-07-07 01:44:42.791 [INFO][3966] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" Namespace="calico-system" Pod="whisker-7f84455d77-m9nc9" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0" Jul 7 01:44:43.130699 containerd[1463]: 2025-07-07 01:44:42.931 [INFO][3986] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" HandleID="k8s-pod-network.25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0" Jul 7 01:44:43.130699 containerd[1463]: 2025-07-07 01:44:42.931 [INFO][3986] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" HandleID="k8s-pod-network.25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039cd90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-7-c803550fde.novalocal", "pod":"whisker-7f84455d77-m9nc9", "timestamp":"2025-07-07 01:44:42.931419097 +0000 UTC"}, Hostname:"ci-4081-3-4-7-c803550fde.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 01:44:43.130699 containerd[1463]: 2025-07-07 01:44:42.931 [INFO][3986] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:44:43.130699 containerd[1463]: 2025-07-07 01:44:42.931 [INFO][3986] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 01:44:43.130699 containerd[1463]: 2025-07-07 01:44:42.931 [INFO][3986] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-7-c803550fde.novalocal' Jul 7 01:44:43.130699 containerd[1463]: 2025-07-07 01:44:42.951 [INFO][3986] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:43.130699 containerd[1463]: 2025-07-07 01:44:42.977 [INFO][3986] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:43.130699 containerd[1463]: 2025-07-07 01:44:42.989 [INFO][3986] ipam/ipam.go 511: Trying affinity for 192.168.2.0/26 host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:43.130699 containerd[1463]: 2025-07-07 01:44:42.998 [INFO][3986] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.0/26 host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:43.130699 containerd[1463]: 2025-07-07 01:44:43.004 [INFO][3986] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:43.130699 containerd[1463]: 2025-07-07 01:44:43.005 [INFO][3986] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:43.130699 containerd[1463]: 2025-07-07 01:44:43.009 [INFO][3986] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b Jul 7 01:44:43.130699 containerd[1463]: 2025-07-07 01:44:43.017 [INFO][3986] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:43.130699 containerd[1463]: 2025-07-07 01:44:43.034 [INFO][3986] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.1/26] block=192.168.2.0/26 handle="k8s-pod-network.25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:43.130699 containerd[1463]: 2025-07-07 01:44:43.034 [INFO][3986] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.1/26] handle="k8s-pod-network.25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:43.130699 containerd[1463]: 2025-07-07 01:44:43.034 [INFO][3986] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
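[Annotation] An aside on the pod_startup_latency_tracker entry above (01:44:42.640): kubelet's podStartSLOduration is the end-to-end startup time with image-pull time excluded, and the monotonic (m=+...) offsets in that entry reproduce the logged value exactly:

    image pull   = lastFinishedPulling - firstStartedPulling = 47.624224548 - 20.140534636 = 27.483689912 s
    end-to-end   = watchObservedRunningTime - podCreationTimestamp = 01:44:42.639974747 - 01:44:13 = 29.639974747 s
    SLO duration = 29.639974747 - 27.483689912 = 2.156284835 s

In other words, calico-node itself needed only about 2.16 s to come up; the remaining ~27.5 s of the 29.6 s wall time was the ghcr.io/flatcar/calico/node pull reported at 01:44:41.285.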
Jul 7 01:44:43.130699 containerd[1463]: 2025-07-07 01:44:43.034 [INFO][3986] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.1/26] IPv6=[] ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" HandleID="k8s-pod-network.25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0" Jul 7 01:44:43.132938 containerd[1463]: 2025-07-07 01:44:43.040 [INFO][3966] cni-plugin/k8s.go 418: Populated endpoint ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" Namespace="calico-system" Pod="whisker-7f84455d77-m9nc9" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0", GenerateName:"whisker-7f84455d77-", Namespace:"calico-system", SelfLink:"", UID:"6e744049-be52-495d-b225-079659d54e9f", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 44, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f84455d77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"", Pod:"whisker-7f84455d77-m9nc9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.2.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3190f25df87", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:44:43.132938 containerd[1463]: 2025-07-07 01:44:43.040 [INFO][3966] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.1/32] ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" Namespace="calico-system" Pod="whisker-7f84455d77-m9nc9" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0" Jul 7 01:44:43.132938 containerd[1463]: 2025-07-07 01:44:43.040 [INFO][3966] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3190f25df87 ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" Namespace="calico-system" Pod="whisker-7f84455d77-m9nc9" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0" Jul 7 01:44:43.132938 containerd[1463]: 2025-07-07 01:44:43.086 [INFO][3966] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" Namespace="calico-system" Pod="whisker-7f84455d77-m9nc9" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0" Jul 7 01:44:43.132938 containerd[1463]: 2025-07-07 01:44:43.088 [INFO][3966] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" Namespace="calico-system" Pod="whisker-7f84455d77-m9nc9" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0", GenerateName:"whisker-7f84455d77-", Namespace:"calico-system", SelfLink:"", UID:"6e744049-be52-495d-b225-079659d54e9f", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 44, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f84455d77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b", Pod:"whisker-7f84455d77-m9nc9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.2.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3190f25df87", MAC:"9a:a3:55:23:93:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:44:43.132938 containerd[1463]: 2025-07-07 01:44:43.119 [INFO][3966] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" Namespace="calico-system" Pod="whisker-7f84455d77-m9nc9" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0" Jul 7 01:44:43.175430 systemd-networkd[1378]: calib09dfc335fb: Link UP Jul 7 01:44:43.175836 systemd-networkd[1378]: calib09dfc335fb: Gained carrier Jul 7 01:44:43.199415 containerd[1463]: time="2025-07-07T01:44:43.198572430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:44:43.213593 containerd[1463]: time="2025-07-07T01:44:43.209415843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:44:43.213593 containerd[1463]: time="2025-07-07T01:44:43.209458865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:44:43.213593 containerd[1463]: time="2025-07-07T01:44:43.209622053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:44:43.222699 containerd[1463]: 2025-07-07 01:44:42.764 [INFO][3932] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 01:44:43.222699 containerd[1463]: 2025-07-07 01:44:42.809 [INFO][3932] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--7--c803550fde.novalocal-k8s-goldmane--768f4c5c69--q2hdx-eth0 goldmane-768f4c5c69- calico-system b7b0113c-461b-4b97-957e-70b2d17f2275 910 0 2025-07-07 01:44:12 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-4-7-c803550fde.novalocal goldmane-768f4c5c69-q2hdx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib09dfc335fb [] [] }} ContainerID="ac9c6000a283a0e217a4ae5008088c868f4c996888ec9e9bffb23fade716cba2" Namespace="calico-system" Pod="goldmane-768f4c5c69-q2hdx" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-goldmane--768f4c5c69--q2hdx-" Jul 7 01:44:43.222699 containerd[1463]: 2025-07-07 01:44:42.810 [INFO][3932] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ac9c6000a283a0e217a4ae5008088c868f4c996888ec9e9bffb23fade716cba2" Namespace="calico-system" Pod="goldmane-768f4c5c69-q2hdx" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-goldmane--768f4c5c69--q2hdx-eth0" Jul 7 01:44:43.222699 containerd[1463]: 2025-07-07 01:44:42.926 [INFO][3998] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac9c6000a283a0e217a4ae5008088c868f4c996888ec9e9bffb23fade716cba2" HandleID="k8s-pod-network.ac9c6000a283a0e217a4ae5008088c868f4c996888ec9e9bffb23fade716cba2" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-goldmane--768f4c5c69--q2hdx-eth0" Jul 7 01:44:43.222699 containerd[1463]: 2025-07-07 01:44:42.935 [INFO][3998] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ac9c6000a283a0e217a4ae5008088c868f4c996888ec9e9bffb23fade716cba2" HandleID="k8s-pod-network.ac9c6000a283a0e217a4ae5008088c868f4c996888ec9e9bffb23fade716cba2" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-goldmane--768f4c5c69--q2hdx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000271940), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-7-c803550fde.novalocal", "pod":"goldmane-768f4c5c69-q2hdx", "timestamp":"2025-07-07 01:44:42.92626324 +0000 UTC"}, Hostname:"ci-4081-3-4-7-c803550fde.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 01:44:43.222699 containerd[1463]: 2025-07-07 01:44:42.935 [INFO][3998] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:44:43.222699 containerd[1463]: 2025-07-07 01:44:43.034 [INFO][3998] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 01:44:43.222699 containerd[1463]: 2025-07-07 01:44:43.035 [INFO][3998] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-7-c803550fde.novalocal' Jul 7 01:44:43.222699 containerd[1463]: 2025-07-07 01:44:43.056 [INFO][3998] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ac9c6000a283a0e217a4ae5008088c868f4c996888ec9e9bffb23fade716cba2" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:43.222699 containerd[1463]: 2025-07-07 01:44:43.091 [INFO][3998] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:43.222699 containerd[1463]: 2025-07-07 01:44:43.121 [INFO][3998] ipam/ipam.go 511: Trying affinity for 192.168.2.0/26 host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:43.222699 containerd[1463]: 2025-07-07 01:44:43.135 [INFO][3998] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.0/26 host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:43.222699 containerd[1463]: 2025-07-07 01:44:43.139 [INFO][3998] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:43.222699 containerd[1463]: 2025-07-07 01:44:43.139 [INFO][3998] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.ac9c6000a283a0e217a4ae5008088c868f4c996888ec9e9bffb23fade716cba2" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:43.222699 containerd[1463]: 2025-07-07 01:44:43.142 [INFO][3998] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ac9c6000a283a0e217a4ae5008088c868f4c996888ec9e9bffb23fade716cba2 Jul 7 01:44:43.222699 containerd[1463]: 2025-07-07 01:44:43.152 [INFO][3998] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.ac9c6000a283a0e217a4ae5008088c868f4c996888ec9e9bffb23fade716cba2" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:43.222699 containerd[1463]: 2025-07-07 01:44:43.165 [INFO][3998] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.2/26] block=192.168.2.0/26 handle="k8s-pod-network.ac9c6000a283a0e217a4ae5008088c868f4c996888ec9e9bffb23fade716cba2" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:43.222699 containerd[1463]: 2025-07-07 01:44:43.166 [INFO][3998] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.2/26] handle="k8s-pod-network.ac9c6000a283a0e217a4ae5008088c868f4c996888ec9e9bffb23fade716cba2" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:43.222699 containerd[1463]: 2025-07-07 01:44:43.166 [INFO][3998] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
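[Annotation] The ipam.go / ipam_plugin.go traces above follow Calico's block-affinity scheme: under a single host-wide lock, the plugin tries the block this node has affinity for (192.168.2.0/26), loads it, claims the next free ordinal, and writes the block back with a handle tied to the pod's sandbox ID, so whisker gets 192.168.2.1 and goldmane 192.168.2.2. The lock also explains the serialization visible in the timestamps: the goldmane request ([3998]) asks for the lock at 01:44:42.935 but only acquires it at 01:44:43.034, the instant the whisker assignment ([3986]) releases it. A compressed Go sketch of the assignment step, with illustrative types (the real datastore-backed logic lives in Calico's ipam package):

// Compressed sketch of the block-affinity assignment shown above.
// Types and names are illustrative, not Calico's actual ipam package.
package main

import (
	"fmt"
	"sync"
)

// block models one /26 affine to this host (192.168.2.0/26 above).
type block struct {
	base      [4]byte        // network address, 192.168.2.0
	size      int            // /26 -> 64 ordinals
	allocated map[int]string // ordinal -> handle ("k8s-pod-network.<sandboxID>")
}

var (
	hostLock sync.Mutex // "About to acquire host-wide IPAM lock"
	blk      = &block{base: [4]byte{192, 168, 2, 0}, size: 64, allocated: map[int]string{}}
)

// autoAssign claims the next free address in the affine block, as in
// "Attempting to assign 1 addresses from block" / "Writing block in
// order to claim IPs" above.
func autoAssign(handle string) (string, error) {
	hostLock.Lock()
	defer hostLock.Unlock() // "Released host-wide IPAM lock"

	for ord := 1; ord < blk.size; ord++ { // ordinal 0 (the network address) stays reserved here
		if _, taken := blk.allocated[ord]; !taken {
			blk.allocated[ord] = handle
			return fmt.Sprintf("%d.%d.%d.%d/26", blk.base[0], blk.base[1], blk.base[2], int(blk.base[3])+ord), nil
		}
	}
	return "", fmt.Errorf("block 192.168.2.0/26 exhausted, would allocate a new block")
}

func main() {
	whisker, _ := autoAssign("k8s-pod-network.25d89002c9ba...")  // -> 192.168.2.1/26
	goldmane, _ := autoAssign("k8s-pod-network.ac9c6000a283...") // -> 192.168.2.2/26
	fmt.Println(whisker, goldmane)
}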
Jul 7 01:44:43.222699 containerd[1463]: 2025-07-07 01:44:43.166 [INFO][3998] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.2/26] IPv6=[] ContainerID="ac9c6000a283a0e217a4ae5008088c868f4c996888ec9e9bffb23fade716cba2" HandleID="k8s-pod-network.ac9c6000a283a0e217a4ae5008088c868f4c996888ec9e9bffb23fade716cba2" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-goldmane--768f4c5c69--q2hdx-eth0" Jul 7 01:44:43.223482 containerd[1463]: 2025-07-07 01:44:43.170 [INFO][3932] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ac9c6000a283a0e217a4ae5008088c868f4c996888ec9e9bffb23fade716cba2" Namespace="calico-system" Pod="goldmane-768f4c5c69-q2hdx" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-goldmane--768f4c5c69--q2hdx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-goldmane--768f4c5c69--q2hdx-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"b7b0113c-461b-4b97-957e-70b2d17f2275", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 44, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"", Pod:"goldmane-768f4c5c69-q2hdx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib09dfc335fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:44:43.223482 containerd[1463]: 2025-07-07 01:44:43.170 [INFO][3932] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.2/32] ContainerID="ac9c6000a283a0e217a4ae5008088c868f4c996888ec9e9bffb23fade716cba2" Namespace="calico-system" Pod="goldmane-768f4c5c69-q2hdx" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-goldmane--768f4c5c69--q2hdx-eth0" Jul 7 01:44:43.223482 containerd[1463]: 2025-07-07 01:44:43.170 [INFO][3932] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib09dfc335fb ContainerID="ac9c6000a283a0e217a4ae5008088c868f4c996888ec9e9bffb23fade716cba2" Namespace="calico-system" Pod="goldmane-768f4c5c69-q2hdx" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-goldmane--768f4c5c69--q2hdx-eth0" Jul 7 01:44:43.223482 containerd[1463]: 2025-07-07 01:44:43.176 [INFO][3932] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac9c6000a283a0e217a4ae5008088c868f4c996888ec9e9bffb23fade716cba2" Namespace="calico-system" Pod="goldmane-768f4c5c69-q2hdx" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-goldmane--768f4c5c69--q2hdx-eth0" Jul 7 01:44:43.223482 containerd[1463]: 2025-07-07 01:44:43.187 [INFO][3932] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ac9c6000a283a0e217a4ae5008088c868f4c996888ec9e9bffb23fade716cba2" Namespace="calico-system" Pod="goldmane-768f4c5c69-q2hdx" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-goldmane--768f4c5c69--q2hdx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-goldmane--768f4c5c69--q2hdx-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"b7b0113c-461b-4b97-957e-70b2d17f2275", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 44, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"ac9c6000a283a0e217a4ae5008088c868f4c996888ec9e9bffb23fade716cba2", Pod:"goldmane-768f4c5c69-q2hdx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib09dfc335fb", MAC:"de:dd:88:85:06:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:44:43.223482 containerd[1463]: 2025-07-07 01:44:43.213 [INFO][3932] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ac9c6000a283a0e217a4ae5008088c868f4c996888ec9e9bffb23fade716cba2" Namespace="calico-system" Pod="goldmane-768f4c5c69-q2hdx" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-goldmane--768f4c5c69--q2hdx-eth0" Jul 7 01:44:43.322514 systemd-networkd[1378]: cali4e6968ca8aa: Link UP Jul 7 01:44:43.343149 systemd[1]: Started cri-containerd-25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b.scope - libcontainer container 25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b. Jul 7 01:44:43.346544 systemd-networkd[1378]: cali4e6968ca8aa: Gained carrier Jul 7 01:44:43.386097 containerd[1463]: time="2025-07-07T01:44:43.385127453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:44:43.386097 containerd[1463]: time="2025-07-07T01:44:43.385221690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:44:43.386097 containerd[1463]: time="2025-07-07T01:44:43.385239023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:44:43.389866 containerd[1463]: 2025-07-07 01:44:42.996 [INFO][4010] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" Jul 7 01:44:43.389866 containerd[1463]: 2025-07-07 01:44:42.997 [INFO][4010] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" iface="eth0" netns="/var/run/netns/cni-8d79a707-c844-b623-576c-3f9ba10bf876" Jul 7 01:44:43.389866 containerd[1463]: 2025-07-07 01:44:42.997 [INFO][4010] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" iface="eth0" netns="/var/run/netns/cni-8d79a707-c844-b623-576c-3f9ba10bf876" Jul 7 01:44:43.389866 containerd[1463]: 2025-07-07 01:44:42.998 [INFO][4010] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" iface="eth0" netns="/var/run/netns/cni-8d79a707-c844-b623-576c-3f9ba10bf876" Jul 7 01:44:43.389866 containerd[1463]: 2025-07-07 01:44:42.998 [INFO][4010] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" Jul 7 01:44:43.389866 containerd[1463]: 2025-07-07 01:44:42.998 [INFO][4010] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" Jul 7 01:44:43.389866 containerd[1463]: 2025-07-07 01:44:43.127 [INFO][4033] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" HandleID="k8s-pod-network.9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--jljn9-eth0" Jul 7 01:44:43.389866 containerd[1463]: 2025-07-07 01:44:43.127 [INFO][4033] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:44:43.389866 containerd[1463]: 2025-07-07 01:44:43.284 [INFO][4033] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:44:43.389866 containerd[1463]: 2025-07-07 01:44:43.352 [WARNING][4033] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" HandleID="k8s-pod-network.9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--jljn9-eth0" Jul 7 01:44:43.389866 containerd[1463]: 2025-07-07 01:44:43.353 [INFO][4033] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" HandleID="k8s-pod-network.9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--jljn9-eth0" Jul 7 01:44:43.389866 containerd[1463]: 2025-07-07 01:44:43.364 [INFO][4033] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:44:43.389866 containerd[1463]: 2025-07-07 01:44:43.371 [INFO][4010] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" Jul 7 01:44:43.392143 containerd[1463]: time="2025-07-07T01:44:43.390521206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:44:43.392675 containerd[1463]: time="2025-07-07T01:44:43.392472916Z" level=info msg="TearDown network for sandbox \"9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16\" successfully" Jul 7 01:44:43.392675 containerd[1463]: time="2025-07-07T01:44:43.392522138Z" level=info msg="StopPodSandbox for \"9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16\" returns successfully" Jul 7 01:44:43.395707 systemd[1]: run-netns-cni\x2d8d79a707\x2dc844\x2db623\x2d576c\x2d3f9ba10bf876.mount: Deactivated successfully. Jul 7 01:44:43.397391 containerd[1463]: time="2025-07-07T01:44:43.397136088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc4bb86bc-jljn9,Uid:687776ff-b90b-4baa-af46-4023f495fb97,Namespace:calico-apiserver,Attempt:1,}" Jul 7 01:44:43.398862 containerd[1463]: 2025-07-07 01:44:42.765 [INFO][3938] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 01:44:43.398862 containerd[1463]: 2025-07-07 01:44:42.806 [INFO][3938] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--7--c803550fde.novalocal-k8s-csi--node--driver--lcqqq-eth0 csi-node-driver- calico-system 9914e783-0422-4ffd-98e7-e3799124405f 909 0 2025-07-07 01:44:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-4-7-c803550fde.novalocal csi-node-driver-lcqqq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4e6968ca8aa [] [] }} ContainerID="d81d11f64fdc6b2a208deff58b31d0596254ebca5d88d8d4296475eed2608cb7" Namespace="calico-system" Pod="csi-node-driver-lcqqq" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-csi--node--driver--lcqqq-" Jul 7 01:44:43.398862 containerd[1463]: 2025-07-07 01:44:42.806 [INFO][3938] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d81d11f64fdc6b2a208deff58b31d0596254ebca5d88d8d4296475eed2608cb7" Namespace="calico-system" Pod="csi-node-driver-lcqqq" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-csi--node--driver--lcqqq-eth0" Jul 7 01:44:43.398862 containerd[1463]: 2025-07-07 01:44:42.963 [INFO][3997] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d81d11f64fdc6b2a208deff58b31d0596254ebca5d88d8d4296475eed2608cb7" HandleID="k8s-pod-network.d81d11f64fdc6b2a208deff58b31d0596254ebca5d88d8d4296475eed2608cb7" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-csi--node--driver--lcqqq-eth0" Jul 7 01:44:43.398862 containerd[1463]: 2025-07-07 01:44:42.966 [INFO][3997] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d81d11f64fdc6b2a208deff58b31d0596254ebca5d88d8d4296475eed2608cb7" HandleID="k8s-pod-network.d81d11f64fdc6b2a208deff58b31d0596254ebca5d88d8d4296475eed2608cb7" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-csi--node--driver--lcqqq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000374120), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-7-c803550fde.novalocal", "pod":"csi-node-driver-lcqqq", "timestamp":"2025-07-07 01:44:42.958358012 +0000 UTC"}, Hostname:"ci-4081-3-4-7-c803550fde.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 01:44:43.398862 containerd[1463]: 2025-07-07 01:44:42.966 [INFO][3997] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:44:43.398862 containerd[1463]: 2025-07-07 01:44:43.166 [INFO][3997] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:44:43.398862 containerd[1463]: 2025-07-07 01:44:43.166 [INFO][3997] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-7-c803550fde.novalocal' Jul 7 01:44:43.398862 containerd[1463]: 2025-07-07 01:44:43.190 [INFO][3997] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d81d11f64fdc6b2a208deff58b31d0596254ebca5d88d8d4296475eed2608cb7" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:43.398862 containerd[1463]: 2025-07-07 01:44:43.221 [INFO][3997] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:43.398862 containerd[1463]: 2025-07-07 01:44:43.236 [INFO][3997] ipam/ipam.go 511: Trying affinity for 192.168.2.0/26 host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:43.398862 containerd[1463]: 2025-07-07 01:44:43.247 [INFO][3997] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.0/26 host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:43.398862 containerd[1463]: 2025-07-07 01:44:43.258 [INFO][3997] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:43.398862 containerd[1463]: 2025-07-07 01:44:43.258 [INFO][3997] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.d81d11f64fdc6b2a208deff58b31d0596254ebca5d88d8d4296475eed2608cb7" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:43.398862 containerd[1463]: 2025-07-07 01:44:43.261 [INFO][3997] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d81d11f64fdc6b2a208deff58b31d0596254ebca5d88d8d4296475eed2608cb7 Jul 7 01:44:43.398862 containerd[1463]: 2025-07-07 01:44:43.273 [INFO][3997] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.d81d11f64fdc6b2a208deff58b31d0596254ebca5d88d8d4296475eed2608cb7" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:43.398862 containerd[1463]: 2025-07-07 01:44:43.284 [INFO][3997] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.3/26] block=192.168.2.0/26 handle="k8s-pod-network.d81d11f64fdc6b2a208deff58b31d0596254ebca5d88d8d4296475eed2608cb7" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:43.398862 containerd[1463]: 2025-07-07 01:44:43.284 [INFO][3997] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.3/26] handle="k8s-pod-network.d81d11f64fdc6b2a208deff58b31d0596254ebca5d88d8d4296475eed2608cb7" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:43.398862 containerd[1463]: 2025-07-07 01:44:43.284 [INFO][3997] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 01:44:43.398862 containerd[1463]: 2025-07-07 01:44:43.284 [INFO][3997] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.3/26] IPv6=[] ContainerID="d81d11f64fdc6b2a208deff58b31d0596254ebca5d88d8d4296475eed2608cb7" HandleID="k8s-pod-network.d81d11f64fdc6b2a208deff58b31d0596254ebca5d88d8d4296475eed2608cb7" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-csi--node--driver--lcqqq-eth0" Jul 7 01:44:43.402635 containerd[1463]: 2025-07-07 01:44:43.287 [INFO][3938] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d81d11f64fdc6b2a208deff58b31d0596254ebca5d88d8d4296475eed2608cb7" Namespace="calico-system" Pod="csi-node-driver-lcqqq" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-csi--node--driver--lcqqq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-csi--node--driver--lcqqq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9914e783-0422-4ffd-98e7-e3799124405f", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 44, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"", Pod:"csi-node-driver-lcqqq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4e6968ca8aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:44:43.402635 containerd[1463]: 2025-07-07 01:44:43.288 [INFO][3938] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.3/32] ContainerID="d81d11f64fdc6b2a208deff58b31d0596254ebca5d88d8d4296475eed2608cb7" Namespace="calico-system" Pod="csi-node-driver-lcqqq" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-csi--node--driver--lcqqq-eth0" Jul 7 01:44:43.402635 containerd[1463]: 2025-07-07 01:44:43.288 [INFO][3938] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e6968ca8aa ContainerID="d81d11f64fdc6b2a208deff58b31d0596254ebca5d88d8d4296475eed2608cb7" Namespace="calico-system" Pod="csi-node-driver-lcqqq" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-csi--node--driver--lcqqq-eth0" Jul 7 01:44:43.402635 containerd[1463]: 2025-07-07 01:44:43.345 [INFO][3938] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d81d11f64fdc6b2a208deff58b31d0596254ebca5d88d8d4296475eed2608cb7" Namespace="calico-system" Pod="csi-node-driver-lcqqq" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-csi--node--driver--lcqqq-eth0" Jul 7 01:44:43.402635 containerd[1463]: 2025-07-07 01:44:43.351 [INFO][3938] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="d81d11f64fdc6b2a208deff58b31d0596254ebca5d88d8d4296475eed2608cb7" Namespace="calico-system" Pod="csi-node-driver-lcqqq" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-csi--node--driver--lcqqq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-csi--node--driver--lcqqq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9914e783-0422-4ffd-98e7-e3799124405f", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 44, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"d81d11f64fdc6b2a208deff58b31d0596254ebca5d88d8d4296475eed2608cb7", Pod:"csi-node-driver-lcqqq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4e6968ca8aa", MAC:"5e:7a:d0:68:25:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:44:43.402635 containerd[1463]: 2025-07-07 01:44:43.374 [INFO][3938] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d81d11f64fdc6b2a208deff58b31d0596254ebca5d88d8d4296475eed2608cb7" Namespace="calico-system" Pod="csi-node-driver-lcqqq" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-csi--node--driver--lcqqq-eth0" Jul 7 01:44:43.484115 systemd[1]: Started cri-containerd-ac9c6000a283a0e217a4ae5008088c868f4c996888ec9e9bffb23fade716cba2.scope - libcontainer container ac9c6000a283a0e217a4ae5008088c868f4c996888ec9e9bffb23fade716cba2. Jul 7 01:44:43.508837 containerd[1463]: time="2025-07-07T01:44:43.508058815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:44:43.508837 containerd[1463]: time="2025-07-07T01:44:43.508170286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:44:43.508837 containerd[1463]: time="2025-07-07T01:44:43.508190915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:44:43.514303 containerd[1463]: time="2025-07-07T01:44:43.514007829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:44:43.598682 systemd[1]: Started cri-containerd-d81d11f64fdc6b2a208deff58b31d0596254ebca5d88d8d4296475eed2608cb7.scope - libcontainer container d81d11f64fdc6b2a208deff58b31d0596254ebca5d88d8d4296475eed2608cb7. 
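[Editor's note] The ipam/ipam.go entries above (lock, affinity lookup, block load, claim, handle write) trace Calico's address auto-assignment for csi-node-driver-lcqqq. The following is a minimal Go sketch of that sequence against a toy in-memory store; every type and name here is hypothetical and stands in for Calico's real datastore-backed IPAM, it is not the plugin's actual code.

package main

import (
	"fmt"
	"net"
	"sync"
)

type block struct {
	cidr      *net.IPNet
	allocated map[string]string // IP -> handle that claimed it
}

type ipamState struct {
	mu       sync.Mutex        // stands in for the "host-wide IPAM lock"
	affinity map[string]string // block CIDR -> host holding the affinity
	blocks   map[string]*block
}

// autoAssign mirrors the logged order: acquire lock, confirm affinity,
// load the block, scan it for a free address, claim it under handleID.
func (s *ipamState) autoAssign(host, handleID string) (net.IP, error) {
	s.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer s.mu.Unlock() // "Released host-wide IPAM lock."

	for cidr, affineHost := range s.affinity {
		if affineHost != host {
			continue // "Trying affinity for 192.168.2.0/26"
		}
		b := s.blocks[cidr] // "Attempting to load block cidr=192.168.2.0/26"
		base := b.cidr.IP.To4()
		for i := 0; i < 64; i++ { // a /26 spans 64 addresses
			cand := net.IPv4(base[0], base[1], base[2], base[3]+byte(i)).To4()
			if _, taken := b.allocated[cand.String()]; !taken {
				b.allocated[cand.String()] = handleID // "Successfully claimed IPs"
				return cand, nil
			}
		}
	}
	return nil, fmt.Errorf("no affine block with free addresses for %s", host)
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.2.0/26")
	s := &ipamState{
		affinity: map[string]string{"192.168.2.0/26": "ci-4081-3-4-7-c803550fde.novalocal"},
		blocks: map[string]*block{"192.168.2.0/26": {
			cidr: cidr,
			allocated: map[string]string{
				"192.168.2.0": "reserved", "192.168.2.1": "reserved",
				"192.168.2.2": "k8s-pod-network.ac9c6000", // goldmane, claimed above
			},
		}},
	}
	ip, err := s.autoAssign("ci-4081-3-4-7-c803550fde.novalocal", "k8s-pod-network.d81d11f6")
	fmt.Println(ip, err) // 192.168.2.3 <nil> — matching the csi-node-driver claim
}

The log then shows exactly this result being written back as a WorkloadEndpoint with IPNetworks ["192.168.2.3/32"] and the host-side veth cali4e6968ca8aa.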
Jul 7 01:44:43.636166 containerd[1463]: time="2025-07-07T01:44:43.636093724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f84455d77-m9nc9,Uid:6e744049-be52-495d-b225-079659d54e9f,Namespace:calico-system,Attempt:1,} returns sandbox id \"25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b\"" Jul 7 01:44:43.646904 containerd[1463]: time="2025-07-07T01:44:43.646803476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 01:44:43.769608 containerd[1463]: time="2025-07-07T01:44:43.769554427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-q2hdx,Uid:b7b0113c-461b-4b97-957e-70b2d17f2275,Namespace:calico-system,Attempt:1,} returns sandbox id \"ac9c6000a283a0e217a4ae5008088c868f4c996888ec9e9bffb23fade716cba2\"" Jul 7 01:44:43.808914 containerd[1463]: time="2025-07-07T01:44:43.808859114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lcqqq,Uid:9914e783-0422-4ffd-98e7-e3799124405f,Namespace:calico-system,Attempt:1,} returns sandbox id \"d81d11f64fdc6b2a208deff58b31d0596254ebca5d88d8d4296475eed2608cb7\"" Jul 7 01:44:43.826238 containerd[1463]: time="2025-07-07T01:44:43.825707888Z" level=info msg="StopPodSandbox for \"81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290\"" Jul 7 01:44:43.826238 containerd[1463]: time="2025-07-07T01:44:43.825719420Z" level=info msg="StopPodSandbox for \"b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460\"" Jul 7 01:44:43.983605 systemd-networkd[1378]: calied971370f30: Link UP Jul 7 01:44:43.985459 systemd-networkd[1378]: calied971370f30: Gained carrier Jul 7 01:44:44.019672 containerd[1463]: 2025-07-07 01:44:43.589 [INFO][4186] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 01:44:44.019672 containerd[1463]: 2025-07-07 01:44:43.618 [INFO][4186] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--jljn9-eth0 calico-apiserver-6fc4bb86bc- calico-apiserver 687776ff-b90b-4baa-af46-4023f495fb97 924 0 2025-07-07 01:44:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fc4bb86bc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-4-7-c803550fde.novalocal calico-apiserver-6fc4bb86bc-jljn9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calied971370f30 [] [] }} ContainerID="284cacccb3cb76dba8b10fa3eeaf04690ab8eda6a90716579a3ce9b8f55fdc90" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4bb86bc-jljn9" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--jljn9-" Jul 7 01:44:44.019672 containerd[1463]: 2025-07-07 01:44:43.622 [INFO][4186] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="284cacccb3cb76dba8b10fa3eeaf04690ab8eda6a90716579a3ce9b8f55fdc90" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4bb86bc-jljn9" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--jljn9-eth0" Jul 7 01:44:44.019672 containerd[1463]: 2025-07-07 01:44:43.819 [INFO][4254] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="284cacccb3cb76dba8b10fa3eeaf04690ab8eda6a90716579a3ce9b8f55fdc90" 
HandleID="k8s-pod-network.284cacccb3cb76dba8b10fa3eeaf04690ab8eda6a90716579a3ce9b8f55fdc90" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--jljn9-eth0" Jul 7 01:44:44.019672 containerd[1463]: 2025-07-07 01:44:43.822 [INFO][4254] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="284cacccb3cb76dba8b10fa3eeaf04690ab8eda6a90716579a3ce9b8f55fdc90" HandleID="k8s-pod-network.284cacccb3cb76dba8b10fa3eeaf04690ab8eda6a90716579a3ce9b8f55fdc90" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--jljn9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e770), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-4-7-c803550fde.novalocal", "pod":"calico-apiserver-6fc4bb86bc-jljn9", "timestamp":"2025-07-07 01:44:43.816163759 +0000 UTC"}, Hostname:"ci-4081-3-4-7-c803550fde.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 01:44:44.019672 containerd[1463]: 2025-07-07 01:44:43.822 [INFO][4254] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:44:44.019672 containerd[1463]: 2025-07-07 01:44:43.822 [INFO][4254] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:44:44.019672 containerd[1463]: 2025-07-07 01:44:43.822 [INFO][4254] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-7-c803550fde.novalocal' Jul 7 01:44:44.019672 containerd[1463]: 2025-07-07 01:44:43.851 [INFO][4254] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.284cacccb3cb76dba8b10fa3eeaf04690ab8eda6a90716579a3ce9b8f55fdc90" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:44.019672 containerd[1463]: 2025-07-07 01:44:43.887 [INFO][4254] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:44.019672 containerd[1463]: 2025-07-07 01:44:43.907 [INFO][4254] ipam/ipam.go 511: Trying affinity for 192.168.2.0/26 host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:44.019672 containerd[1463]: 2025-07-07 01:44:43.911 [INFO][4254] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.0/26 host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:44.019672 containerd[1463]: 2025-07-07 01:44:43.922 [INFO][4254] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:44.019672 containerd[1463]: 2025-07-07 01:44:43.923 [INFO][4254] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.284cacccb3cb76dba8b10fa3eeaf04690ab8eda6a90716579a3ce9b8f55fdc90" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:44.019672 containerd[1463]: 2025-07-07 01:44:43.927 [INFO][4254] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.284cacccb3cb76dba8b10fa3eeaf04690ab8eda6a90716579a3ce9b8f55fdc90 Jul 7 01:44:44.019672 containerd[1463]: 2025-07-07 01:44:43.944 [INFO][4254] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.284cacccb3cb76dba8b10fa3eeaf04690ab8eda6a90716579a3ce9b8f55fdc90" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:44.019672 containerd[1463]: 2025-07-07 01:44:43.961 [INFO][4254] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.4/26] block=192.168.2.0/26 
handle="k8s-pod-network.284cacccb3cb76dba8b10fa3eeaf04690ab8eda6a90716579a3ce9b8f55fdc90" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:44.019672 containerd[1463]: 2025-07-07 01:44:43.962 [INFO][4254] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.4/26] handle="k8s-pod-network.284cacccb3cb76dba8b10fa3eeaf04690ab8eda6a90716579a3ce9b8f55fdc90" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:44.019672 containerd[1463]: 2025-07-07 01:44:43.962 [INFO][4254] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:44:44.019672 containerd[1463]: 2025-07-07 01:44:43.963 [INFO][4254] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.4/26] IPv6=[] ContainerID="284cacccb3cb76dba8b10fa3eeaf04690ab8eda6a90716579a3ce9b8f55fdc90" HandleID="k8s-pod-network.284cacccb3cb76dba8b10fa3eeaf04690ab8eda6a90716579a3ce9b8f55fdc90" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--jljn9-eth0" Jul 7 01:44:44.026927 containerd[1463]: 2025-07-07 01:44:43.975 [INFO][4186] cni-plugin/k8s.go 418: Populated endpoint ContainerID="284cacccb3cb76dba8b10fa3eeaf04690ab8eda6a90716579a3ce9b8f55fdc90" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4bb86bc-jljn9" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--jljn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--jljn9-eth0", GenerateName:"calico-apiserver-6fc4bb86bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"687776ff-b90b-4baa-af46-4023f495fb97", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 44, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fc4bb86bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"", Pod:"calico-apiserver-6fc4bb86bc-jljn9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calied971370f30", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:44:44.026927 containerd[1463]: 2025-07-07 01:44:43.977 [INFO][4186] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.4/32] ContainerID="284cacccb3cb76dba8b10fa3eeaf04690ab8eda6a90716579a3ce9b8f55fdc90" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4bb86bc-jljn9" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--jljn9-eth0" Jul 7 01:44:44.026927 containerd[1463]: 2025-07-07 01:44:43.977 [INFO][4186] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied971370f30 ContainerID="284cacccb3cb76dba8b10fa3eeaf04690ab8eda6a90716579a3ce9b8f55fdc90" Namespace="calico-apiserver" 
Pod="calico-apiserver-6fc4bb86bc-jljn9" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--jljn9-eth0" Jul 7 01:44:44.026927 containerd[1463]: 2025-07-07 01:44:43.986 [INFO][4186] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="284cacccb3cb76dba8b10fa3eeaf04690ab8eda6a90716579a3ce9b8f55fdc90" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4bb86bc-jljn9" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--jljn9-eth0" Jul 7 01:44:44.026927 containerd[1463]: 2025-07-07 01:44:43.987 [INFO][4186] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="284cacccb3cb76dba8b10fa3eeaf04690ab8eda6a90716579a3ce9b8f55fdc90" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4bb86bc-jljn9" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--jljn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--jljn9-eth0", GenerateName:"calico-apiserver-6fc4bb86bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"687776ff-b90b-4baa-af46-4023f495fb97", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 44, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fc4bb86bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"284cacccb3cb76dba8b10fa3eeaf04690ab8eda6a90716579a3ce9b8f55fdc90", Pod:"calico-apiserver-6fc4bb86bc-jljn9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calied971370f30", MAC:"ca:e7:10:c9:56:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:44:44.026927 containerd[1463]: 2025-07-07 01:44:44.007 [INFO][4186] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="284cacccb3cb76dba8b10fa3eeaf04690ab8eda6a90716579a3ce9b8f55fdc90" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4bb86bc-jljn9" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--jljn9-eth0" Jul 7 01:44:44.176790 containerd[1463]: time="2025-07-07T01:44:44.169806999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:44:44.176790 containerd[1463]: time="2025-07-07T01:44:44.169900586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:44:44.176790 containerd[1463]: time="2025-07-07T01:44:44.169924230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:44:44.176790 containerd[1463]: time="2025-07-07T01:44:44.170029208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:44:44.301780 systemd-networkd[1378]: calib09dfc335fb: Gained IPv6LL Jul 7 01:44:44.325493 containerd[1463]: 2025-07-07 01:44:44.098 [INFO][4321] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" Jul 7 01:44:44.325493 containerd[1463]: 2025-07-07 01:44:44.098 [INFO][4321] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" iface="eth0" netns="/var/run/netns/cni-5da72778-5d60-69cd-1e4e-a76f25d33be1" Jul 7 01:44:44.325493 containerd[1463]: 2025-07-07 01:44:44.101 [INFO][4321] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" iface="eth0" netns="/var/run/netns/cni-5da72778-5d60-69cd-1e4e-a76f25d33be1" Jul 7 01:44:44.325493 containerd[1463]: 2025-07-07 01:44:44.108 [INFO][4321] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" iface="eth0" netns="/var/run/netns/cni-5da72778-5d60-69cd-1e4e-a76f25d33be1" Jul 7 01:44:44.325493 containerd[1463]: 2025-07-07 01:44:44.108 [INFO][4321] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" Jul 7 01:44:44.325493 containerd[1463]: 2025-07-07 01:44:44.108 [INFO][4321] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" Jul 7 01:44:44.325493 containerd[1463]: 2025-07-07 01:44:44.266 [INFO][4354] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" HandleID="k8s-pod-network.b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--ns8xt-eth0" Jul 7 01:44:44.325493 containerd[1463]: 2025-07-07 01:44:44.272 [INFO][4354] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:44:44.325493 containerd[1463]: 2025-07-07 01:44:44.272 [INFO][4354] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:44:44.325493 containerd[1463]: 2025-07-07 01:44:44.304 [WARNING][4354] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" HandleID="k8s-pod-network.b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--ns8xt-eth0" Jul 7 01:44:44.325493 containerd[1463]: 2025-07-07 01:44:44.305 [INFO][4354] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" HandleID="k8s-pod-network.b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--ns8xt-eth0" Jul 7 01:44:44.325493 containerd[1463]: 2025-07-07 01:44:44.311 [INFO][4354] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 01:44:44.325493 containerd[1463]: 2025-07-07 01:44:44.321 [INFO][4321] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" Jul 7 01:44:44.330322 containerd[1463]: time="2025-07-07T01:44:44.327346191Z" level=info msg="TearDown network for sandbox \"b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460\" successfully" Jul 7 01:44:44.330322 containerd[1463]: time="2025-07-07T01:44:44.327388491Z" level=info msg="StopPodSandbox for \"b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460\" returns successfully" Jul 7 01:44:44.331241 containerd[1463]: time="2025-07-07T01:44:44.330705690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ns8xt,Uid:67505d5f-ec1f-4b25-9868-2da79cc2efec,Namespace:kube-system,Attempt:1,}" Jul 7 01:44:44.332645 systemd[1]: run-netns-cni\x2d5da72778\x2d5d60\x2d69cd\x2d1e4e\x2da76f25d33be1.mount: Deactivated successfully. Jul 7 01:44:44.341636 systemd[1]: Started cri-containerd-284cacccb3cb76dba8b10fa3eeaf04690ab8eda6a90716579a3ce9b8f55fdc90.scope - libcontainer container 284cacccb3cb76dba8b10fa3eeaf04690ab8eda6a90716579a3ce9b8f55fdc90. Jul 7 01:44:44.356513 containerd[1463]: 2025-07-07 01:44:44.105 [INFO][4322] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" Jul 7 01:44:44.356513 containerd[1463]: 2025-07-07 01:44:44.106 [INFO][4322] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" iface="eth0" netns="/var/run/netns/cni-63f0f191-d1f5-0613-9398-c057c63b5b3a" Jul 7 01:44:44.356513 containerd[1463]: 2025-07-07 01:44:44.107 [INFO][4322] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" iface="eth0" netns="/var/run/netns/cni-63f0f191-d1f5-0613-9398-c057c63b5b3a" Jul 7 01:44:44.356513 containerd[1463]: 2025-07-07 01:44:44.107 [INFO][4322] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" iface="eth0" netns="/var/run/netns/cni-63f0f191-d1f5-0613-9398-c057c63b5b3a" Jul 7 01:44:44.356513 containerd[1463]: 2025-07-07 01:44:44.107 [INFO][4322] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" Jul 7 01:44:44.356513 containerd[1463]: 2025-07-07 01:44:44.108 [INFO][4322] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" Jul 7 01:44:44.356513 containerd[1463]: 2025-07-07 01:44:44.316 [INFO][4355] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" HandleID="k8s-pod-network.81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--5q8k9-eth0" Jul 7 01:44:44.356513 containerd[1463]: 2025-07-07 01:44:44.323 [INFO][4355] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:44:44.356513 containerd[1463]: 2025-07-07 01:44:44.323 [INFO][4355] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 01:44:44.356513 containerd[1463]: 2025-07-07 01:44:44.343 [WARNING][4355] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" HandleID="k8s-pod-network.81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--5q8k9-eth0" Jul 7 01:44:44.356513 containerd[1463]: 2025-07-07 01:44:44.343 [INFO][4355] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" HandleID="k8s-pod-network.81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--5q8k9-eth0" Jul 7 01:44:44.356513 containerd[1463]: 2025-07-07 01:44:44.347 [INFO][4355] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:44:44.356513 containerd[1463]: 2025-07-07 01:44:44.353 [INFO][4322] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" Jul 7 01:44:44.358609 containerd[1463]: time="2025-07-07T01:44:44.357817371Z" level=info msg="TearDown network for sandbox \"81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290\" successfully" Jul 7 01:44:44.358609 containerd[1463]: time="2025-07-07T01:44:44.357872075Z" level=info msg="StopPodSandbox for \"81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290\" returns successfully" Jul 7 01:44:44.361136 containerd[1463]: time="2025-07-07T01:44:44.360773938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc4bb86bc-5q8k9,Uid:32440117-c574-4788-812a-5d3b5496a9ed,Namespace:calico-apiserver,Attempt:1,}" Jul 7 01:44:44.361962 systemd[1]: run-netns-cni\x2d63f0f191\x2dd1f5\x2d0613\x2d9398\x2dc057c63b5b3a.mount: Deactivated successfully. 
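[Editor's note] The two WARNING entries above ("Asked to release address but it doesn't exist. Ignoring") are the normal shape of the Calico DEL path when a sandbox is deleted and re-created at Attempt:1: release by handle ID, tolerate a missing allocation, then retry keyed by workload ID. A rough sketch of that tolerance, with invented names rather than the plugin's real types, is:

package main

import (
	"errors"
	"fmt"
)

var errNotExist = errors.New("address doesn't exist")

// store maps an allocation key (handle ID or workload ID) to the IP it holds.
type store map[string]string

func (s store) release(key string) error {
	if _, ok := s[key]; !ok {
		return errNotExist
	}
	delete(s, key)
	return nil
}

// teardown follows the logged order: try the handle ID first; a missing
// allocation is only a WARNING, after which release is retried by workload ID.
func teardown(s store, handleID, workloadID string) {
	if err := s.release(handleID); errors.Is(err, errNotExist) {
		fmt.Println("WARNING: asked to release address but it doesn't exist; ignoring")
		s.release(workloadID)
	}
	fmt.Println("teardown processing complete")
}

func main() {
	// An empty store mirrors the log: the first sandbox attempt never held
	// a recorded address, so there is nothing under either key.
	teardown(store{}, "k8s-pod-network.81f17b5d", "calico-apiserver-6fc4bb86bc-5q8k9")
}

Once the veth and IPAM state are gone, the matching run-netns-cni\x2d… mount deactivations show systemd reclaiming the sandbox's network-namespace bind mount.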
Jul 7 01:44:44.475951 containerd[1463]: time="2025-07-07T01:44:44.475848819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc4bb86bc-jljn9,Uid:687776ff-b90b-4baa-af46-4023f495fb97,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"284cacccb3cb76dba8b10fa3eeaf04690ab8eda6a90716579a3ce9b8f55fdc90\"" Jul 7 01:44:44.790244 systemd-networkd[1378]: cali73b04fe4245: Link UP Jul 7 01:44:44.794928 systemd-networkd[1378]: cali73b04fe4245: Gained carrier Jul 7 01:44:44.828311 containerd[1463]: time="2025-07-07T01:44:44.821445883Z" level=info msg="StopPodSandbox for \"1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb\"" Jul 7 01:44:44.835173 containerd[1463]: 2025-07-07 01:44:44.449 [INFO][4401] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 01:44:44.835173 containerd[1463]: 2025-07-07 01:44:44.481 [INFO][4401] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--ns8xt-eth0 coredns-668d6bf9bc- kube-system 67505d5f-ec1f-4b25-9868-2da79cc2efec 944 0 2025-07-07 01:43:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-4-7-c803550fde.novalocal coredns-668d6bf9bc-ns8xt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali73b04fe4245 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="14b23fcdcf11dd015b4f0cd78e8f9fee5ef8df69eac1fae126965d2f05199f73" Namespace="kube-system" Pod="coredns-668d6bf9bc-ns8xt" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--ns8xt-" Jul 7 01:44:44.835173 containerd[1463]: 2025-07-07 01:44:44.481 [INFO][4401] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="14b23fcdcf11dd015b4f0cd78e8f9fee5ef8df69eac1fae126965d2f05199f73" Namespace="kube-system" Pod="coredns-668d6bf9bc-ns8xt" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--ns8xt-eth0" Jul 7 01:44:44.835173 containerd[1463]: 2025-07-07 01:44:44.593 [INFO][4432] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="14b23fcdcf11dd015b4f0cd78e8f9fee5ef8df69eac1fae126965d2f05199f73" HandleID="k8s-pod-network.14b23fcdcf11dd015b4f0cd78e8f9fee5ef8df69eac1fae126965d2f05199f73" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--ns8xt-eth0" Jul 7 01:44:44.835173 containerd[1463]: 2025-07-07 01:44:44.593 [INFO][4432] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="14b23fcdcf11dd015b4f0cd78e8f9fee5ef8df69eac1fae126965d2f05199f73" HandleID="k8s-pod-network.14b23fcdcf11dd015b4f0cd78e8f9fee5ef8df69eac1fae126965d2f05199f73" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--ns8xt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001cc420), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-4-7-c803550fde.novalocal", "pod":"coredns-668d6bf9bc-ns8xt", "timestamp":"2025-07-07 01:44:44.593098959 +0000 UTC"}, Hostname:"ci-4081-3-4-7-c803550fde.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 01:44:44.835173 containerd[1463]: 2025-07-07 01:44:44.593 [INFO][4432] ipam/ipam_plugin.go 353: 
About to acquire host-wide IPAM lock. Jul 7 01:44:44.835173 containerd[1463]: 2025-07-07 01:44:44.593 [INFO][4432] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:44:44.835173 containerd[1463]: 2025-07-07 01:44:44.593 [INFO][4432] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-7-c803550fde.novalocal' Jul 7 01:44:44.835173 containerd[1463]: 2025-07-07 01:44:44.621 [INFO][4432] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.14b23fcdcf11dd015b4f0cd78e8f9fee5ef8df69eac1fae126965d2f05199f73" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:44.835173 containerd[1463]: 2025-07-07 01:44:44.646 [INFO][4432] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:44.835173 containerd[1463]: 2025-07-07 01:44:44.668 [INFO][4432] ipam/ipam.go 511: Trying affinity for 192.168.2.0/26 host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:44.835173 containerd[1463]: 2025-07-07 01:44:44.676 [INFO][4432] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.0/26 host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:44.835173 containerd[1463]: 2025-07-07 01:44:44.694 [INFO][4432] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:44.835173 containerd[1463]: 2025-07-07 01:44:44.694 [INFO][4432] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.14b23fcdcf11dd015b4f0cd78e8f9fee5ef8df69eac1fae126965d2f05199f73" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:44.835173 containerd[1463]: 2025-07-07 01:44:44.697 [INFO][4432] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.14b23fcdcf11dd015b4f0cd78e8f9fee5ef8df69eac1fae126965d2f05199f73 Jul 7 01:44:44.835173 containerd[1463]: 2025-07-07 01:44:44.733 [INFO][4432] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.14b23fcdcf11dd015b4f0cd78e8f9fee5ef8df69eac1fae126965d2f05199f73" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:44.835173 containerd[1463]: 2025-07-07 01:44:44.772 [INFO][4432] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.5/26] block=192.168.2.0/26 handle="k8s-pod-network.14b23fcdcf11dd015b4f0cd78e8f9fee5ef8df69eac1fae126965d2f05199f73" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:44.835173 containerd[1463]: 2025-07-07 01:44:44.772 [INFO][4432] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.5/26] handle="k8s-pod-network.14b23fcdcf11dd015b4f0cd78e8f9fee5ef8df69eac1fae126965d2f05199f73" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:44.835173 containerd[1463]: 2025-07-07 01:44:44.772 [INFO][4432] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
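[Editor's note] A side note on the run-netns-cni\x2d5da72778-… and run-netns-cni\x2d63f0f191-… mount units deactivated above: systemd names a mount unit after its mount point, turning "/" into "-" and \x-escaping other bytes, which is why every "-" inside the CNI netns ID surfaces as \x2d. A rough, simplified reimplementation of that path escaping follows; the authoritative rules are in systemd.unit(5) and systemd-escape, not here.

package main

import (
	"fmt"
	"strings"
)

// escapeComponent keeps [A-Za-z0-9_.] and \x-escapes everything else,
// which is how "-" inside a path component becomes "\x2d".
func escapeComponent(c string) string {
	var b strings.Builder
	for _, r := range []byte(c) {
		switch {
		case r >= 'a' && r <= 'z', r >= 'A' && r <= 'Z',
			r >= '0' && r <= '9', r == '_', r == '.':
			b.WriteByte(r)
		default:
			fmt.Fprintf(&b, `\x%02x`, r)
		}
	}
	return b.String()
}

// mountUnit derives a systemd mount unit name from a mount point.
func mountUnit(path string) string {
	parts := strings.Split(strings.Trim(path, "/"), "/")
	for i, p := range parts {
		parts[i] = escapeComponent(p)
	}
	return strings.Join(parts, "-") + ".mount"
}

func main() {
	fmt.Println(mountUnit("/var/run/netns/cni-5da72778-5d60-69cd-1e4e-a76f25d33be1"))
	// var-run-netns-cni\x2d5da72778\x2d5d60\x2d69cd\x2d1e4e\x2da76f25d33be1.mount
}

(The journal shows the unit as run-netns-…, i.e. derived from the /run/netns/… view of the same bind mount; /var/run is a symlink to /run on Flatcar.)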
Jul 7 01:44:44.835173 containerd[1463]: 2025-07-07 01:44:44.772 [INFO][4432] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.5/26] IPv6=[] ContainerID="14b23fcdcf11dd015b4f0cd78e8f9fee5ef8df69eac1fae126965d2f05199f73" HandleID="k8s-pod-network.14b23fcdcf11dd015b4f0cd78e8f9fee5ef8df69eac1fae126965d2f05199f73" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--ns8xt-eth0" Jul 7 01:44:44.839848 containerd[1463]: 2025-07-07 01:44:44.777 [INFO][4401] cni-plugin/k8s.go 418: Populated endpoint ContainerID="14b23fcdcf11dd015b4f0cd78e8f9fee5ef8df69eac1fae126965d2f05199f73" Namespace="kube-system" Pod="coredns-668d6bf9bc-ns8xt" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--ns8xt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--ns8xt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"67505d5f-ec1f-4b25-9868-2da79cc2efec", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"", Pod:"coredns-668d6bf9bc-ns8xt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73b04fe4245", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:44:44.839848 containerd[1463]: 2025-07-07 01:44:44.777 [INFO][4401] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.5/32] ContainerID="14b23fcdcf11dd015b4f0cd78e8f9fee5ef8df69eac1fae126965d2f05199f73" Namespace="kube-system" Pod="coredns-668d6bf9bc-ns8xt" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--ns8xt-eth0" Jul 7 01:44:44.839848 containerd[1463]: 2025-07-07 01:44:44.777 [INFO][4401] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73b04fe4245 ContainerID="14b23fcdcf11dd015b4f0cd78e8f9fee5ef8df69eac1fae126965d2f05199f73" Namespace="kube-system" Pod="coredns-668d6bf9bc-ns8xt" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--ns8xt-eth0" Jul 7 01:44:44.839848 containerd[1463]: 2025-07-07 01:44:44.795 [INFO][4401] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="14b23fcdcf11dd015b4f0cd78e8f9fee5ef8df69eac1fae126965d2f05199f73" Namespace="kube-system" Pod="coredns-668d6bf9bc-ns8xt" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--ns8xt-eth0" Jul 7 01:44:44.839848 containerd[1463]: 2025-07-07 01:44:44.797 [INFO][4401] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="14b23fcdcf11dd015b4f0cd78e8f9fee5ef8df69eac1fae126965d2f05199f73" Namespace="kube-system" Pod="coredns-668d6bf9bc-ns8xt" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--ns8xt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--ns8xt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"67505d5f-ec1f-4b25-9868-2da79cc2efec", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"14b23fcdcf11dd015b4f0cd78e8f9fee5ef8df69eac1fae126965d2f05199f73", Pod:"coredns-668d6bf9bc-ns8xt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73b04fe4245", MAC:"de:68:0c:7b:61:04", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:44:44.839848 containerd[1463]: 2025-07-07 01:44:44.832 [INFO][4401] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="14b23fcdcf11dd015b4f0cd78e8f9fee5ef8df69eac1fae126965d2f05199f73" Namespace="kube-system" Pod="coredns-668d6bf9bc-ns8xt" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--ns8xt-eth0" Jul 7 01:44:45.006581 systemd-networkd[1378]: cali4e6968ca8aa: Gained IPv6LL Jul 7 01:44:45.038557 kernel: bpftool[4479]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 7 01:44:45.133439 systemd-networkd[1378]: cali3190f25df87: Gained IPv6LL Jul 7 01:44:45.218461 systemd-networkd[1378]: cali0ee25119cd9: Link UP Jul 7 01:44:45.220650 systemd-networkd[1378]: cali0ee25119cd9: Gained carrier Jul 7 01:44:45.289393 containerd[1463]: 2025-07-07 01:44:45.066 [INFO][4464] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" Jul 7 01:44:45.289393 containerd[1463]: 2025-07-07 01:44:45.066 
[INFO][4464] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" iface="eth0" netns="/var/run/netns/cni-9665bbc0-52f8-8fa8-6390-4b0ed4331316" Jul 7 01:44:45.289393 containerd[1463]: 2025-07-07 01:44:45.067 [INFO][4464] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" iface="eth0" netns="/var/run/netns/cni-9665bbc0-52f8-8fa8-6390-4b0ed4331316" Jul 7 01:44:45.289393 containerd[1463]: 2025-07-07 01:44:45.067 [INFO][4464] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" iface="eth0" netns="/var/run/netns/cni-9665bbc0-52f8-8fa8-6390-4b0ed4331316" Jul 7 01:44:45.289393 containerd[1463]: 2025-07-07 01:44:45.067 [INFO][4464] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" Jul 7 01:44:45.289393 containerd[1463]: 2025-07-07 01:44:45.067 [INFO][4464] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" Jul 7 01:44:45.289393 containerd[1463]: 2025-07-07 01:44:45.128 [INFO][4481] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" HandleID="k8s-pod-network.1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--7th4f-eth0" Jul 7 01:44:45.289393 containerd[1463]: 2025-07-07 01:44:45.130 [INFO][4481] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:44:45.289393 containerd[1463]: 2025-07-07 01:44:45.204 [INFO][4481] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:44:45.289393 containerd[1463]: 2025-07-07 01:44:45.235 [WARNING][4481] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" HandleID="k8s-pod-network.1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--7th4f-eth0" Jul 7 01:44:45.289393 containerd[1463]: 2025-07-07 01:44:45.236 [INFO][4481] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" HandleID="k8s-pod-network.1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--7th4f-eth0" Jul 7 01:44:45.289393 containerd[1463]: 2025-07-07 01:44:45.280 [INFO][4481] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:44:45.289393 containerd[1463]: 2025-07-07 01:44:45.285 [INFO][4464] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" Jul 7 01:44:45.291754 containerd[1463]: 2025-07-07 01:44:44.529 [INFO][4411] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 01:44:45.291754 containerd[1463]: 2025-07-07 01:44:44.570 [INFO][4411] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--5q8k9-eth0 calico-apiserver-6fc4bb86bc- calico-apiserver 32440117-c574-4788-812a-5d3b5496a9ed 945 0 2025-07-07 01:44:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fc4bb86bc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-4-7-c803550fde.novalocal calico-apiserver-6fc4bb86bc-5q8k9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0ee25119cd9 [] [] }} ContainerID="3ad0980dcbf974893338830f5a4f4fed0a989d9ead4a9fe0411d2125be259986" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4bb86bc-5q8k9" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--5q8k9-" Jul 7 01:44:45.291754 containerd[1463]: 2025-07-07 01:44:44.570 [INFO][4411] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ad0980dcbf974893338830f5a4f4fed0a989d9ead4a9fe0411d2125be259986" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4bb86bc-5q8k9" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--5q8k9-eth0" Jul 7 01:44:45.291754 containerd[1463]: 2025-07-07 01:44:44.678 [INFO][4440] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ad0980dcbf974893338830f5a4f4fed0a989d9ead4a9fe0411d2125be259986" HandleID="k8s-pod-network.3ad0980dcbf974893338830f5a4f4fed0a989d9ead4a9fe0411d2125be259986" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--5q8k9-eth0" Jul 7 01:44:45.291754 containerd[1463]: 2025-07-07 01:44:44.681 [INFO][4440] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3ad0980dcbf974893338830f5a4f4fed0a989d9ead4a9fe0411d2125be259986" HandleID="k8s-pod-network.3ad0980dcbf974893338830f5a4f4fed0a989d9ead4a9fe0411d2125be259986" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--5q8k9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e500), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-4-7-c803550fde.novalocal", "pod":"calico-apiserver-6fc4bb86bc-5q8k9", "timestamp":"2025-07-07 01:44:44.678352478 +0000 UTC"}, Hostname:"ci-4081-3-4-7-c803550fde.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 01:44:45.291754 containerd[1463]: 2025-07-07 01:44:44.682 [INFO][4440] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:44:45.291754 containerd[1463]: 2025-07-07 01:44:44.772 [INFO][4440] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 01:44:45.291754 containerd[1463]: 2025-07-07 01:44:44.772 [INFO][4440] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-7-c803550fde.novalocal' Jul 7 01:44:45.291754 containerd[1463]: 2025-07-07 01:44:44.807 [INFO][4440] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3ad0980dcbf974893338830f5a4f4fed0a989d9ead4a9fe0411d2125be259986" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:45.291754 containerd[1463]: 2025-07-07 01:44:44.969 [INFO][4440] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:45.291754 containerd[1463]: 2025-07-07 01:44:44.992 [INFO][4440] ipam/ipam.go 511: Trying affinity for 192.168.2.0/26 host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:45.291754 containerd[1463]: 2025-07-07 01:44:45.059 [INFO][4440] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.0/26 host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:45.291754 containerd[1463]: 2025-07-07 01:44:45.139 [INFO][4440] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:45.291754 containerd[1463]: 2025-07-07 01:44:45.140 [INFO][4440] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.3ad0980dcbf974893338830f5a4f4fed0a989d9ead4a9fe0411d2125be259986" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:45.291754 containerd[1463]: 2025-07-07 01:44:45.143 [INFO][4440] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3ad0980dcbf974893338830f5a4f4fed0a989d9ead4a9fe0411d2125be259986 Jul 7 01:44:45.291754 containerd[1463]: 2025-07-07 01:44:45.163 [INFO][4440] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.3ad0980dcbf974893338830f5a4f4fed0a989d9ead4a9fe0411d2125be259986" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:45.291754 containerd[1463]: 2025-07-07 01:44:45.202 [INFO][4440] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.6/26] block=192.168.2.0/26 handle="k8s-pod-network.3ad0980dcbf974893338830f5a4f4fed0a989d9ead4a9fe0411d2125be259986" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:45.291754 containerd[1463]: 2025-07-07 01:44:45.203 [INFO][4440] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.6/26] handle="k8s-pod-network.3ad0980dcbf974893338830f5a4f4fed0a989d9ead4a9fe0411d2125be259986" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:45.291754 containerd[1463]: 2025-07-07 01:44:45.203 [INFO][4440] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
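The ipam.go sequence above is Calico's block-affinity allocator at work: confirm the host's affinity to 192.168.2.0/26, load the block, pick a free ordinal, then write the block back to claim the address (192.168.2.6 here). A stdlib-only sketch of the "pick the next free address in the block" step; the already-allocated set is invented for illustration, loosely matching the addresses this node has handed out so far.

package main

import (
	"fmt"
	"net/netip"
)

// nextFree returns the first address in block that is not yet allocated,
// mimicking the "Attempting to assign 1 addresses from block" step above.
func nextFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.2.0/26")
	allocated := map[netip.Addr]bool{}
	for i := 0; i < 6; i++ { // pretend .0-.5 are taken, as the log implies
		allocated[netip.MustParseAddr(fmt.Sprintf("192.168.2.%d", i))] = true
	}
	if ip, ok := nextFree(block, allocated); ok {
		fmt.Println("claimed", ip) // prints 192.168.2.6, matching the log
	}
}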
Jul 7 01:44:45.291754 containerd[1463]: 2025-07-07 01:44:45.204 [INFO][4440] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.6/26] IPv6=[] ContainerID="3ad0980dcbf974893338830f5a4f4fed0a989d9ead4a9fe0411d2125be259986" HandleID="k8s-pod-network.3ad0980dcbf974893338830f5a4f4fed0a989d9ead4a9fe0411d2125be259986" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--5q8k9-eth0" Jul 7 01:44:45.294935 containerd[1463]: 2025-07-07 01:44:45.210 [INFO][4411] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ad0980dcbf974893338830f5a4f4fed0a989d9ead4a9fe0411d2125be259986" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4bb86bc-5q8k9" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--5q8k9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--5q8k9-eth0", GenerateName:"calico-apiserver-6fc4bb86bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"32440117-c574-4788-812a-5d3b5496a9ed", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 44, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fc4bb86bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"", Pod:"calico-apiserver-6fc4bb86bc-5q8k9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ee25119cd9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:44:45.294935 containerd[1463]: 2025-07-07 01:44:45.211 [INFO][4411] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.6/32] ContainerID="3ad0980dcbf974893338830f5a4f4fed0a989d9ead4a9fe0411d2125be259986" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4bb86bc-5q8k9" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--5q8k9-eth0" Jul 7 01:44:45.294935 containerd[1463]: 2025-07-07 01:44:45.211 [INFO][4411] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ee25119cd9 ContainerID="3ad0980dcbf974893338830f5a4f4fed0a989d9ead4a9fe0411d2125be259986" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4bb86bc-5q8k9" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--5q8k9-eth0" Jul 7 01:44:45.294935 containerd[1463]: 2025-07-07 01:44:45.219 [INFO][4411] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ad0980dcbf974893338830f5a4f4fed0a989d9ead4a9fe0411d2125be259986" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4bb86bc-5q8k9" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--5q8k9-eth0" Jul 7 
01:44:45.294935 containerd[1463]: 2025-07-07 01:44:45.219 [INFO][4411] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3ad0980dcbf974893338830f5a4f4fed0a989d9ead4a9fe0411d2125be259986" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4bb86bc-5q8k9" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--5q8k9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--5q8k9-eth0", GenerateName:"calico-apiserver-6fc4bb86bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"32440117-c574-4788-812a-5d3b5496a9ed", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 44, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fc4bb86bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"3ad0980dcbf974893338830f5a4f4fed0a989d9ead4a9fe0411d2125be259986", Pod:"calico-apiserver-6fc4bb86bc-5q8k9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ee25119cd9", MAC:"0e:4c:34:08:e7:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:44:45.294935 containerd[1463]: 2025-07-07 01:44:45.283 [INFO][4411] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ad0980dcbf974893338830f5a4f4fed0a989d9ead4a9fe0411d2125be259986" Namespace="calico-apiserver" Pod="calico-apiserver-6fc4bb86bc-5q8k9" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--5q8k9-eth0" Jul 7 01:44:45.296619 systemd[1]: run-netns-cni\x2d9665bbc0\x2d52f8\x2d8fa8\x2d6390\x2d4b0ed4331316.mount: Deactivated successfully. Jul 7 01:44:45.304261 containerd[1463]: time="2025-07-07T01:44:45.297782320Z" level=info msg="TearDown network for sandbox \"1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb\" successfully" Jul 7 01:44:45.304261 containerd[1463]: time="2025-07-07T01:44:45.297830542Z" level=info msg="StopPodSandbox for \"1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb\" returns successfully" Jul 7 01:44:45.304261 containerd[1463]: time="2025-07-07T01:44:45.299276834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7th4f,Uid:f8bea45c-1889-4c85-82bc-48df27c16ca2,Namespace:kube-system,Attempt:1,}" Jul 7 01:44:45.484209 containerd[1463]: time="2025-07-07T01:44:45.482165922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:44:45.484209 containerd[1463]: time="2025-07-07T01:44:45.482244270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:44:45.484209 containerd[1463]: time="2025-07-07T01:44:45.482269889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:44:45.484209 containerd[1463]: time="2025-07-07T01:44:45.482412598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:44:45.530682 containerd[1463]: time="2025-07-07T01:44:45.529809584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:44:45.530682 containerd[1463]: time="2025-07-07T01:44:45.530531498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:44:45.530682 containerd[1463]: time="2025-07-07T01:44:45.530592514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:44:45.531360 containerd[1463]: time="2025-07-07T01:44:45.530951392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:44:45.577564 systemd[1]: Started cri-containerd-14b23fcdcf11dd015b4f0cd78e8f9fee5ef8df69eac1fae126965d2f05199f73.scope - libcontainer container 14b23fcdcf11dd015b4f0cd78e8f9fee5ef8df69eac1fae126965d2f05199f73. Jul 7 01:44:45.589617 systemd[1]: Started cri-containerd-3ad0980dcbf974893338830f5a4f4fed0a989d9ead4a9fe0411d2125be259986.scope - libcontainer container 3ad0980dcbf974893338830f5a4f4fed0a989d9ead4a9fe0411d2125be259986. Jul 7 01:44:45.709766 systemd-networkd[1378]: calied971370f30: Gained IPv6LL Jul 7 01:44:45.759005 containerd[1463]: time="2025-07-07T01:44:45.758922872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ns8xt,Uid:67505d5f-ec1f-4b25-9868-2da79cc2efec,Namespace:kube-system,Attempt:1,} returns sandbox id \"14b23fcdcf11dd015b4f0cd78e8f9fee5ef8df69eac1fae126965d2f05199f73\"" Jul 7 01:44:45.781479 containerd[1463]: time="2025-07-07T01:44:45.781420617Z" level=info msg="CreateContainer within sandbox \"14b23fcdcf11dd015b4f0cd78e8f9fee5ef8df69eac1fae126965d2f05199f73\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 01:44:45.818264 containerd[1463]: time="2025-07-07T01:44:45.818045060Z" level=info msg="CreateContainer within sandbox \"14b23fcdcf11dd015b4f0cd78e8f9fee5ef8df69eac1fae126965d2f05199f73\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dae67f784f9b88c3878c2328c87078c1dd4b1c2254141b10b4343af6a3fd7d7e\"" Jul 7 01:44:45.825319 containerd[1463]: time="2025-07-07T01:44:45.824269049Z" level=info msg="StartContainer for \"dae67f784f9b88c3878c2328c87078c1dd4b1c2254141b10b4343af6a3fd7d7e\"" Jul 7 01:44:45.915545 systemd[1]: Started cri-containerd-dae67f784f9b88c3878c2328c87078c1dd4b1c2254141b10b4343af6a3fd7d7e.scope - libcontainer container dae67f784f9b88c3878c2328c87078c1dd4b1c2254141b10b4343af6a3fd7d7e. 
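The RunPodSandbox, CreateContainer, and StartContainer messages in this stretch are containerd acting on kubelet's CRI calls, with systemd tracking each shim as a cri-containerd-*.scope unit. A hedged sketch of issuing the first of those calls directly over the CRI gRPC API; the socket path and the reduced sandbox config are assumptions, and on a real node the kubelet, not an external client, drives this.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Conventional containerd CRI endpoint; not taken from this log.
	conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtime.NewRuntimeServiceClient(conn)

	// Mirrors the RunPodSandbox message for coredns-668d6bf9bc-7th4f above,
	// reduced to the metadata fields the log actually shows.
	resp, err := rt.RunPodSandbox(ctx, &runtime.RunPodSandboxRequest{
		Config: &runtime.PodSandboxConfig{
			Metadata: &runtime.PodSandboxMetadata{
				Name:      "coredns-668d6bf9bc-7th4f",
				Uid:       "f8bea45c-1889-4c85-82bc-48df27c16ca2",
				Namespace: "kube-system",
				Attempt:   1,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId)
}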
Jul 7 01:44:45.963936 systemd-networkd[1378]: vxlan.calico: Link UP Jul 7 01:44:45.963959 systemd-networkd[1378]: vxlan.calico: Gained carrier Jul 7 01:44:46.076355 systemd-networkd[1378]: cali784ebcbf4bc: Link UP Jul 7 01:44:46.077698 systemd-networkd[1378]: cali784ebcbf4bc: Gained carrier Jul 7 01:44:46.087326 containerd[1463]: time="2025-07-07T01:44:46.084692953Z" level=info msg="StartContainer for \"dae67f784f9b88c3878c2328c87078c1dd4b1c2254141b10b4343af6a3fd7d7e\" returns successfully" Jul 7 01:44:46.136446 containerd[1463]: 2025-07-07 01:44:45.680 [INFO][4558] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--7th4f-eth0 coredns-668d6bf9bc- kube-system f8bea45c-1889-4c85-82bc-48df27c16ca2 953 0 2025-07-07 01:43:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-4-7-c803550fde.novalocal coredns-668d6bf9bc-7th4f eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali784ebcbf4bc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3a7bb66450a146697fe074f962b09c2a1ca658b038811f70f46753c1bdfd1783" Namespace="kube-system" Pod="coredns-668d6bf9bc-7th4f" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--7th4f-" Jul 7 01:44:46.136446 containerd[1463]: 2025-07-07 01:44:45.680 [INFO][4558] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a7bb66450a146697fe074f962b09c2a1ca658b038811f70f46753c1bdfd1783" Namespace="kube-system" Pod="coredns-668d6bf9bc-7th4f" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--7th4f-eth0" Jul 7 01:44:46.136446 containerd[1463]: 2025-07-07 01:44:45.858 [INFO][4608] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a7bb66450a146697fe074f962b09c2a1ca658b038811f70f46753c1bdfd1783" HandleID="k8s-pod-network.3a7bb66450a146697fe074f962b09c2a1ca658b038811f70f46753c1bdfd1783" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--7th4f-eth0" Jul 7 01:44:46.136446 containerd[1463]: 2025-07-07 01:44:45.860 [INFO][4608] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3a7bb66450a146697fe074f962b09c2a1ca658b038811f70f46753c1bdfd1783" HandleID="k8s-pod-network.3a7bb66450a146697fe074f962b09c2a1ca658b038811f70f46753c1bdfd1783" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--7th4f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000259860), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-4-7-c803550fde.novalocal", "pod":"coredns-668d6bf9bc-7th4f", "timestamp":"2025-07-07 01:44:45.858113891 +0000 UTC"}, Hostname:"ci-4081-3-4-7-c803550fde.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 01:44:46.136446 containerd[1463]: 2025-07-07 01:44:45.860 [INFO][4608] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:44:46.136446 containerd[1463]: 2025-07-07 01:44:45.861 [INFO][4608] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
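"Link UP", "Gained carrier", and later "Gained IPv6LL" are systemd-networkd observing the vxlan.calico device and the host-side cali* veths come up and acquire fe80:: link-local addresses. A small netlink sketch that reads the same state back for one of the interfaces named in the log; it has to run on that host to return anything, and the interface name is copied from the log rather than discovered.

package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	link, err := netlink.LinkByName("cali784ebcbf4bc")
	if err != nil {
		log.Fatal(err)
	}
	addrs, err := netlink.AddrList(link, netlink.FAMILY_V6)
	if err != nil {
		log.Fatal(err)
	}
	for _, a := range addrs {
		// "Gained IPv6LL" corresponds to a link-local (fe80::/10) address
		// appearing on the interface.
		if a.IP.IsLinkLocalUnicast() {
			fmt.Printf("%s has IPv6LL %s, oper state %s\n",
				link.Attrs().Name, a.IP, link.Attrs().OperState)
		}
	}
}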
Jul 7 01:44:46.136446 containerd[1463]: 2025-07-07 01:44:45.862 [INFO][4608] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-7-c803550fde.novalocal' Jul 7 01:44:46.136446 containerd[1463]: 2025-07-07 01:44:45.888 [INFO][4608] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3a7bb66450a146697fe074f962b09c2a1ca658b038811f70f46753c1bdfd1783" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:46.136446 containerd[1463]: 2025-07-07 01:44:45.912 [INFO][4608] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:46.136446 containerd[1463]: 2025-07-07 01:44:45.930 [INFO][4608] ipam/ipam.go 511: Trying affinity for 192.168.2.0/26 host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:46.136446 containerd[1463]: 2025-07-07 01:44:45.936 [INFO][4608] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.0/26 host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:46.136446 containerd[1463]: 2025-07-07 01:44:45.948 [INFO][4608] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:46.136446 containerd[1463]: 2025-07-07 01:44:45.948 [INFO][4608] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.3a7bb66450a146697fe074f962b09c2a1ca658b038811f70f46753c1bdfd1783" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:46.136446 containerd[1463]: 2025-07-07 01:44:45.965 [INFO][4608] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3a7bb66450a146697fe074f962b09c2a1ca658b038811f70f46753c1bdfd1783 Jul 7 01:44:46.136446 containerd[1463]: 2025-07-07 01:44:46.001 [INFO][4608] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.3a7bb66450a146697fe074f962b09c2a1ca658b038811f70f46753c1bdfd1783" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:46.136446 containerd[1463]: 2025-07-07 01:44:46.039 [INFO][4608] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.7/26] block=192.168.2.0/26 handle="k8s-pod-network.3a7bb66450a146697fe074f962b09c2a1ca658b038811f70f46753c1bdfd1783" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:46.136446 containerd[1463]: 2025-07-07 01:44:46.039 [INFO][4608] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.7/26] handle="k8s-pod-network.3a7bb66450a146697fe074f962b09c2a1ca658b038811f70f46753c1bdfd1783" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:46.136446 containerd[1463]: 2025-07-07 01:44:46.040 [INFO][4608] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
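Two naming conventions are visible in the IPAM lines above: HandleID is "k8s-pod-network." plus the container ID, and the Workload key doubles every "-" so that single dashes can act as separators between node, orchestrator, pod, and interface. The doubling rule is inferred from the logged strings themselves, not from Calico documentation; a stdlib sketch that reproduces both:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// HandleID, as seen in the ipam_plugin.go lines above.
	containerID := "3a7bb66450a146697fe074f962b09c2a1ca658b038811f70f46753c1bdfd1783"
	fmt.Println("k8s-pod-network." + containerID)

	// Workload key: escape "-" as "--", then join with single dashes.
	node := "ci-4081-3-4-7-c803550fde.novalocal"
	pod := "coredns-668d6bf9bc-7th4f"
	esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
	fmt.Printf("%s-k8s-%s-eth0\n", esc(node), esc(pod))
	// prints: ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--7th4f-eth0
}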
Jul 7 01:44:46.136446 containerd[1463]: 2025-07-07 01:44:46.041 [INFO][4608] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.7/26] IPv6=[] ContainerID="3a7bb66450a146697fe074f962b09c2a1ca658b038811f70f46753c1bdfd1783" HandleID="k8s-pod-network.3a7bb66450a146697fe074f962b09c2a1ca658b038811f70f46753c1bdfd1783" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--7th4f-eth0" Jul 7 01:44:46.137363 containerd[1463]: 2025-07-07 01:44:46.058 [INFO][4558] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a7bb66450a146697fe074f962b09c2a1ca658b038811f70f46753c1bdfd1783" Namespace="kube-system" Pod="coredns-668d6bf9bc-7th4f" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--7th4f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--7th4f-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f8bea45c-1889-4c85-82bc-48df27c16ca2", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"", Pod:"coredns-668d6bf9bc-7th4f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali784ebcbf4bc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:44:46.137363 containerd[1463]: 2025-07-07 01:44:46.060 [INFO][4558] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.7/32] ContainerID="3a7bb66450a146697fe074f962b09c2a1ca658b038811f70f46753c1bdfd1783" Namespace="kube-system" Pod="coredns-668d6bf9bc-7th4f" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--7th4f-eth0" Jul 7 01:44:46.137363 containerd[1463]: 2025-07-07 01:44:46.063 [INFO][4558] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali784ebcbf4bc ContainerID="3a7bb66450a146697fe074f962b09c2a1ca658b038811f70f46753c1bdfd1783" Namespace="kube-system" Pod="coredns-668d6bf9bc-7th4f" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--7th4f-eth0" Jul 7 01:44:46.137363 containerd[1463]: 2025-07-07 01:44:46.073 [INFO][4558] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="3a7bb66450a146697fe074f962b09c2a1ca658b038811f70f46753c1bdfd1783" Namespace="kube-system" Pod="coredns-668d6bf9bc-7th4f" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--7th4f-eth0" Jul 7 01:44:46.137363 containerd[1463]: 2025-07-07 01:44:46.080 [INFO][4558] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3a7bb66450a146697fe074f962b09c2a1ca658b038811f70f46753c1bdfd1783" Namespace="kube-system" Pod="coredns-668d6bf9bc-7th4f" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--7th4f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--7th4f-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f8bea45c-1889-4c85-82bc-48df27c16ca2", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"3a7bb66450a146697fe074f962b09c2a1ca658b038811f70f46753c1bdfd1783", Pod:"coredns-668d6bf9bc-7th4f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali784ebcbf4bc", MAC:"4a:4f:86:89:46:3a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:44:46.137363 containerd[1463]: 2025-07-07 01:44:46.127 [INFO][4558] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a7bb66450a146697fe074f962b09c2a1ca658b038811f70f46753c1bdfd1783" Namespace="kube-system" Pod="coredns-668d6bf9bc-7th4f" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--7th4f-eth0" Jul 7 01:44:46.148369 containerd[1463]: time="2025-07-07T01:44:46.147688416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fc4bb86bc-5q8k9,Uid:32440117-c574-4788-812a-5d3b5496a9ed,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3ad0980dcbf974893338830f5a4f4fed0a989d9ead4a9fe0411d2125be259986\"" Jul 7 01:44:46.216161 containerd[1463]: time="2025-07-07T01:44:46.215765369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:44:46.216161 containerd[1463]: time="2025-07-07T01:44:46.215869264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:44:46.216161 containerd[1463]: time="2025-07-07T01:44:46.215906194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:44:46.216161 containerd[1463]: time="2025-07-07T01:44:46.216021322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:44:46.274193 systemd[1]: Started cri-containerd-3a7bb66450a146697fe074f962b09c2a1ca658b038811f70f46753c1bdfd1783.scope - libcontainer container 3a7bb66450a146697fe074f962b09c2a1ca658b038811f70f46753c1bdfd1783. Jul 7 01:44:46.364839 kubelet[2614]: I0707 01:44:46.364620 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ns8xt" podStartSLOduration=47.364537848 podStartE2EDuration="47.364537848s" podCreationTimestamp="2025-07-07 01:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 01:44:46.336245927 +0000 UTC m=+52.673237181" watchObservedRunningTime="2025-07-07 01:44:46.364537848 +0000 UTC m=+52.701529092" Jul 7 01:44:46.388163 containerd[1463]: time="2025-07-07T01:44:46.388113500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7th4f,Uid:f8bea45c-1889-4c85-82bc-48df27c16ca2,Namespace:kube-system,Attempt:1,} returns sandbox id \"3a7bb66450a146697fe074f962b09c2a1ca658b038811f70f46753c1bdfd1783\"" Jul 7 01:44:46.396219 containerd[1463]: time="2025-07-07T01:44:46.395847158Z" level=info msg="CreateContainer within sandbox \"3a7bb66450a146697fe074f962b09c2a1ca658b038811f70f46753c1bdfd1783\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 01:44:46.461403 containerd[1463]: time="2025-07-07T01:44:46.460872747Z" level=info msg="CreateContainer within sandbox \"3a7bb66450a146697fe074f962b09c2a1ca658b038811f70f46753c1bdfd1783\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"edb2dd7601c974e470cf5afcfa711ad1827b2dda24d6c890286fc467d9d7487a\"" Jul 7 01:44:46.465690 containerd[1463]: time="2025-07-07T01:44:46.464587785Z" level=info msg="StartContainer for \"edb2dd7601c974e470cf5afcfa711ad1827b2dda24d6c890286fc467d9d7487a\"" Jul 7 01:44:46.478155 systemd-networkd[1378]: cali73b04fe4245: Gained IPv6LL Jul 7 01:44:46.522473 systemd[1]: Started cri-containerd-edb2dd7601c974e470cf5afcfa711ad1827b2dda24d6c890286fc467d9d7487a.scope - libcontainer container edb2dd7601c974e470cf5afcfa711ad1827b2dda24d6c890286fc467d9d7487a. 
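The kubelet pod_startup_latency_tracker line above reports podStartSLOduration as observedRunningTime minus podCreationTimestamp; the zeroed firstStartedPulling/lastFinishedPulling timestamps indicate no image-pull time was subtracted here. A stdlib check of that arithmetic, with both timestamps copied from the coredns-668d6bf9bc-ns8xt record:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout chosen to match the timestamps as printed in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-07-07 01:43:59 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-07-07 01:44:46.364537848 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 47.364537848, the podStartSLOduration reported above.
	fmt.Println(running.Sub(created).Seconds())
}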
Jul 7 01:44:46.600134 containerd[1463]: time="2025-07-07T01:44:46.599988225Z" level=info msg="StartContainer for \"edb2dd7601c974e470cf5afcfa711ad1827b2dda24d6c890286fc467d9d7487a\" returns successfully" Jul 7 01:44:46.698786 containerd[1463]: time="2025-07-07T01:44:46.698610876Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:44:46.700498 containerd[1463]: time="2025-07-07T01:44:46.700459859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 7 01:44:46.701847 containerd[1463]: time="2025-07-07T01:44:46.701773840Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:44:46.706229 containerd[1463]: time="2025-07-07T01:44:46.705902349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:44:46.706950 containerd[1463]: time="2025-07-07T01:44:46.706905523Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 3.059157564s" Jul 7 01:44:46.707044 containerd[1463]: time="2025-07-07T01:44:46.706953064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 7 01:44:46.710701 containerd[1463]: time="2025-07-07T01:44:46.710654295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 01:44:46.714225 containerd[1463]: time="2025-07-07T01:44:46.714163003Z" level=info msg="CreateContainer within sandbox \"25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 01:44:46.742752 containerd[1463]: time="2025-07-07T01:44:46.742312003Z" level=info msg="CreateContainer within sandbox \"25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"90558c13c288a65af455f6b1d5e1aede5bd7d5318aa8952f680083c54d32236e\"" Jul 7 01:44:46.746315 containerd[1463]: time="2025-07-07T01:44:46.744839488Z" level=info msg="StartContainer for \"90558c13c288a65af455f6b1d5e1aede5bd7d5318aa8952f680083c54d32236e\"" Jul 7 01:44:46.797511 systemd-networkd[1378]: cali0ee25119cd9: Gained IPv6LL Jul 7 01:44:46.810831 systemd[1]: Started cri-containerd-90558c13c288a65af455f6b1d5e1aede5bd7d5318aa8952f680083c54d32236e.scope - libcontainer container 90558c13c288a65af455f6b1d5e1aede5bd7d5318aa8952f680083c54d32236e. 
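The ImageCreate and "Pulled image ... in 3.059157564s" lines above are containerd resolving the whisker image to its digest and unpacking it. A hedged sketch of the same pull through containerd's Go client; the socket path and the "k8s.io" namespace (where CRI-managed images conventionally live) are assumptions, not stated in the log.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Assumed namespace for CRI-managed images.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Same reference as the PullImage lines above; WithPullUnpack also
	// unpacks the layers, as the snapshot activity in this log suggests.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/whisker:v3.30.2",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(img.Name(), img.Target().Digest)
}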
Jul 7 01:44:46.915486 containerd[1463]: time="2025-07-07T01:44:46.915396777Z" level=info msg="StartContainer for \"90558c13c288a65af455f6b1d5e1aede5bd7d5318aa8952f680083c54d32236e\" returns successfully" Jul 7 01:44:47.335166 kubelet[2614]: I0707 01:44:47.334238 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7th4f" podStartSLOduration=48.334211301 podStartE2EDuration="48.334211301s" podCreationTimestamp="2025-07-07 01:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 01:44:47.33300316 +0000 UTC m=+53.669994404" watchObservedRunningTime="2025-07-07 01:44:47.334211301 +0000 UTC m=+53.671202535" Jul 7 01:44:47.565537 systemd-networkd[1378]: vxlan.calico: Gained IPv6LL Jul 7 01:44:48.014119 systemd-networkd[1378]: cali784ebcbf4bc: Gained IPv6LL Jul 7 01:44:52.245537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3123008242.mount: Deactivated successfully. Jul 7 01:44:53.078331 containerd[1463]: time="2025-07-07T01:44:53.078251053Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:44:53.081547 containerd[1463]: time="2025-07-07T01:44:53.081451912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 7 01:44:53.082921 containerd[1463]: time="2025-07-07T01:44:53.082893401Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:44:53.086717 containerd[1463]: time="2025-07-07T01:44:53.086684904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:44:53.088592 containerd[1463]: time="2025-07-07T01:44:53.087884117Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 6.376072426s" Jul 7 01:44:53.088592 containerd[1463]: time="2025-07-07T01:44:53.087951926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 7 01:44:53.091491 containerd[1463]: time="2025-07-07T01:44:53.090503299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 01:44:53.098922 containerd[1463]: time="2025-07-07T01:44:53.098804521Z" level=info msg="CreateContainer within sandbox \"ac9c6000a283a0e217a4ae5008088c868f4c996888ec9e9bffb23fade716cba2\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 01:44:53.141107 containerd[1463]: time="2025-07-07T01:44:53.141053904Z" level=info msg="CreateContainer within sandbox \"ac9c6000a283a0e217a4ae5008088c868f4c996888ec9e9bffb23fade716cba2\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"67385d580f5d1d14fdf19272291e7bfbc0820d5709e82b215835a6dcd8aad500\"" Jul 7 01:44:53.142267 containerd[1463]: time="2025-07-07T01:44:53.142236566Z" level=info msg="StartContainer for 
\"67385d580f5d1d14fdf19272291e7bfbc0820d5709e82b215835a6dcd8aad500\"" Jul 7 01:44:53.221521 systemd[1]: Started cri-containerd-67385d580f5d1d14fdf19272291e7bfbc0820d5709e82b215835a6dcd8aad500.scope - libcontainer container 67385d580f5d1d14fdf19272291e7bfbc0820d5709e82b215835a6dcd8aad500. Jul 7 01:44:53.292316 containerd[1463]: time="2025-07-07T01:44:53.292164971Z" level=info msg="StartContainer for \"67385d580f5d1d14fdf19272291e7bfbc0820d5709e82b215835a6dcd8aad500\" returns successfully" Jul 7 01:44:53.418529 kubelet[2614]: I0707 01:44:53.418096 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-q2hdx" podStartSLOduration=32.106084858 podStartE2EDuration="41.418073446s" podCreationTimestamp="2025-07-07 01:44:12 +0000 UTC" firstStartedPulling="2025-07-07 01:44:43.777961898 +0000 UTC m=+50.114953132" lastFinishedPulling="2025-07-07 01:44:53.089950486 +0000 UTC m=+59.426941720" observedRunningTime="2025-07-07 01:44:53.417843141 +0000 UTC m=+59.754834406" watchObservedRunningTime="2025-07-07 01:44:53.418073446 +0000 UTC m=+59.755064691" Jul 7 01:44:53.803382 containerd[1463]: time="2025-07-07T01:44:53.803043542Z" level=info msg="StopPodSandbox for \"b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460\"" Jul 7 01:44:53.983466 containerd[1463]: 2025-07-07 01:44:53.909 [WARNING][4936] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--ns8xt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"67505d5f-ec1f-4b25-9868-2da79cc2efec", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"14b23fcdcf11dd015b4f0cd78e8f9fee5ef8df69eac1fae126965d2f05199f73", Pod:"coredns-668d6bf9bc-ns8xt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73b04fe4245", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:44:53.983466 containerd[1463]: 2025-07-07 01:44:53.909 [INFO][4936] cni-plugin/k8s.go 640: Cleaning up 
netns ContainerID="b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" Jul 7 01:44:53.983466 containerd[1463]: 2025-07-07 01:44:53.909 [INFO][4936] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" iface="eth0" netns="" Jul 7 01:44:53.983466 containerd[1463]: 2025-07-07 01:44:53.909 [INFO][4936] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" Jul 7 01:44:53.983466 containerd[1463]: 2025-07-07 01:44:53.909 [INFO][4936] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" Jul 7 01:44:53.983466 containerd[1463]: 2025-07-07 01:44:53.964 [INFO][4945] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" HandleID="k8s-pod-network.b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--ns8xt-eth0" Jul 7 01:44:53.983466 containerd[1463]: 2025-07-07 01:44:53.964 [INFO][4945] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:44:53.983466 containerd[1463]: 2025-07-07 01:44:53.965 [INFO][4945] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:44:53.983466 containerd[1463]: 2025-07-07 01:44:53.977 [WARNING][4945] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" HandleID="k8s-pod-network.b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--ns8xt-eth0" Jul 7 01:44:53.983466 containerd[1463]: 2025-07-07 01:44:53.977 [INFO][4945] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" HandleID="k8s-pod-network.b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--ns8xt-eth0" Jul 7 01:44:53.983466 containerd[1463]: 2025-07-07 01:44:53.980 [INFO][4945] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:44:53.983466 containerd[1463]: 2025-07-07 01:44:53.981 [INFO][4936] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" Jul 7 01:44:53.984050 containerd[1463]: time="2025-07-07T01:44:53.983547334Z" level=info msg="TearDown network for sandbox \"b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460\" successfully" Jul 7 01:44:53.984050 containerd[1463]: time="2025-07-07T01:44:53.983602228Z" level=info msg="StopPodSandbox for \"b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460\" returns successfully" Jul 7 01:44:53.986231 containerd[1463]: time="2025-07-07T01:44:53.985821014Z" level=info msg="RemovePodSandbox for \"b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460\"" Jul 7 01:44:53.986231 containerd[1463]: time="2025-07-07T01:44:53.985871749Z" level=info msg="Forcibly stopping sandbox \"b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460\"" Jul 7 01:44:54.127774 containerd[1463]: 2025-07-07 01:44:54.058 [WARNING][4959] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--ns8xt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"67505d5f-ec1f-4b25-9868-2da79cc2efec", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"14b23fcdcf11dd015b4f0cd78e8f9fee5ef8df69eac1fae126965d2f05199f73", Pod:"coredns-668d6bf9bc-ns8xt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73b04fe4245", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:44:54.127774 containerd[1463]: 2025-07-07 01:44:54.059 [INFO][4959] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" Jul 7 01:44:54.127774 containerd[1463]: 2025-07-07 01:44:54.059 [INFO][4959] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" iface="eth0" netns="" Jul 7 01:44:54.127774 containerd[1463]: 2025-07-07 01:44:54.059 [INFO][4959] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" Jul 7 01:44:54.127774 containerd[1463]: 2025-07-07 01:44:54.059 [INFO][4959] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" Jul 7 01:44:54.127774 containerd[1463]: 2025-07-07 01:44:54.090 [INFO][4966] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" HandleID="k8s-pod-network.b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--ns8xt-eth0" Jul 7 01:44:54.127774 containerd[1463]: 2025-07-07 01:44:54.090 [INFO][4966] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:44:54.127774 containerd[1463]: 2025-07-07 01:44:54.090 [INFO][4966] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:44:54.127774 containerd[1463]: 2025-07-07 01:44:54.116 [WARNING][4966] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" HandleID="k8s-pod-network.b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--ns8xt-eth0" Jul 7 01:44:54.127774 containerd[1463]: 2025-07-07 01:44:54.116 [INFO][4966] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" HandleID="k8s-pod-network.b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--ns8xt-eth0" Jul 7 01:44:54.127774 containerd[1463]: 2025-07-07 01:44:54.120 [INFO][4966] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:44:54.127774 containerd[1463]: 2025-07-07 01:44:54.124 [INFO][4959] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460" Jul 7 01:44:54.129477 containerd[1463]: time="2025-07-07T01:44:54.129406676Z" level=info msg="TearDown network for sandbox \"b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460\" successfully" Jul 7 01:44:54.146188 containerd[1463]: time="2025-07-07T01:44:54.146096141Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 01:44:54.146268 containerd[1463]: time="2025-07-07T01:44:54.146195818Z" level=info msg="RemovePodSandbox \"b62730e72f84498be30c27b4f6db88e9eff1347985ef2c11b89422b371cff460\" returns successfully" Jul 7 01:44:54.147089 containerd[1463]: time="2025-07-07T01:44:54.147050821Z" level=info msg="StopPodSandbox for \"9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16\"" Jul 7 01:44:54.236575 containerd[1463]: 2025-07-07 01:44:54.191 [WARNING][4980] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--jljn9-eth0", GenerateName:"calico-apiserver-6fc4bb86bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"687776ff-b90b-4baa-af46-4023f495fb97", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 44, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fc4bb86bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"284cacccb3cb76dba8b10fa3eeaf04690ab8eda6a90716579a3ce9b8f55fdc90", Pod:"calico-apiserver-6fc4bb86bc-jljn9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calied971370f30", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:44:54.236575 containerd[1463]: 2025-07-07 01:44:54.192 [INFO][4980] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" Jul 7 01:44:54.236575 containerd[1463]: 2025-07-07 01:44:54.192 [INFO][4980] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" iface="eth0" netns="" Jul 7 01:44:54.236575 containerd[1463]: 2025-07-07 01:44:54.192 [INFO][4980] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" Jul 7 01:44:54.236575 containerd[1463]: 2025-07-07 01:44:54.192 [INFO][4980] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" Jul 7 01:44:54.236575 containerd[1463]: 2025-07-07 01:44:54.222 [INFO][4987] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" HandleID="k8s-pod-network.9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--jljn9-eth0" Jul 7 01:44:54.236575 containerd[1463]: 2025-07-07 01:44:54.222 [INFO][4987] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:44:54.236575 containerd[1463]: 2025-07-07 01:44:54.222 [INFO][4987] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:44:54.236575 containerd[1463]: 2025-07-07 01:44:54.231 [WARNING][4987] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" HandleID="k8s-pod-network.9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--jljn9-eth0" Jul 7 01:44:54.236575 containerd[1463]: 2025-07-07 01:44:54.231 [INFO][4987] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" HandleID="k8s-pod-network.9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--jljn9-eth0" Jul 7 01:44:54.236575 containerd[1463]: 2025-07-07 01:44:54.233 [INFO][4987] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:44:54.236575 containerd[1463]: 2025-07-07 01:44:54.234 [INFO][4980] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" Jul 7 01:44:54.236575 containerd[1463]: time="2025-07-07T01:44:54.236381650Z" level=info msg="TearDown network for sandbox \"9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16\" successfully" Jul 7 01:44:54.236575 containerd[1463]: time="2025-07-07T01:44:54.236415433Z" level=info msg="StopPodSandbox for \"9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16\" returns successfully" Jul 7 01:44:54.237175 containerd[1463]: time="2025-07-07T01:44:54.237052305Z" level=info msg="RemovePodSandbox for \"9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16\"" Jul 7 01:44:54.237175 containerd[1463]: time="2025-07-07T01:44:54.237092210Z" level=info msg="Forcibly stopping sandbox \"9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16\"" Jul 7 01:44:54.363610 containerd[1463]: 2025-07-07 01:44:54.278 [WARNING][5001] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--jljn9-eth0", GenerateName:"calico-apiserver-6fc4bb86bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"687776ff-b90b-4baa-af46-4023f495fb97", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 44, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fc4bb86bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"284cacccb3cb76dba8b10fa3eeaf04690ab8eda6a90716579a3ce9b8f55fdc90", Pod:"calico-apiserver-6fc4bb86bc-jljn9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calied971370f30", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:44:54.363610 containerd[1463]: 2025-07-07 01:44:54.279 [INFO][5001] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" Jul 7 01:44:54.363610 containerd[1463]: 2025-07-07 01:44:54.279 [INFO][5001] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" iface="eth0" netns="" Jul 7 01:44:54.363610 containerd[1463]: 2025-07-07 01:44:54.279 [INFO][5001] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" Jul 7 01:44:54.363610 containerd[1463]: 2025-07-07 01:44:54.279 [INFO][5001] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" Jul 7 01:44:54.363610 containerd[1463]: 2025-07-07 01:44:54.332 [INFO][5008] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" HandleID="k8s-pod-network.9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--jljn9-eth0" Jul 7 01:44:54.363610 containerd[1463]: 2025-07-07 01:44:54.332 [INFO][5008] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:44:54.363610 containerd[1463]: 2025-07-07 01:44:54.334 [INFO][5008] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:44:54.363610 containerd[1463]: 2025-07-07 01:44:54.350 [WARNING][5008] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" HandleID="k8s-pod-network.9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--jljn9-eth0" Jul 7 01:44:54.363610 containerd[1463]: 2025-07-07 01:44:54.350 [INFO][5008] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" HandleID="k8s-pod-network.9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--jljn9-eth0" Jul 7 01:44:54.363610 containerd[1463]: 2025-07-07 01:44:54.357 [INFO][5008] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:44:54.363610 containerd[1463]: 2025-07-07 01:44:54.359 [INFO][5001] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16" Jul 7 01:44:54.363610 containerd[1463]: time="2025-07-07T01:44:54.363184989Z" level=info msg="TearDown network for sandbox \"9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16\" successfully" Jul 7 01:44:54.368537 containerd[1463]: time="2025-07-07T01:44:54.368470219Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 01:44:54.368616 containerd[1463]: time="2025-07-07T01:44:54.368581810Z" level=info msg="RemovePodSandbox \"9f999f295c985d42b1204dd59f3f573b83f56dc1c0290f18cfac3c73ab16cf16\" returns successfully" Jul 7 01:44:54.369603 containerd[1463]: time="2025-07-07T01:44:54.369177994Z" level=info msg="StopPodSandbox for \"185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e\"" Jul 7 01:44:54.486982 containerd[1463]: 2025-07-07 01:44:54.424 [WARNING][5027] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-csi--node--driver--lcqqq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9914e783-0422-4ffd-98e7-e3799124405f", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 44, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"d81d11f64fdc6b2a208deff58b31d0596254ebca5d88d8d4296475eed2608cb7", Pod:"csi-node-driver-lcqqq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4e6968ca8aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:44:54.486982 containerd[1463]: 2025-07-07 01:44:54.426 [INFO][5027] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" Jul 7 01:44:54.486982 containerd[1463]: 2025-07-07 01:44:54.426 [INFO][5027] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" iface="eth0" netns="" Jul 7 01:44:54.486982 containerd[1463]: 2025-07-07 01:44:54.426 [INFO][5027] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" Jul 7 01:44:54.486982 containerd[1463]: 2025-07-07 01:44:54.426 [INFO][5027] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" Jul 7 01:44:54.486982 containerd[1463]: 2025-07-07 01:44:54.471 [INFO][5045] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" HandleID="k8s-pod-network.185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-csi--node--driver--lcqqq-eth0" Jul 7 01:44:54.486982 containerd[1463]: 2025-07-07 01:44:54.471 [INFO][5045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:44:54.486982 containerd[1463]: 2025-07-07 01:44:54.471 [INFO][5045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:44:54.486982 containerd[1463]: 2025-07-07 01:44:54.481 [WARNING][5045] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" HandleID="k8s-pod-network.185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-csi--node--driver--lcqqq-eth0" Jul 7 01:44:54.486982 containerd[1463]: 2025-07-07 01:44:54.481 [INFO][5045] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" HandleID="k8s-pod-network.185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-csi--node--driver--lcqqq-eth0" Jul 7 01:44:54.486982 containerd[1463]: 2025-07-07 01:44:54.483 [INFO][5045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:44:54.486982 containerd[1463]: 2025-07-07 01:44:54.485 [INFO][5027] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" Jul 7 01:44:54.487550 containerd[1463]: time="2025-07-07T01:44:54.487334722Z" level=info msg="TearDown network for sandbox \"185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e\" successfully" Jul 7 01:44:54.487550 containerd[1463]: time="2025-07-07T01:44:54.487371972Z" level=info msg="StopPodSandbox for \"185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e\" returns successfully" Jul 7 01:44:54.488496 containerd[1463]: time="2025-07-07T01:44:54.488150030Z" level=info msg="RemovePodSandbox for \"185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e\"" Jul 7 01:44:54.488496 containerd[1463]: time="2025-07-07T01:44:54.488194584Z" level=info msg="Forcibly stopping sandbox \"185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e\"" Jul 7 01:44:54.617991 containerd[1463]: 2025-07-07 01:44:54.568 [WARNING][5068] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-csi--node--driver--lcqqq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9914e783-0422-4ffd-98e7-e3799124405f", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 44, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"d81d11f64fdc6b2a208deff58b31d0596254ebca5d88d8d4296475eed2608cb7", Pod:"csi-node-driver-lcqqq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4e6968ca8aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:44:54.617991 containerd[1463]: 2025-07-07 01:44:54.569 [INFO][5068] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" Jul 7 01:44:54.617991 containerd[1463]: 2025-07-07 01:44:54.569 [INFO][5068] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" iface="eth0" netns="" Jul 7 01:44:54.617991 containerd[1463]: 2025-07-07 01:44:54.569 [INFO][5068] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" Jul 7 01:44:54.617991 containerd[1463]: 2025-07-07 01:44:54.569 [INFO][5068] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" Jul 7 01:44:54.617991 containerd[1463]: 2025-07-07 01:44:54.603 [INFO][5077] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" HandleID="k8s-pod-network.185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-csi--node--driver--lcqqq-eth0" Jul 7 01:44:54.617991 containerd[1463]: 2025-07-07 01:44:54.604 [INFO][5077] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:44:54.617991 containerd[1463]: 2025-07-07 01:44:54.604 [INFO][5077] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:44:54.617991 containerd[1463]: 2025-07-07 01:44:54.612 [WARNING][5077] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" HandleID="k8s-pod-network.185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-csi--node--driver--lcqqq-eth0" Jul 7 01:44:54.617991 containerd[1463]: 2025-07-07 01:44:54.612 [INFO][5077] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" HandleID="k8s-pod-network.185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-csi--node--driver--lcqqq-eth0" Jul 7 01:44:54.617991 containerd[1463]: 2025-07-07 01:44:54.615 [INFO][5077] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:44:54.617991 containerd[1463]: 2025-07-07 01:44:54.616 [INFO][5068] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e" Jul 7 01:44:54.618615 containerd[1463]: time="2025-07-07T01:44:54.618013021Z" level=info msg="TearDown network for sandbox \"185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e\" successfully" Jul 7 01:44:54.625020 containerd[1463]: time="2025-07-07T01:44:54.624958192Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 01:44:54.625088 containerd[1463]: time="2025-07-07T01:44:54.625024267Z" level=info msg="RemovePodSandbox \"185f1417ef385b86166c78c8ea08fb612903d518bc490562fa5952f1cdc02b4e\" returns successfully" Jul 7 01:44:54.625688 containerd[1463]: time="2025-07-07T01:44:54.625651350Z" level=info msg="StopPodSandbox for \"81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290\"" Jul 7 01:44:54.730777 containerd[1463]: 2025-07-07 01:44:54.672 [WARNING][5091] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--5q8k9-eth0", GenerateName:"calico-apiserver-6fc4bb86bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"32440117-c574-4788-812a-5d3b5496a9ed", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 44, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fc4bb86bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"3ad0980dcbf974893338830f5a4f4fed0a989d9ead4a9fe0411d2125be259986", Pod:"calico-apiserver-6fc4bb86bc-5q8k9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ee25119cd9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:44:54.730777 containerd[1463]: 2025-07-07 01:44:54.672 [INFO][5091] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" Jul 7 01:44:54.730777 containerd[1463]: 2025-07-07 01:44:54.672 [INFO][5091] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" iface="eth0" netns="" Jul 7 01:44:54.730777 containerd[1463]: 2025-07-07 01:44:54.672 [INFO][5091] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" Jul 7 01:44:54.730777 containerd[1463]: 2025-07-07 01:44:54.672 [INFO][5091] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" Jul 7 01:44:54.730777 containerd[1463]: 2025-07-07 01:44:54.707 [INFO][5098] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" HandleID="k8s-pod-network.81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--5q8k9-eth0" Jul 7 01:44:54.730777 containerd[1463]: 2025-07-07 01:44:54.708 [INFO][5098] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:44:54.730777 containerd[1463]: 2025-07-07 01:44:54.708 [INFO][5098] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:44:54.730777 containerd[1463]: 2025-07-07 01:44:54.722 [WARNING][5098] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" HandleID="k8s-pod-network.81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--5q8k9-eth0" Jul 7 01:44:54.730777 containerd[1463]: 2025-07-07 01:44:54.722 [INFO][5098] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" HandleID="k8s-pod-network.81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--5q8k9-eth0" Jul 7 01:44:54.730777 containerd[1463]: 2025-07-07 01:44:54.726 [INFO][5098] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:44:54.730777 containerd[1463]: 2025-07-07 01:44:54.728 [INFO][5091] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" Jul 7 01:44:54.730777 containerd[1463]: time="2025-07-07T01:44:54.730643624Z" level=info msg="TearDown network for sandbox \"81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290\" successfully" Jul 7 01:44:54.730777 containerd[1463]: time="2025-07-07T01:44:54.730740968Z" level=info msg="StopPodSandbox for \"81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290\" returns successfully" Jul 7 01:44:54.732792 containerd[1463]: time="2025-07-07T01:44:54.732635872Z" level=info msg="RemovePodSandbox for \"81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290\"" Jul 7 01:44:54.732792 containerd[1463]: time="2025-07-07T01:44:54.732694002Z" level=info msg="Forcibly stopping sandbox \"81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290\"" Jul 7 01:44:54.819385 containerd[1463]: 2025-07-07 01:44:54.778 [WARNING][5112] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--5q8k9-eth0", GenerateName:"calico-apiserver-6fc4bb86bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"32440117-c574-4788-812a-5d3b5496a9ed", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 44, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fc4bb86bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"3ad0980dcbf974893338830f5a4f4fed0a989d9ead4a9fe0411d2125be259986", Pod:"calico-apiserver-6fc4bb86bc-5q8k9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ee25119cd9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:44:54.819385 containerd[1463]: 2025-07-07 01:44:54.779 [INFO][5112] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" Jul 7 01:44:54.819385 containerd[1463]: 2025-07-07 01:44:54.779 [INFO][5112] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" iface="eth0" netns="" Jul 7 01:44:54.819385 containerd[1463]: 2025-07-07 01:44:54.779 [INFO][5112] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" Jul 7 01:44:54.819385 containerd[1463]: 2025-07-07 01:44:54.779 [INFO][5112] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" Jul 7 01:44:54.819385 containerd[1463]: 2025-07-07 01:44:54.804 [INFO][5119] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" HandleID="k8s-pod-network.81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--5q8k9-eth0" Jul 7 01:44:54.819385 containerd[1463]: 2025-07-07 01:44:54.804 [INFO][5119] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:44:54.819385 containerd[1463]: 2025-07-07 01:44:54.804 [INFO][5119] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:44:54.819385 containerd[1463]: 2025-07-07 01:44:54.813 [WARNING][5119] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" HandleID="k8s-pod-network.81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--5q8k9-eth0" Jul 7 01:44:54.819385 containerd[1463]: 2025-07-07 01:44:54.813 [INFO][5119] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" HandleID="k8s-pod-network.81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--apiserver--6fc4bb86bc--5q8k9-eth0" Jul 7 01:44:54.819385 containerd[1463]: 2025-07-07 01:44:54.815 [INFO][5119] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:44:54.819385 containerd[1463]: 2025-07-07 01:44:54.817 [INFO][5112] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290" Jul 7 01:44:54.820431 containerd[1463]: time="2025-07-07T01:44:54.819445836Z" level=info msg="TearDown network for sandbox \"81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290\" successfully" Jul 7 01:44:54.824169 containerd[1463]: time="2025-07-07T01:44:54.824125113Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 01:44:54.824248 containerd[1463]: time="2025-07-07T01:44:54.824228217Z" level=info msg="RemovePodSandbox \"81f17b5d81b4a1130bc022d3a769ef44ed36e160e7818979f54bc5c4bdea3290\" returns successfully" Jul 7 01:44:54.825131 containerd[1463]: time="2025-07-07T01:44:54.825097858Z" level=info msg="StopPodSandbox for \"1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb\"" Jul 7 01:44:54.928898 containerd[1463]: 2025-07-07 01:44:54.869 [WARNING][5133] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--7th4f-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f8bea45c-1889-4c85-82bc-48df27c16ca2", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"3a7bb66450a146697fe074f962b09c2a1ca658b038811f70f46753c1bdfd1783", Pod:"coredns-668d6bf9bc-7th4f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali784ebcbf4bc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:44:54.928898 containerd[1463]: 2025-07-07 01:44:54.870 [INFO][5133] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" Jul 7 01:44:54.928898 containerd[1463]: 2025-07-07 01:44:54.870 [INFO][5133] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" iface="eth0" netns="" Jul 7 01:44:54.928898 containerd[1463]: 2025-07-07 01:44:54.870 [INFO][5133] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" Jul 7 01:44:54.928898 containerd[1463]: 2025-07-07 01:44:54.870 [INFO][5133] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" Jul 7 01:44:54.928898 containerd[1463]: 2025-07-07 01:44:54.897 [INFO][5141] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" HandleID="k8s-pod-network.1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--7th4f-eth0" Jul 7 01:44:54.928898 containerd[1463]: 2025-07-07 01:44:54.899 [INFO][5141] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:44:54.928898 containerd[1463]: 2025-07-07 01:44:54.900 [INFO][5141] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 01:44:54.928898 containerd[1463]: 2025-07-07 01:44:54.921 [WARNING][5141] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" HandleID="k8s-pod-network.1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--7th4f-eth0" Jul 7 01:44:54.928898 containerd[1463]: 2025-07-07 01:44:54.921 [INFO][5141] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" HandleID="k8s-pod-network.1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--7th4f-eth0" Jul 7 01:44:54.928898 containerd[1463]: 2025-07-07 01:44:54.925 [INFO][5141] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:44:54.928898 containerd[1463]: 2025-07-07 01:44:54.926 [INFO][5133] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" Jul 7 01:44:54.928898 containerd[1463]: time="2025-07-07T01:44:54.928472800Z" level=info msg="TearDown network for sandbox \"1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb\" successfully" Jul 7 01:44:54.928898 containerd[1463]: time="2025-07-07T01:44:54.928506003Z" level=info msg="StopPodSandbox for \"1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb\" returns successfully" Jul 7 01:44:54.931299 containerd[1463]: time="2025-07-07T01:44:54.930450361Z" level=info msg="RemovePodSandbox for \"1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb\"" Jul 7 01:44:54.931299 containerd[1463]: time="2025-07-07T01:44:54.930486488Z" level=info msg="Forcibly stopping sandbox \"1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb\"" Jul 7 01:44:55.070512 containerd[1463]: 2025-07-07 01:44:55.011 [WARNING][5155] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--7th4f-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f8bea45c-1889-4c85-82bc-48df27c16ca2", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"3a7bb66450a146697fe074f962b09c2a1ca658b038811f70f46753c1bdfd1783", Pod:"coredns-668d6bf9bc-7th4f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali784ebcbf4bc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:44:55.070512 containerd[1463]: 2025-07-07 01:44:55.012 [INFO][5155] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" Jul 7 01:44:55.070512 containerd[1463]: 2025-07-07 01:44:55.012 [INFO][5155] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" iface="eth0" netns="" Jul 7 01:44:55.070512 containerd[1463]: 2025-07-07 01:44:55.012 [INFO][5155] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" Jul 7 01:44:55.070512 containerd[1463]: 2025-07-07 01:44:55.012 [INFO][5155] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" Jul 7 01:44:55.070512 containerd[1463]: 2025-07-07 01:44:55.049 [INFO][5163] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" HandleID="k8s-pod-network.1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--7th4f-eth0" Jul 7 01:44:55.070512 containerd[1463]: 2025-07-07 01:44:55.051 [INFO][5163] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:44:55.070512 containerd[1463]: 2025-07-07 01:44:55.051 [INFO][5163] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 01:44:55.070512 containerd[1463]: 2025-07-07 01:44:55.064 [WARNING][5163] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" HandleID="k8s-pod-network.1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--7th4f-eth0" Jul 7 01:44:55.070512 containerd[1463]: 2025-07-07 01:44:55.064 [INFO][5163] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" HandleID="k8s-pod-network.1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-coredns--668d6bf9bc--7th4f-eth0" Jul 7 01:44:55.070512 containerd[1463]: 2025-07-07 01:44:55.067 [INFO][5163] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:44:55.070512 containerd[1463]: 2025-07-07 01:44:55.069 [INFO][5155] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb" Jul 7 01:44:55.071261 containerd[1463]: time="2025-07-07T01:44:55.071116002Z" level=info msg="TearDown network for sandbox \"1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb\" successfully" Jul 7 01:44:55.076156 containerd[1463]: time="2025-07-07T01:44:55.075756424Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 01:44:55.076156 containerd[1463]: time="2025-07-07T01:44:55.075823320Z" level=info msg="RemovePodSandbox \"1c4219ad99a2ba5eceeee34d077fe54b97b4dece71658242db2a45969d7609bb\" returns successfully" Jul 7 01:44:55.076587 containerd[1463]: time="2025-07-07T01:44:55.076542818Z" level=info msg="StopPodSandbox for \"17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1\"" Jul 7 01:44:55.221463 containerd[1463]: 2025-07-07 01:44:55.153 [WARNING][5177] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0", GenerateName:"whisker-7f84455d77-", Namespace:"calico-system", SelfLink:"", UID:"6e744049-be52-495d-b225-079659d54e9f", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 44, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f84455d77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b", Pod:"whisker-7f84455d77-m9nc9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.2.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3190f25df87", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:44:55.221463 containerd[1463]: 2025-07-07 01:44:55.155 [INFO][5177] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" Jul 7 01:44:55.221463 containerd[1463]: 2025-07-07 01:44:55.155 [INFO][5177] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" iface="eth0" netns="" Jul 7 01:44:55.221463 containerd[1463]: 2025-07-07 01:44:55.155 [INFO][5177] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" Jul 7 01:44:55.221463 containerd[1463]: 2025-07-07 01:44:55.156 [INFO][5177] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" Jul 7 01:44:55.221463 containerd[1463]: 2025-07-07 01:44:55.205 [INFO][5184] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" HandleID="k8s-pod-network.17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0" Jul 7 01:44:55.221463 containerd[1463]: 2025-07-07 01:44:55.205 [INFO][5184] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:44:55.221463 containerd[1463]: 2025-07-07 01:44:55.205 [INFO][5184] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:44:55.221463 containerd[1463]: 2025-07-07 01:44:55.215 [WARNING][5184] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" HandleID="k8s-pod-network.17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0" Jul 7 01:44:55.221463 containerd[1463]: 2025-07-07 01:44:55.215 [INFO][5184] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" HandleID="k8s-pod-network.17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0" Jul 7 01:44:55.221463 containerd[1463]: 2025-07-07 01:44:55.217 [INFO][5184] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:44:55.221463 containerd[1463]: 2025-07-07 01:44:55.220 [INFO][5177] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" Jul 7 01:44:55.222895 containerd[1463]: time="2025-07-07T01:44:55.222434827Z" level=info msg="TearDown network for sandbox \"17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1\" successfully" Jul 7 01:44:55.222895 containerd[1463]: time="2025-07-07T01:44:55.222471307Z" level=info msg="StopPodSandbox for \"17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1\" returns successfully" Jul 7 01:44:55.223567 containerd[1463]: time="2025-07-07T01:44:55.223544270Z" level=info msg="RemovePodSandbox for \"17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1\"" Jul 7 01:44:55.223679 containerd[1463]: time="2025-07-07T01:44:55.223644699Z" level=info msg="Forcibly stopping sandbox \"17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1\"" Jul 7 01:44:55.343976 containerd[1463]: 2025-07-07 01:44:55.275 [WARNING][5202] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0", GenerateName:"whisker-7f84455d77-", Namespace:"calico-system", SelfLink:"", UID:"6e744049-be52-495d-b225-079659d54e9f", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 44, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f84455d77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b", Pod:"whisker-7f84455d77-m9nc9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.2.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3190f25df87", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:44:55.343976 containerd[1463]: 2025-07-07 01:44:55.276 [INFO][5202] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" Jul 7 01:44:55.343976 containerd[1463]: 2025-07-07 01:44:55.276 [INFO][5202] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" iface="eth0" netns="" Jul 7 01:44:55.343976 containerd[1463]: 2025-07-07 01:44:55.276 [INFO][5202] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" Jul 7 01:44:55.343976 containerd[1463]: 2025-07-07 01:44:55.276 [INFO][5202] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" Jul 7 01:44:55.343976 containerd[1463]: 2025-07-07 01:44:55.319 [INFO][5210] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" HandleID="k8s-pod-network.17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0" Jul 7 01:44:55.343976 containerd[1463]: 2025-07-07 01:44:55.319 [INFO][5210] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:44:55.343976 containerd[1463]: 2025-07-07 01:44:55.319 [INFO][5210] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:44:55.343976 containerd[1463]: 2025-07-07 01:44:55.336 [WARNING][5210] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" HandleID="k8s-pod-network.17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0" Jul 7 01:44:55.343976 containerd[1463]: 2025-07-07 01:44:55.337 [INFO][5210] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" HandleID="k8s-pod-network.17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0" Jul 7 01:44:55.343976 containerd[1463]: 2025-07-07 01:44:55.339 [INFO][5210] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:44:55.343976 containerd[1463]: 2025-07-07 01:44:55.341 [INFO][5202] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1" Jul 7 01:44:55.343976 containerd[1463]: time="2025-07-07T01:44:55.343930121Z" level=info msg="TearDown network for sandbox \"17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1\" successfully" Jul 7 01:44:55.351450 containerd[1463]: time="2025-07-07T01:44:55.351222426Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 01:44:55.351450 containerd[1463]: time="2025-07-07T01:44:55.351336341Z" level=info msg="RemovePodSandbox \"17fa70b7d3cd1b7c6b83e3f24078501046ae68781dfab5affded660f788f40a1\" returns successfully" Jul 7 01:44:55.352402 containerd[1463]: time="2025-07-07T01:44:55.352365151Z" level=info msg="StopPodSandbox for \"66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d\"" Jul 7 01:44:55.508330 containerd[1463]: time="2025-07-07T01:44:55.507222642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:44:55.510583 containerd[1463]: time="2025-07-07T01:44:55.510176343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 7 01:44:55.511316 containerd[1463]: time="2025-07-07T01:44:55.511046865Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:44:55.520394 containerd[1463]: 2025-07-07 01:44:55.435 [WARNING][5224] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-goldmane--768f4c5c69--q2hdx-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"b7b0113c-461b-4b97-957e-70b2d17f2275", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 44, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"ac9c6000a283a0e217a4ae5008088c868f4c996888ec9e9bffb23fade716cba2", Pod:"goldmane-768f4c5c69-q2hdx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib09dfc335fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:44:55.520394 containerd[1463]: 2025-07-07 01:44:55.435 [INFO][5224] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" Jul 7 01:44:55.520394 containerd[1463]: 2025-07-07 01:44:55.435 [INFO][5224] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" iface="eth0" netns="" Jul 7 01:44:55.520394 containerd[1463]: 2025-07-07 01:44:55.435 [INFO][5224] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" Jul 7 01:44:55.520394 containerd[1463]: 2025-07-07 01:44:55.435 [INFO][5224] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" Jul 7 01:44:55.520394 containerd[1463]: 2025-07-07 01:44:55.493 [INFO][5232] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" HandleID="k8s-pod-network.66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-goldmane--768f4c5c69--q2hdx-eth0" Jul 7 01:44:55.520394 containerd[1463]: 2025-07-07 01:44:55.495 [INFO][5232] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:44:55.520394 containerd[1463]: 2025-07-07 01:44:55.495 [INFO][5232] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:44:55.520394 containerd[1463]: 2025-07-07 01:44:55.509 [WARNING][5232] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" HandleID="k8s-pod-network.66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-goldmane--768f4c5c69--q2hdx-eth0" Jul 7 01:44:55.520394 containerd[1463]: 2025-07-07 01:44:55.510 [INFO][5232] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" HandleID="k8s-pod-network.66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-goldmane--768f4c5c69--q2hdx-eth0" Jul 7 01:44:55.520394 containerd[1463]: 2025-07-07 01:44:55.515 [INFO][5232] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:44:55.520394 containerd[1463]: 2025-07-07 01:44:55.517 [INFO][5224] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" Jul 7 01:44:55.521459 containerd[1463]: time="2025-07-07T01:44:55.520983227Z" level=info msg="TearDown network for sandbox \"66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d\" successfully" Jul 7 01:44:55.521459 containerd[1463]: time="2025-07-07T01:44:55.521019255Z" level=info msg="StopPodSandbox for \"66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d\" returns successfully" Jul 7 01:44:55.522111 containerd[1463]: time="2025-07-07T01:44:55.522000125Z" level=info msg="RemovePodSandbox for \"66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d\"" Jul 7 01:44:55.522111 containerd[1463]: time="2025-07-07T01:44:55.522078032Z" level=info msg="Forcibly stopping sandbox \"66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d\"" Jul 7 01:44:55.524213 containerd[1463]: time="2025-07-07T01:44:55.522837264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:44:55.526318 containerd[1463]: time="2025-07-07T01:44:55.525857731Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.435301482s" Jul 7 01:44:55.526419 containerd[1463]: time="2025-07-07T01:44:55.526399783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 7 01:44:55.532106 containerd[1463]: time="2025-07-07T01:44:55.532016978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 01:44:55.541008 containerd[1463]: time="2025-07-07T01:44:55.540947724Z" level=info msg="CreateContainer within sandbox \"d81d11f64fdc6b2a208deff58b31d0596254ebca5d88d8d4296475eed2608cb7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 01:44:55.579728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount280956074.mount: Deactivated successfully. 
Jul 7 01:44:55.590335 containerd[1463]: time="2025-07-07T01:44:55.590246671Z" level=info msg="CreateContainer within sandbox \"d81d11f64fdc6b2a208deff58b31d0596254ebca5d88d8d4296475eed2608cb7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"124e799078f20a54fdc35e5ad40de7969364624b566cf0cece2fb124c6baf5a7\"" Jul 7 01:44:55.592683 containerd[1463]: time="2025-07-07T01:44:55.592263205Z" level=info msg="StartContainer for \"124e799078f20a54fdc35e5ad40de7969364624b566cf0cece2fb124c6baf5a7\"" Jul 7 01:44:55.676606 systemd[1]: Started cri-containerd-124e799078f20a54fdc35e5ad40de7969364624b566cf0cece2fb124c6baf5a7.scope - libcontainer container 124e799078f20a54fdc35e5ad40de7969364624b566cf0cece2fb124c6baf5a7. Jul 7 01:44:55.712407 containerd[1463]: 2025-07-07 01:44:55.625 [WARNING][5263] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-goldmane--768f4c5c69--q2hdx-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"b7b0113c-461b-4b97-957e-70b2d17f2275", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 44, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"ac9c6000a283a0e217a4ae5008088c868f4c996888ec9e9bffb23fade716cba2", Pod:"goldmane-768f4c5c69-q2hdx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib09dfc335fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:44:55.712407 containerd[1463]: 2025-07-07 01:44:55.627 [INFO][5263] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" Jul 7 01:44:55.712407 containerd[1463]: 2025-07-07 01:44:55.627 [INFO][5263] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" iface="eth0" netns="" Jul 7 01:44:55.712407 containerd[1463]: 2025-07-07 01:44:55.627 [INFO][5263] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" Jul 7 01:44:55.712407 containerd[1463]: 2025-07-07 01:44:55.627 [INFO][5263] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" Jul 7 01:44:55.712407 containerd[1463]: 2025-07-07 01:44:55.686 [INFO][5280] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" HandleID="k8s-pod-network.66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-goldmane--768f4c5c69--q2hdx-eth0" Jul 7 01:44:55.712407 containerd[1463]: 2025-07-07 01:44:55.686 [INFO][5280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:44:55.712407 containerd[1463]: 2025-07-07 01:44:55.686 [INFO][5280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:44:55.712407 containerd[1463]: 2025-07-07 01:44:55.702 [WARNING][5280] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" HandleID="k8s-pod-network.66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-goldmane--768f4c5c69--q2hdx-eth0" Jul 7 01:44:55.712407 containerd[1463]: 2025-07-07 01:44:55.703 [INFO][5280] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" HandleID="k8s-pod-network.66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-goldmane--768f4c5c69--q2hdx-eth0" Jul 7 01:44:55.712407 containerd[1463]: 2025-07-07 01:44:55.707 [INFO][5280] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:44:55.712407 containerd[1463]: 2025-07-07 01:44:55.709 [INFO][5263] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d" Jul 7 01:44:55.713275 containerd[1463]: time="2025-07-07T01:44:55.712462954Z" level=info msg="TearDown network for sandbox \"66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d\" successfully" Jul 7 01:44:55.720724 containerd[1463]: time="2025-07-07T01:44:55.720627965Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 7 01:44:55.720970 containerd[1463]: time="2025-07-07T01:44:55.720921749Z" level=info msg="RemovePodSandbox \"66ba24125d579f80f91c37e687568c9ea2f751b48bc996ba2291ca27bb0cc42d\" returns successfully" Jul 7 01:44:55.738078 containerd[1463]: time="2025-07-07T01:44:55.738002958Z" level=info msg="StartContainer for \"124e799078f20a54fdc35e5ad40de7969364624b566cf0cece2fb124c6baf5a7\" returns successfully" Jul 7 01:44:55.822688 containerd[1463]: time="2025-07-07T01:44:55.822408553Z" level=info msg="StopPodSandbox for \"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c\"" Jul 7 01:44:56.078079 containerd[1463]: 2025-07-07 01:44:55.981 [INFO][5320] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c" Jul 7 01:44:56.078079 containerd[1463]: 2025-07-07 01:44:55.981 [INFO][5320] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c" iface="eth0" netns="/var/run/netns/cni-552479af-3a95-dbd9-a6ff-fea1aca3d8df" Jul 7 01:44:56.078079 containerd[1463]: 2025-07-07 01:44:55.982 [INFO][5320] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c" iface="eth0" netns="/var/run/netns/cni-552479af-3a95-dbd9-a6ff-fea1aca3d8df" Jul 7 01:44:56.078079 containerd[1463]: 2025-07-07 01:44:55.984 [INFO][5320] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c" iface="eth0" netns="/var/run/netns/cni-552479af-3a95-dbd9-a6ff-fea1aca3d8df" Jul 7 01:44:56.078079 containerd[1463]: 2025-07-07 01:44:55.984 [INFO][5320] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c" Jul 7 01:44:56.078079 containerd[1463]: 2025-07-07 01:44:55.984 [INFO][5320] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c" Jul 7 01:44:56.078079 containerd[1463]: 2025-07-07 01:44:56.050 [INFO][5327] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c" HandleID="k8s-pod-network.f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--kube--controllers--5677bcf49d--d72km-eth0" Jul 7 01:44:56.078079 containerd[1463]: 2025-07-07 01:44:56.050 [INFO][5327] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:44:56.078079 containerd[1463]: 2025-07-07 01:44:56.050 [INFO][5327] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:44:56.078079 containerd[1463]: 2025-07-07 01:44:56.068 [WARNING][5327] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c" HandleID="k8s-pod-network.f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--kube--controllers--5677bcf49d--d72km-eth0" Jul 7 01:44:56.078079 containerd[1463]: 2025-07-07 01:44:56.068 [INFO][5327] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c" HandleID="k8s-pod-network.f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--kube--controllers--5677bcf49d--d72km-eth0" Jul 7 01:44:56.078079 containerd[1463]: 2025-07-07 01:44:56.072 [INFO][5327] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:44:56.078079 containerd[1463]: 2025-07-07 01:44:56.074 [INFO][5320] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c" Jul 7 01:44:56.081186 containerd[1463]: time="2025-07-07T01:44:56.079050999Z" level=info msg="TearDown network for sandbox \"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c\" successfully" Jul 7 01:44:56.081186 containerd[1463]: time="2025-07-07T01:44:56.079116422Z" level=info msg="StopPodSandbox for \"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c\" returns successfully" Jul 7 01:44:56.081186 containerd[1463]: time="2025-07-07T01:44:56.080199354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5677bcf49d-d72km,Uid:6e0044f3-eddc-404a-a8a5-e4a322e633c4,Namespace:calico-system,Attempt:1,}" Jul 7 01:44:56.255579 systemd-networkd[1378]: calidd9a828692c: Link UP Jul 7 01:44:56.259810 systemd-networkd[1378]: calidd9a828692c: Gained carrier Jul 7 01:44:56.285823 containerd[1463]: 2025-07-07 01:44:56.156 [INFO][5334] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--kube--controllers--5677bcf49d--d72km-eth0 calico-kube-controllers-5677bcf49d- calico-system 6e0044f3-eddc-404a-a8a5-e4a322e633c4 1022 0 2025-07-07 01:44:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5677bcf49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-4-7-c803550fde.novalocal calico-kube-controllers-5677bcf49d-d72km eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calidd9a828692c [] [] }} ContainerID="120f3298a2c222e9e4dd0bf8eecf4845812685a68b5b644718fac55c9988d3d5" Namespace="calico-system" Pod="calico-kube-controllers-5677bcf49d-d72km" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--kube--controllers--5677bcf49d--d72km-" Jul 7 01:44:56.285823 containerd[1463]: 2025-07-07 01:44:56.156 [INFO][5334] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="120f3298a2c222e9e4dd0bf8eecf4845812685a68b5b644718fac55c9988d3d5" Namespace="calico-system" Pod="calico-kube-controllers-5677bcf49d-d72km" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--kube--controllers--5677bcf49d--d72km-eth0" Jul 7 01:44:56.285823 containerd[1463]: 2025-07-07 01:44:56.187 [INFO][5345] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="120f3298a2c222e9e4dd0bf8eecf4845812685a68b5b644718fac55c9988d3d5" HandleID="k8s-pod-network.120f3298a2c222e9e4dd0bf8eecf4845812685a68b5b644718fac55c9988d3d5" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--kube--controllers--5677bcf49d--d72km-eth0" Jul 7 01:44:56.285823 containerd[1463]: 2025-07-07 01:44:56.187 [INFO][5345] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="120f3298a2c222e9e4dd0bf8eecf4845812685a68b5b644718fac55c9988d3d5" HandleID="k8s-pod-network.120f3298a2c222e9e4dd0bf8eecf4845812685a68b5b644718fac55c9988d3d5" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--kube--controllers--5677bcf49d--d72km-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f0c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-7-c803550fde.novalocal", "pod":"calico-kube-controllers-5677bcf49d-d72km", "timestamp":"2025-07-07 01:44:56.187589656 +0000 UTC"}, Hostname:"ci-4081-3-4-7-c803550fde.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 01:44:56.285823 containerd[1463]: 2025-07-07 01:44:56.187 [INFO][5345] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:44:56.285823 containerd[1463]: 2025-07-07 01:44:56.188 [INFO][5345] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:44:56.285823 containerd[1463]: 2025-07-07 01:44:56.188 [INFO][5345] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-7-c803550fde.novalocal' Jul 7 01:44:56.285823 containerd[1463]: 2025-07-07 01:44:56.198 [INFO][5345] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.120f3298a2c222e9e4dd0bf8eecf4845812685a68b5b644718fac55c9988d3d5" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:56.285823 containerd[1463]: 2025-07-07 01:44:56.212 [INFO][5345] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:56.285823 containerd[1463]: 2025-07-07 01:44:56.221 [INFO][5345] ipam/ipam.go 511: Trying affinity for 192.168.2.0/26 host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:56.285823 containerd[1463]: 2025-07-07 01:44:56.224 [INFO][5345] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.0/26 host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:56.285823 containerd[1463]: 2025-07-07 01:44:56.227 [INFO][5345] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:56.285823 containerd[1463]: 2025-07-07 01:44:56.228 [INFO][5345] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.120f3298a2c222e9e4dd0bf8eecf4845812685a68b5b644718fac55c9988d3d5" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:56.285823 containerd[1463]: 2025-07-07 01:44:56.229 [INFO][5345] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.120f3298a2c222e9e4dd0bf8eecf4845812685a68b5b644718fac55c9988d3d5 Jul 7 01:44:56.285823 containerd[1463]: 2025-07-07 01:44:56.235 [INFO][5345] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.120f3298a2c222e9e4dd0bf8eecf4845812685a68b5b644718fac55c9988d3d5" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:56.285823 containerd[1463]: 2025-07-07 01:44:56.247 [INFO][5345] ipam/ipam.go 1256: Successfully claimed IPs: 
[192.168.2.8/26] block=192.168.2.0/26 handle="k8s-pod-network.120f3298a2c222e9e4dd0bf8eecf4845812685a68b5b644718fac55c9988d3d5" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:56.285823 containerd[1463]: 2025-07-07 01:44:56.247 [INFO][5345] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.8/26] handle="k8s-pod-network.120f3298a2c222e9e4dd0bf8eecf4845812685a68b5b644718fac55c9988d3d5" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:44:56.285823 containerd[1463]: 2025-07-07 01:44:56.247 [INFO][5345] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:44:56.285823 containerd[1463]: 2025-07-07 01:44:56.247 [INFO][5345] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.8/26] IPv6=[] ContainerID="120f3298a2c222e9e4dd0bf8eecf4845812685a68b5b644718fac55c9988d3d5" HandleID="k8s-pod-network.120f3298a2c222e9e4dd0bf8eecf4845812685a68b5b644718fac55c9988d3d5" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--kube--controllers--5677bcf49d--d72km-eth0" Jul 7 01:44:56.287456 containerd[1463]: 2025-07-07 01:44:56.250 [INFO][5334] cni-plugin/k8s.go 418: Populated endpoint ContainerID="120f3298a2c222e9e4dd0bf8eecf4845812685a68b5b644718fac55c9988d3d5" Namespace="calico-system" Pod="calico-kube-controllers-5677bcf49d-d72km" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--kube--controllers--5677bcf49d--d72km-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--kube--controllers--5677bcf49d--d72km-eth0", GenerateName:"calico-kube-controllers-5677bcf49d-", Namespace:"calico-system", SelfLink:"", UID:"6e0044f3-eddc-404a-a8a5-e4a322e633c4", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 44, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5677bcf49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"", Pod:"calico-kube-controllers-5677bcf49d-d72km", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidd9a828692c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:44:56.287456 containerd[1463]: 2025-07-07 01:44:56.250 [INFO][5334] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.8/32] ContainerID="120f3298a2c222e9e4dd0bf8eecf4845812685a68b5b644718fac55c9988d3d5" Namespace="calico-system" Pod="calico-kube-controllers-5677bcf49d-d72km" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--kube--controllers--5677bcf49d--d72km-eth0" Jul 7 01:44:56.287456 containerd[1463]: 2025-07-07 01:44:56.250 [INFO][5334] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd9a828692c 
ContainerID="120f3298a2c222e9e4dd0bf8eecf4845812685a68b5b644718fac55c9988d3d5" Namespace="calico-system" Pod="calico-kube-controllers-5677bcf49d-d72km" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--kube--controllers--5677bcf49d--d72km-eth0" Jul 7 01:44:56.287456 containerd[1463]: 2025-07-07 01:44:56.258 [INFO][5334] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="120f3298a2c222e9e4dd0bf8eecf4845812685a68b5b644718fac55c9988d3d5" Namespace="calico-system" Pod="calico-kube-controllers-5677bcf49d-d72km" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--kube--controllers--5677bcf49d--d72km-eth0" Jul 7 01:44:56.287456 containerd[1463]: 2025-07-07 01:44:56.260 [INFO][5334] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="120f3298a2c222e9e4dd0bf8eecf4845812685a68b5b644718fac55c9988d3d5" Namespace="calico-system" Pod="calico-kube-controllers-5677bcf49d-d72km" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--kube--controllers--5677bcf49d--d72km-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--kube--controllers--5677bcf49d--d72km-eth0", GenerateName:"calico-kube-controllers-5677bcf49d-", Namespace:"calico-system", SelfLink:"", UID:"6e0044f3-eddc-404a-a8a5-e4a322e633c4", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 44, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5677bcf49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"120f3298a2c222e9e4dd0bf8eecf4845812685a68b5b644718fac55c9988d3d5", Pod:"calico-kube-controllers-5677bcf49d-d72km", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidd9a828692c", MAC:"da:f7:6b:5c:2d:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:44:56.287456 containerd[1463]: 2025-07-07 01:44:56.283 [INFO][5334] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="120f3298a2c222e9e4dd0bf8eecf4845812685a68b5b644718fac55c9988d3d5" Namespace="calico-system" Pod="calico-kube-controllers-5677bcf49d-d72km" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--kube--controllers--5677bcf49d--d72km-eth0" Jul 7 01:44:56.318086 containerd[1463]: time="2025-07-07T01:44:56.317244235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:44:56.318086 containerd[1463]: time="2025-07-07T01:44:56.317407053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:44:56.318086 containerd[1463]: time="2025-07-07T01:44:56.317439524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:44:56.318086 containerd[1463]: time="2025-07-07T01:44:56.317736574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:44:56.341892 systemd[1]: Started cri-containerd-120f3298a2c222e9e4dd0bf8eecf4845812685a68b5b644718fac55c9988d3d5.scope - libcontainer container 120f3298a2c222e9e4dd0bf8eecf4845812685a68b5b644718fac55c9988d3d5. Jul 7 01:44:56.391066 containerd[1463]: time="2025-07-07T01:44:56.390937355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5677bcf49d-d72km,Uid:6e0044f3-eddc-404a-a8a5-e4a322e633c4,Namespace:calico-system,Attempt:1,} returns sandbox id \"120f3298a2c222e9e4dd0bf8eecf4845812685a68b5b644718fac55c9988d3d5\"" Jul 7 01:44:56.461659 systemd[1]: run-containerd-runc-k8s.io-124e799078f20a54fdc35e5ad40de7969364624b566cf0cece2fb124c6baf5a7-runc.o4BbKo.mount: Deactivated successfully. Jul 7 01:44:56.461790 systemd[1]: run-netns-cni\x2d552479af\x2d3a95\x2ddbd9\x2da6ff\x2dfea1aca3d8df.mount: Deactivated successfully. Jul 7 01:44:57.934388 systemd-networkd[1378]: calidd9a828692c: Gained IPv6LL Jul 7 01:44:59.847263 containerd[1463]: time="2025-07-07T01:44:59.847071475Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:44:59.849898 containerd[1463]: time="2025-07-07T01:44:59.849342446Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 7 01:44:59.850205 containerd[1463]: time="2025-07-07T01:44:59.850136613Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:44:59.853824 containerd[1463]: time="2025-07-07T01:44:59.853479716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:44:59.854523 containerd[1463]: time="2025-07-07T01:44:59.854488467Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 4.321308324s" Jul 7 01:44:59.854619 containerd[1463]: time="2025-07-07T01:44:59.854532160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 01:44:59.858132 containerd[1463]: time="2025-07-07T01:44:59.858095949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 01:44:59.858586 containerd[1463]: time="2025-07-07T01:44:59.858440869Z" level=info msg="CreateContainer within sandbox \"284cacccb3cb76dba8b10fa3eeaf04690ab8eda6a90716579a3ce9b8f55fdc90\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 01:44:59.885168 containerd[1463]: 
time="2025-07-07T01:44:59.885085189Z" level=info msg="CreateContainer within sandbox \"284cacccb3cb76dba8b10fa3eeaf04690ab8eda6a90716579a3ce9b8f55fdc90\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"61e4f170716bd92e5288f782f3ae8ab029566e0feb480b41b59758b707ccff72\"" Jul 7 01:44:59.887639 containerd[1463]: time="2025-07-07T01:44:59.886248141Z" level=info msg="StartContainer for \"61e4f170716bd92e5288f782f3ae8ab029566e0feb480b41b59758b707ccff72\"" Jul 7 01:44:59.974748 systemd[1]: Started cri-containerd-61e4f170716bd92e5288f782f3ae8ab029566e0feb480b41b59758b707ccff72.scope - libcontainer container 61e4f170716bd92e5288f782f3ae8ab029566e0feb480b41b59758b707ccff72. Jul 7 01:45:00.102601 containerd[1463]: time="2025-07-07T01:45:00.099989355Z" level=info msg="StartContainer for \"61e4f170716bd92e5288f782f3ae8ab029566e0feb480b41b59758b707ccff72\" returns successfully" Jul 7 01:45:00.463597 containerd[1463]: time="2025-07-07T01:45:00.463407841Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:45:00.467324 containerd[1463]: time="2025-07-07T01:45:00.466199754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 7 01:45:00.475886 kubelet[2614]: I0707 01:45:00.475516 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6fc4bb86bc-jljn9" podStartSLOduration=36.098183703 podStartE2EDuration="51.475415576s" podCreationTimestamp="2025-07-07 01:44:09 +0000 UTC" firstStartedPulling="2025-07-07 01:44:44.478744459 +0000 UTC m=+50.815735694" lastFinishedPulling="2025-07-07 01:44:59.855976323 +0000 UTC m=+66.192967567" observedRunningTime="2025-07-07 01:45:00.470666232 +0000 UTC m=+66.807657547" watchObservedRunningTime="2025-07-07 01:45:00.475415576 +0000 UTC m=+66.812406890" Jul 7 01:45:00.491727 containerd[1463]: time="2025-07-07T01:45:00.491491087Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 633.349352ms" Jul 7 01:45:00.491727 containerd[1463]: time="2025-07-07T01:45:00.491551561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 01:45:00.497090 containerd[1463]: time="2025-07-07T01:45:00.497019058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 01:45:00.502787 containerd[1463]: time="2025-07-07T01:45:00.502603685Z" level=info msg="CreateContainer within sandbox \"3ad0980dcbf974893338830f5a4f4fed0a989d9ead4a9fe0411d2125be259986\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 01:45:00.532900 containerd[1463]: time="2025-07-07T01:45:00.532835320Z" level=info msg="CreateContainer within sandbox \"3ad0980dcbf974893338830f5a4f4fed0a989d9ead4a9fe0411d2125be259986\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"916fdf4115b256448157861f925536ebef652babb23443ae9ecb86d232bc0979\"" Jul 7 01:45:00.534761 containerd[1463]: time="2025-07-07T01:45:00.534277880Z" level=info msg="StartContainer for 
\"916fdf4115b256448157861f925536ebef652babb23443ae9ecb86d232bc0979\"" Jul 7 01:45:00.574491 systemd[1]: Started cri-containerd-916fdf4115b256448157861f925536ebef652babb23443ae9ecb86d232bc0979.scope - libcontainer container 916fdf4115b256448157861f925536ebef652babb23443ae9ecb86d232bc0979. Jul 7 01:45:01.101305 containerd[1463]: time="2025-07-07T01:45:01.101230231Z" level=info msg="StartContainer for \"916fdf4115b256448157861f925536ebef652babb23443ae9ecb86d232bc0979\" returns successfully" Jul 7 01:45:01.446802 kubelet[2614]: I0707 01:45:01.446576 2614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 01:45:02.860345 kubelet[2614]: I0707 01:45:02.859877 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6fc4bb86bc-5q8k9" podStartSLOduration=39.518698628 podStartE2EDuration="53.859822799s" podCreationTimestamp="2025-07-07 01:44:09 +0000 UTC" firstStartedPulling="2025-07-07 01:44:46.152955726 +0000 UTC m=+52.489946960" lastFinishedPulling="2025-07-07 01:45:00.494079847 +0000 UTC m=+66.831071131" observedRunningTime="2025-07-07 01:45:01.475397836 +0000 UTC m=+67.812389100" watchObservedRunningTime="2025-07-07 01:45:02.859822799 +0000 UTC m=+69.196814034" Jul 7 01:45:04.627466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4011467118.mount: Deactivated successfully. Jul 7 01:45:04.662363 containerd[1463]: time="2025-07-07T01:45:04.662243067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:45:04.664892 containerd[1463]: time="2025-07-07T01:45:04.664808480Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 7 01:45:04.666631 containerd[1463]: time="2025-07-07T01:45:04.666591268Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:45:04.671341 containerd[1463]: time="2025-07-07T01:45:04.670518378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:45:04.672424 containerd[1463]: time="2025-07-07T01:45:04.671734790Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 4.174619592s" Jul 7 01:45:04.672424 containerd[1463]: time="2025-07-07T01:45:04.671801375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 7 01:45:04.675328 containerd[1463]: time="2025-07-07T01:45:04.675029377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 01:45:04.676862 containerd[1463]: time="2025-07-07T01:45:04.676546415Z" level=info msg="CreateContainer within sandbox \"25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 01:45:04.706196 containerd[1463]: 
time="2025-07-07T01:45:04.706139753Z" level=info msg="CreateContainer within sandbox \"25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"04815b7bc1df164e74e60e426428160f4ff08190df65c6473680ca0b0b9df28d\"" Jul 7 01:45:04.707696 containerd[1463]: time="2025-07-07T01:45:04.707652634Z" level=info msg="StartContainer for \"04815b7bc1df164e74e60e426428160f4ff08190df65c6473680ca0b0b9df28d\"" Jul 7 01:45:04.776545 systemd[1]: Started cri-containerd-04815b7bc1df164e74e60e426428160f4ff08190df65c6473680ca0b0b9df28d.scope - libcontainer container 04815b7bc1df164e74e60e426428160f4ff08190df65c6473680ca0b0b9df28d. Jul 7 01:45:04.835350 containerd[1463]: time="2025-07-07T01:45:04.834717942Z" level=info msg="StartContainer for \"04815b7bc1df164e74e60e426428160f4ff08190df65c6473680ca0b0b9df28d\" returns successfully" Jul 7 01:45:05.508125 containerd[1463]: time="2025-07-07T01:45:05.506963137Z" level=info msg="StopContainer for \"90558c13c288a65af455f6b1d5e1aede5bd7d5318aa8952f680083c54d32236e\" with timeout 30 (s)" Jul 7 01:45:05.508547 containerd[1463]: time="2025-07-07T01:45:05.508210998Z" level=info msg="StopContainer for \"04815b7bc1df164e74e60e426428160f4ff08190df65c6473680ca0b0b9df28d\" with timeout 30 (s)" Jul 7 01:45:05.516842 containerd[1463]: time="2025-07-07T01:45:05.516669824Z" level=info msg="Stop container \"90558c13c288a65af455f6b1d5e1aede5bd7d5318aa8952f680083c54d32236e\" with signal terminated" Jul 7 01:45:05.518635 containerd[1463]: time="2025-07-07T01:45:05.518580564Z" level=info msg="Stop container \"04815b7bc1df164e74e60e426428160f4ff08190df65c6473680ca0b0b9df28d\" with signal terminated" Jul 7 01:45:05.557347 kubelet[2614]: I0707 01:45:05.554468 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7f84455d77-m9nc9" podStartSLOduration=27.526920565 podStartE2EDuration="48.55406276s" podCreationTimestamp="2025-07-07 01:44:17 +0000 UTC" firstStartedPulling="2025-07-07 01:44:43.646021818 +0000 UTC m=+49.983013062" lastFinishedPulling="2025-07-07 01:45:04.673164003 +0000 UTC m=+71.010155257" observedRunningTime="2025-07-07 01:45:05.552962417 +0000 UTC m=+71.889953651" watchObservedRunningTime="2025-07-07 01:45:05.55406276 +0000 UTC m=+71.891054004" Jul 7 01:45:05.571807 systemd[1]: cri-containerd-04815b7bc1df164e74e60e426428160f4ff08190df65c6473680ca0b0b9df28d.scope: Deactivated successfully. Jul 7 01:45:05.586278 systemd[1]: cri-containerd-90558c13c288a65af455f6b1d5e1aede5bd7d5318aa8952f680083c54d32236e.scope: Deactivated successfully. Jul 7 01:45:05.623875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90558c13c288a65af455f6b1d5e1aede5bd7d5318aa8952f680083c54d32236e-rootfs.mount: Deactivated successfully. 
Jul 7 01:45:05.634145 containerd[1463]: time="2025-07-07T01:45:05.633994307Z" level=info msg="shim disconnected" id=90558c13c288a65af455f6b1d5e1aede5bd7d5318aa8952f680083c54d32236e namespace=k8s.io Jul 7 01:45:05.634360 containerd[1463]: time="2025-07-07T01:45:05.634141304Z" level=warning msg="cleaning up after shim disconnected" id=90558c13c288a65af455f6b1d5e1aede5bd7d5318aa8952f680083c54d32236e namespace=k8s.io Jul 7 01:45:05.634360 containerd[1463]: time="2025-07-07T01:45:05.634164517Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 01:45:05.644890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04815b7bc1df164e74e60e426428160f4ff08190df65c6473680ca0b0b9df28d-rootfs.mount: Deactivated successfully. Jul 7 01:45:06.438836 containerd[1463]: time="2025-07-07T01:45:06.438498286Z" level=info msg="StopContainer for \"90558c13c288a65af455f6b1d5e1aede5bd7d5318aa8952f680083c54d32236e\" returns successfully" Jul 7 01:45:06.442453 containerd[1463]: time="2025-07-07T01:45:06.440346817Z" level=info msg="shim disconnected" id=04815b7bc1df164e74e60e426428160f4ff08190df65c6473680ca0b0b9df28d namespace=k8s.io Jul 7 01:45:06.442453 containerd[1463]: time="2025-07-07T01:45:06.440449842Z" level=warning msg="cleaning up after shim disconnected" id=04815b7bc1df164e74e60e426428160f4ff08190df65c6473680ca0b0b9df28d namespace=k8s.io Jul 7 01:45:06.442453 containerd[1463]: time="2025-07-07T01:45:06.440471222Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 01:45:06.493545 containerd[1463]: time="2025-07-07T01:45:06.493347779Z" level=info msg="StopContainer for \"04815b7bc1df164e74e60e426428160f4ff08190df65c6473680ca0b0b9df28d\" returns successfully" Jul 7 01:45:06.497384 containerd[1463]: time="2025-07-07T01:45:06.495779570Z" level=info msg="StopPodSandbox for \"25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b\"" Jul 7 01:45:06.497384 containerd[1463]: time="2025-07-07T01:45:06.496085355Z" level=info msg="Container to stop \"04815b7bc1df164e74e60e426428160f4ff08190df65c6473680ca0b0b9df28d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 01:45:06.497384 containerd[1463]: time="2025-07-07T01:45:06.496191215Z" level=info msg="Container to stop \"90558c13c288a65af455f6b1d5e1aede5bd7d5318aa8952f680083c54d32236e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 01:45:06.510857 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b-shm.mount: Deactivated successfully. Jul 7 01:45:06.525825 systemd[1]: cri-containerd-25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b.scope: Deactivated successfully. Jul 7 01:45:06.557324 containerd[1463]: time="2025-07-07T01:45:06.555728940Z" level=info msg="shim disconnected" id=25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b namespace=k8s.io Jul 7 01:45:06.556830 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b-rootfs.mount: Deactivated successfully. 
Jul 7 01:45:06.557632 containerd[1463]: time="2025-07-07T01:45:06.557607799Z" level=warning msg="cleaning up after shim disconnected" id=25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b namespace=k8s.io Jul 7 01:45:06.558144 containerd[1463]: time="2025-07-07T01:45:06.558054380Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 01:45:06.574337 containerd[1463]: time="2025-07-07T01:45:06.573729558Z" level=warning msg="cleanup warnings time=\"2025-07-07T01:45:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 7 01:45:06.661794 systemd-networkd[1378]: cali3190f25df87: Link DOWN Jul 7 01:45:06.661805 systemd-networkd[1378]: cali3190f25df87: Lost carrier Jul 7 01:45:06.793550 containerd[1463]: 2025-07-07 01:45:06.655 [INFO][5661] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" Jul 7 01:45:06.793550 containerd[1463]: 2025-07-07 01:45:06.655 [INFO][5661] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" iface="eth0" netns="/var/run/netns/cni-7ab1d669-b991-2b90-77e8-98e4ac5d363e" Jul 7 01:45:06.793550 containerd[1463]: 2025-07-07 01:45:06.656 [INFO][5661] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" iface="eth0" netns="/var/run/netns/cni-7ab1d669-b991-2b90-77e8-98e4ac5d363e" Jul 7 01:45:06.793550 containerd[1463]: 2025-07-07 01:45:06.670 [INFO][5661] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" after=14.454379ms iface="eth0" netns="/var/run/netns/cni-7ab1d669-b991-2b90-77e8-98e4ac5d363e" Jul 7 01:45:06.793550 containerd[1463]: 2025-07-07 01:45:06.670 [INFO][5661] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" Jul 7 01:45:06.793550 containerd[1463]: 2025-07-07 01:45:06.670 [INFO][5661] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" Jul 7 01:45:06.793550 containerd[1463]: 2025-07-07 01:45:06.714 [INFO][5672] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" HandleID="k8s-pod-network.25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0" Jul 7 01:45:06.793550 containerd[1463]: 2025-07-07 01:45:06.714 [INFO][5672] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:45:06.793550 containerd[1463]: 2025-07-07 01:45:06.714 [INFO][5672] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 01:45:06.793550 containerd[1463]: 2025-07-07 01:45:06.787 [INFO][5672] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" HandleID="k8s-pod-network.25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0" Jul 7 01:45:06.793550 containerd[1463]: 2025-07-07 01:45:06.787 [INFO][5672] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" HandleID="k8s-pod-network.25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0" Jul 7 01:45:06.793550 containerd[1463]: 2025-07-07 01:45:06.790 [INFO][5672] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:45:06.793550 containerd[1463]: 2025-07-07 01:45:06.791 [INFO][5661] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" Jul 7 01:45:06.796486 containerd[1463]: time="2025-07-07T01:45:06.794020413Z" level=info msg="TearDown network for sandbox \"25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b\" successfully" Jul 7 01:45:06.796486 containerd[1463]: time="2025-07-07T01:45:06.794056871Z" level=info msg="StopPodSandbox for \"25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b\" returns successfully" Jul 7 01:45:06.797654 systemd[1]: run-netns-cni\x2d7ab1d669\x2db991\x2d2b90\x2d77e8\x2d98e4ac5d363e.mount: Deactivated successfully. Jul 7 01:45:06.861105 kubelet[2614]: I0707 01:45:06.861048 2614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6e744049-be52-495d-b225-079659d54e9f-whisker-backend-key-pair\") pod \"6e744049-be52-495d-b225-079659d54e9f\" (UID: \"6e744049-be52-495d-b225-079659d54e9f\") " Jul 7 01:45:06.861595 kubelet[2614]: I0707 01:45:06.861130 2614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e744049-be52-495d-b225-079659d54e9f-whisker-ca-bundle\") pod \"6e744049-be52-495d-b225-079659d54e9f\" (UID: \"6e744049-be52-495d-b225-079659d54e9f\") " Jul 7 01:45:06.861595 kubelet[2614]: I0707 01:45:06.861166 2614 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxw4m\" (UniqueName: \"kubernetes.io/projected/6e744049-be52-495d-b225-079659d54e9f-kube-api-access-qxw4m\") pod \"6e744049-be52-495d-b225-079659d54e9f\" (UID: \"6e744049-be52-495d-b225-079659d54e9f\") " Jul 7 01:45:06.865356 kubelet[2614]: I0707 01:45:06.863772 2614 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e744049-be52-495d-b225-079659d54e9f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6e744049-be52-495d-b225-079659d54e9f" (UID: "6e744049-be52-495d-b225-079659d54e9f"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 01:45:06.870323 kubelet[2614]: I0707 01:45:06.869003 2614 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e744049-be52-495d-b225-079659d54e9f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6e744049-be52-495d-b225-079659d54e9f" (UID: "6e744049-be52-495d-b225-079659d54e9f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 01:45:06.872440 kubelet[2614]: I0707 01:45:06.871483 2614 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e744049-be52-495d-b225-079659d54e9f-kube-api-access-qxw4m" (OuterVolumeSpecName: "kube-api-access-qxw4m") pod "6e744049-be52-495d-b225-079659d54e9f" (UID: "6e744049-be52-495d-b225-079659d54e9f"). InnerVolumeSpecName "kube-api-access-qxw4m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 01:45:06.872955 systemd[1]: var-lib-kubelet-pods-6e744049\x2dbe52\x2d495d\x2db225\x2d079659d54e9f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqxw4m.mount: Deactivated successfully. Jul 7 01:45:06.873080 systemd[1]: var-lib-kubelet-pods-6e744049\x2dbe52\x2d495d\x2db225\x2d079659d54e9f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 7 01:45:06.961827 kubelet[2614]: I0707 01:45:06.961700 2614 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qxw4m\" (UniqueName: \"kubernetes.io/projected/6e744049-be52-495d-b225-079659d54e9f-kube-api-access-qxw4m\") on node \"ci-4081-3-4-7-c803550fde.novalocal\" DevicePath \"\"" Jul 7 01:45:06.961827 kubelet[2614]: I0707 01:45:06.961798 2614 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6e744049-be52-495d-b225-079659d54e9f-whisker-backend-key-pair\") on node \"ci-4081-3-4-7-c803550fde.novalocal\" DevicePath \"\"" Jul 7 01:45:06.961827 kubelet[2614]: I0707 01:45:06.961828 2614 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e744049-be52-495d-b225-079659d54e9f-whisker-ca-bundle\") on node \"ci-4081-3-4-7-c803550fde.novalocal\" DevicePath \"\"" Jul 7 01:45:07.515126 kubelet[2614]: I0707 01:45:07.514135 2614 scope.go:117] "RemoveContainer" containerID="04815b7bc1df164e74e60e426428160f4ff08190df65c6473680ca0b0b9df28d" Jul 7 01:45:07.538660 containerd[1463]: time="2025-07-07T01:45:07.537464051Z" level=info msg="RemoveContainer for \"04815b7bc1df164e74e60e426428160f4ff08190df65c6473680ca0b0b9df28d\"" Jul 7 01:45:07.542761 systemd[1]: Removed slice kubepods-besteffort-pod6e744049_be52_495d_b225_079659d54e9f.slice - libcontainer container kubepods-besteffort-pod6e744049_be52_495d_b225_079659d54e9f.slice. 
Jul 7 01:45:07.563866 containerd[1463]: time="2025-07-07T01:45:07.562541907Z" level=info msg="RemoveContainer for \"04815b7bc1df164e74e60e426428160f4ff08190df65c6473680ca0b0b9df28d\" returns successfully" Jul 7 01:45:07.564168 kubelet[2614]: I0707 01:45:07.563637 2614 scope.go:117] "RemoveContainer" containerID="90558c13c288a65af455f6b1d5e1aede5bd7d5318aa8952f680083c54d32236e" Jul 7 01:45:07.566501 containerd[1463]: time="2025-07-07T01:45:07.566453074Z" level=info msg="RemoveContainer for \"90558c13c288a65af455f6b1d5e1aede5bd7d5318aa8952f680083c54d32236e\"" Jul 7 01:45:07.574120 containerd[1463]: time="2025-07-07T01:45:07.573191847Z" level=info msg="RemoveContainer for \"90558c13c288a65af455f6b1d5e1aede5bd7d5318aa8952f680083c54d32236e\" returns successfully" Jul 7 01:45:07.711245 kubelet[2614]: I0707 01:45:07.711145 2614 memory_manager.go:355] "RemoveStaleState removing state" podUID="6e744049-be52-495d-b225-079659d54e9f" containerName="whisker" Jul 7 01:45:07.711245 kubelet[2614]: I0707 01:45:07.711232 2614 memory_manager.go:355] "RemoveStaleState removing state" podUID="6e744049-be52-495d-b225-079659d54e9f" containerName="whisker-backend" Jul 7 01:45:07.736054 systemd[1]: Created slice kubepods-besteffort-pod42755d6f_016d_4058_8fd5_90932768294f.slice - libcontainer container kubepods-besteffort-pod42755d6f_016d_4058_8fd5_90932768294f.slice. Jul 7 01:45:07.771393 kubelet[2614]: I0707 01:45:07.769565 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42755d6f-016d-4058-8fd5-90932768294f-whisker-ca-bundle\") pod \"whisker-5855476795-69dng\" (UID: \"42755d6f-016d-4058-8fd5-90932768294f\") " pod="calico-system/whisker-5855476795-69dng" Jul 7 01:45:07.772530 kubelet[2614]: I0707 01:45:07.772393 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/42755d6f-016d-4058-8fd5-90932768294f-whisker-backend-key-pair\") pod \"whisker-5855476795-69dng\" (UID: \"42755d6f-016d-4058-8fd5-90932768294f\") " pod="calico-system/whisker-5855476795-69dng" Jul 7 01:45:07.772530 kubelet[2614]: I0707 01:45:07.772444 2614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q95f\" (UniqueName: \"kubernetes.io/projected/42755d6f-016d-4058-8fd5-90932768294f-kube-api-access-9q95f\") pod \"whisker-5855476795-69dng\" (UID: \"42755d6f-016d-4058-8fd5-90932768294f\") " pod="calico-system/whisker-5855476795-69dng" Jul 7 01:45:07.829377 kubelet[2614]: I0707 01:45:07.828899 2614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e744049-be52-495d-b225-079659d54e9f" path="/var/lib/kubelet/pods/6e744049-be52-495d-b225-079659d54e9f/volumes" Jul 7 01:45:08.043860 containerd[1463]: time="2025-07-07T01:45:08.043593906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5855476795-69dng,Uid:42755d6f-016d-4058-8fd5-90932768294f,Namespace:calico-system,Attempt:0,}" Jul 7 01:45:08.387079 systemd-networkd[1378]: calif8328e217aa: Link UP Jul 7 01:45:08.390799 systemd-networkd[1378]: calif8328e217aa: Gained carrier Jul 7 01:45:08.443351 containerd[1463]: 2025-07-07 01:45:08.175 [INFO][5700] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--5855476795--69dng-eth0 whisker-5855476795- calico-system 
42755d6f-016d-4058-8fd5-90932768294f 1096 0 2025-07-07 01:45:07 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5855476795 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-4-7-c803550fde.novalocal whisker-5855476795-69dng eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif8328e217aa [] [] }} ContainerID="5bd3496b80f4e775b573e2ca8d89966ed2da575b412841b111cb08700a9800b2" Namespace="calico-system" Pod="whisker-5855476795-69dng" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--5855476795--69dng-" Jul 7 01:45:08.443351 containerd[1463]: 2025-07-07 01:45:08.175 [INFO][5700] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5bd3496b80f4e775b573e2ca8d89966ed2da575b412841b111cb08700a9800b2" Namespace="calico-system" Pod="whisker-5855476795-69dng" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--5855476795--69dng-eth0" Jul 7 01:45:08.443351 containerd[1463]: 2025-07-07 01:45:08.269 [INFO][5712] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5bd3496b80f4e775b573e2ca8d89966ed2da575b412841b111cb08700a9800b2" HandleID="k8s-pod-network.5bd3496b80f4e775b573e2ca8d89966ed2da575b412841b111cb08700a9800b2" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--5855476795--69dng-eth0" Jul 7 01:45:08.443351 containerd[1463]: 2025-07-07 01:45:08.269 [INFO][5712] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5bd3496b80f4e775b573e2ca8d89966ed2da575b412841b111cb08700a9800b2" HandleID="k8s-pod-network.5bd3496b80f4e775b573e2ca8d89966ed2da575b412841b111cb08700a9800b2" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--5855476795--69dng-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123c20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-4-7-c803550fde.novalocal", "pod":"whisker-5855476795-69dng", "timestamp":"2025-07-07 01:45:08.26975624 +0000 UTC"}, Hostname:"ci-4081-3-4-7-c803550fde.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 01:45:08.443351 containerd[1463]: 2025-07-07 01:45:08.270 [INFO][5712] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:45:08.443351 containerd[1463]: 2025-07-07 01:45:08.270 [INFO][5712] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 01:45:08.443351 containerd[1463]: 2025-07-07 01:45:08.270 [INFO][5712] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-4-7-c803550fde.novalocal' Jul 7 01:45:08.443351 containerd[1463]: 2025-07-07 01:45:08.291 [INFO][5712] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5bd3496b80f4e775b573e2ca8d89966ed2da575b412841b111cb08700a9800b2" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:45:08.443351 containerd[1463]: 2025-07-07 01:45:08.308 [INFO][5712] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:45:08.443351 containerd[1463]: 2025-07-07 01:45:08.321 [INFO][5712] ipam/ipam.go 511: Trying affinity for 192.168.2.0/26 host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:45:08.443351 containerd[1463]: 2025-07-07 01:45:08.324 [INFO][5712] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.0/26 host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:45:08.443351 containerd[1463]: 2025-07-07 01:45:08.329 [INFO][5712] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.0/26 host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:45:08.443351 containerd[1463]: 2025-07-07 01:45:08.330 [INFO][5712] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.0/26 handle="k8s-pod-network.5bd3496b80f4e775b573e2ca8d89966ed2da575b412841b111cb08700a9800b2" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:45:08.443351 containerd[1463]: 2025-07-07 01:45:08.333 [INFO][5712] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5bd3496b80f4e775b573e2ca8d89966ed2da575b412841b111cb08700a9800b2 Jul 7 01:45:08.443351 containerd[1463]: 2025-07-07 01:45:08.357 [INFO][5712] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.0/26 handle="k8s-pod-network.5bd3496b80f4e775b573e2ca8d89966ed2da575b412841b111cb08700a9800b2" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:45:08.443351 containerd[1463]: 2025-07-07 01:45:08.373 [INFO][5712] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.9/26] block=192.168.2.0/26 handle="k8s-pod-network.5bd3496b80f4e775b573e2ca8d89966ed2da575b412841b111cb08700a9800b2" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:45:08.443351 containerd[1463]: 2025-07-07 01:45:08.374 [INFO][5712] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.9/26] handle="k8s-pod-network.5bd3496b80f4e775b573e2ca8d89966ed2da575b412841b111cb08700a9800b2" host="ci-4081-3-4-7-c803550fde.novalocal" Jul 7 01:45:08.443351 containerd[1463]: 2025-07-07 01:45:08.374 [INFO][5712] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 01:45:08.443351 containerd[1463]: 2025-07-07 01:45:08.374 [INFO][5712] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.9/26] IPv6=[] ContainerID="5bd3496b80f4e775b573e2ca8d89966ed2da575b412841b111cb08700a9800b2" HandleID="k8s-pod-network.5bd3496b80f4e775b573e2ca8d89966ed2da575b412841b111cb08700a9800b2" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--5855476795--69dng-eth0" Jul 7 01:45:08.446539 containerd[1463]: 2025-07-07 01:45:08.378 [INFO][5700] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5bd3496b80f4e775b573e2ca8d89966ed2da575b412841b111cb08700a9800b2" Namespace="calico-system" Pod="whisker-5855476795-69dng" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--5855476795--69dng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--5855476795--69dng-eth0", GenerateName:"whisker-5855476795-", Namespace:"calico-system", SelfLink:"", UID:"42755d6f-016d-4058-8fd5-90932768294f", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 45, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5855476795", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"", Pod:"whisker-5855476795-69dng", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.2.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif8328e217aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:45:08.446539 containerd[1463]: 2025-07-07 01:45:08.378 [INFO][5700] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.9/32] ContainerID="5bd3496b80f4e775b573e2ca8d89966ed2da575b412841b111cb08700a9800b2" Namespace="calico-system" Pod="whisker-5855476795-69dng" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--5855476795--69dng-eth0" Jul 7 01:45:08.446539 containerd[1463]: 2025-07-07 01:45:08.378 [INFO][5700] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif8328e217aa ContainerID="5bd3496b80f4e775b573e2ca8d89966ed2da575b412841b111cb08700a9800b2" Namespace="calico-system" Pod="whisker-5855476795-69dng" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--5855476795--69dng-eth0" Jul 7 01:45:08.446539 containerd[1463]: 2025-07-07 01:45:08.394 [INFO][5700] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5bd3496b80f4e775b573e2ca8d89966ed2da575b412841b111cb08700a9800b2" Namespace="calico-system" Pod="whisker-5855476795-69dng" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--5855476795--69dng-eth0" Jul 7 01:45:08.446539 containerd[1463]: 2025-07-07 01:45:08.396 [INFO][5700] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5bd3496b80f4e775b573e2ca8d89966ed2da575b412841b111cb08700a9800b2" Namespace="calico-system" Pod="whisker-5855476795-69dng" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--5855476795--69dng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--5855476795--69dng-eth0", GenerateName:"whisker-5855476795-", Namespace:"calico-system", SelfLink:"", UID:"42755d6f-016d-4058-8fd5-90932768294f", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 45, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5855476795", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"5bd3496b80f4e775b573e2ca8d89966ed2da575b412841b111cb08700a9800b2", Pod:"whisker-5855476795-69dng", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.2.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif8328e217aa", MAC:"52:26:46:e1:e6:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:45:08.446539 containerd[1463]: 2025-07-07 01:45:08.432 [INFO][5700] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5bd3496b80f4e775b573e2ca8d89966ed2da575b412841b111cb08700a9800b2" Namespace="calico-system" Pod="whisker-5855476795-69dng" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--5855476795--69dng-eth0" Jul 7 01:45:08.499308 containerd[1463]: time="2025-07-07T01:45:08.498594354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 01:45:08.499308 containerd[1463]: time="2025-07-07T01:45:08.498657783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 01:45:08.499308 containerd[1463]: time="2025-07-07T01:45:08.498676979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:45:08.499308 containerd[1463]: time="2025-07-07T01:45:08.498810972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 01:45:08.562477 systemd[1]: Started cri-containerd-5bd3496b80f4e775b573e2ca8d89966ed2da575b412841b111cb08700a9800b2.scope - libcontainer container 5bd3496b80f4e775b573e2ca8d89966ed2da575b412841b111cb08700a9800b2. 
Jul 7 01:45:08.667654 containerd[1463]: time="2025-07-07T01:45:08.667468040Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:45:08.670982 containerd[1463]: time="2025-07-07T01:45:08.670926694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784"
Jul 7 01:45:08.672401 containerd[1463]: time="2025-07-07T01:45:08.672367989Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:45:08.676466 containerd[1463]: time="2025-07-07T01:45:08.676427023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:45:08.677794 containerd[1463]: time="2025-07-07T01:45:08.677749895Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 4.002619417s"
Jul 7 01:45:08.677854 containerd[1463]: time="2025-07-07T01:45:08.677800861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\""
Jul 7 01:45:08.681426 containerd[1463]: time="2025-07-07T01:45:08.680981992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\""
Jul 7 01:45:08.684846 containerd[1463]: time="2025-07-07T01:45:08.684056593Z" level=info msg="CreateContainer within sandbox \"d81d11f64fdc6b2a208deff58b31d0596254ebca5d88d8d4296475eed2608cb7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 7 01:45:08.720767 containerd[1463]: time="2025-07-07T01:45:08.720682093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5855476795-69dng,Uid:42755d6f-016d-4058-8fd5-90932768294f,Namespace:calico-system,Attempt:0,} returns sandbox id \"5bd3496b80f4e775b573e2ca8d89966ed2da575b412841b111cb08700a9800b2\""
Jul 7 01:45:08.726637 containerd[1463]: time="2025-07-07T01:45:08.726495993Z" level=info msg="CreateContainer within sandbox \"5bd3496b80f4e775b573e2ca8d89966ed2da575b412841b111cb08700a9800b2\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Jul 7 01:45:08.732196 containerd[1463]: time="2025-07-07T01:45:08.732077515Z" level=info msg="CreateContainer within sandbox \"d81d11f64fdc6b2a208deff58b31d0596254ebca5d88d8d4296475eed2608cb7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7466cf0ff91f23296d68c1889d432537a63459f12ede83a4a1e2e51750c74e97\""
Jul 7 01:45:08.736454 containerd[1463]: time="2025-07-07T01:45:08.735990825Z" level=info msg="StartContainer for \"7466cf0ff91f23296d68c1889d432537a63459f12ede83a4a1e2e51750c74e97\""
Jul 7 01:45:08.761037 containerd[1463]: time="2025-07-07T01:45:08.760987715Z" level=info msg="CreateContainer within sandbox \"5bd3496b80f4e775b573e2ca8d89966ed2da575b412841b111cb08700a9800b2\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"2947653114dbd95e9e34573f19786593ac8f5d5403287033130e8934f91dc4a0\""
Jul 7 01:45:08.763350 containerd[1463]: time="2025-07-07T01:45:08.762630148Z" level=info msg="StartContainer for \"2947653114dbd95e9e34573f19786593ac8f5d5403287033130e8934f91dc4a0\""
Jul 7 01:45:08.782758 systemd[1]: Started cri-containerd-7466cf0ff91f23296d68c1889d432537a63459f12ede83a4a1e2e51750c74e97.scope - libcontainer container 7466cf0ff91f23296d68c1889d432537a63459f12ede83a4a1e2e51750c74e97.
Jul 7 01:45:08.819943 systemd[1]: Started cri-containerd-2947653114dbd95e9e34573f19786593ac8f5d5403287033130e8934f91dc4a0.scope - libcontainer container 2947653114dbd95e9e34573f19786593ac8f5d5403287033130e8934f91dc4a0.
Jul 7 01:45:08.894051 containerd[1463]: time="2025-07-07T01:45:08.892563354Z" level=info msg="StartContainer for \"7466cf0ff91f23296d68c1889d432537a63459f12ede83a4a1e2e51750c74e97\" returns successfully"
Jul 7 01:45:08.918810 containerd[1463]: time="2025-07-07T01:45:08.918577148Z" level=info msg="StartContainer for \"2947653114dbd95e9e34573f19786593ac8f5d5403287033130e8934f91dc4a0\" returns successfully"
Jul 7 01:45:08.923602 containerd[1463]: time="2025-07-07T01:45:08.923519937Z" level=info msg="CreateContainer within sandbox \"5bd3496b80f4e775b573e2ca8d89966ed2da575b412841b111cb08700a9800b2\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Jul 7 01:45:08.966553 containerd[1463]: time="2025-07-07T01:45:08.966500347Z" level=info msg="CreateContainer within sandbox \"5bd3496b80f4e775b573e2ca8d89966ed2da575b412841b111cb08700a9800b2\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"e72bbb0bd01c546d88daed7e44d936d643fc6b1f7a12441f9d9baffe006afecb\""
Jul 7 01:45:08.970345 containerd[1463]: time="2025-07-07T01:45:08.969051192Z" level=info msg="StartContainer for \"e72bbb0bd01c546d88daed7e44d936d643fc6b1f7a12441f9d9baffe006afecb\""
Jul 7 01:45:08.985851 kubelet[2614]: I0707 01:45:08.985804 2614 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 7 01:45:08.987304 kubelet[2614]: I0707 01:45:08.987088 2614 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 7 01:45:09.042563 systemd[1]: Started cri-containerd-e72bbb0bd01c546d88daed7e44d936d643fc6b1f7a12441f9d9baffe006afecb.scope - libcontainer container e72bbb0bd01c546d88daed7e44d936d643fc6b1f7a12441f9d9baffe006afecb.
Jul 7 01:45:09.103716 containerd[1463]: time="2025-07-07T01:45:09.103631893Z" level=info msg="StartContainer for \"e72bbb0bd01c546d88daed7e44d936d643fc6b1f7a12441f9d9baffe006afecb\" returns successfully"
Jul 7 01:45:09.591362 kubelet[2614]: I0707 01:45:09.589364 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-lcqqq" podStartSLOduration=31.718612269 podStartE2EDuration="56.58724702s" podCreationTimestamp="2025-07-07 01:44:13 +0000 UTC" firstStartedPulling="2025-07-07 01:44:43.811505666 +0000 UTC m=+50.148496900" lastFinishedPulling="2025-07-07 01:45:08.680140397 +0000 UTC m=+75.017131651" observedRunningTime="2025-07-07 01:45:09.582489159 +0000 UTC m=+75.919480504" watchObservedRunningTime="2025-07-07 01:45:09.58724702 +0000 UTC m=+75.924238334"
Jul 7 01:45:09.643307 kubelet[2614]: I0707 01:45:09.641590 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5855476795-69dng" podStartSLOduration=2.641568259 podStartE2EDuration="2.641568259s" podCreationTimestamp="2025-07-07 01:45:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 01:45:09.636206832 +0000 UTC m=+75.973198116" watchObservedRunningTime="2025-07-07 01:45:09.641568259 +0000 UTC m=+75.978559504"
Jul 7 01:45:10.415024 systemd-networkd[1378]: calif8328e217aa: Gained IPv6LL
Jul 7 01:45:13.607728 containerd[1463]: time="2025-07-07T01:45:13.607573894Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:45:13.612048 containerd[1463]: time="2025-07-07T01:45:13.611312411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688"
Jul 7 01:45:13.614376 containerd[1463]: time="2025-07-07T01:45:13.613876539Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:45:13.621368 containerd[1463]: time="2025-07-07T01:45:13.620529765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 01:45:13.621684 containerd[1463]: time="2025-07-07T01:45:13.621643852Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 4.940612657s"
Jul 7 01:45:13.621813 containerd[1463]: time="2025-07-07T01:45:13.621784286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\""
Jul 7 01:45:13.652262 containerd[1463]: time="2025-07-07T01:45:13.651965405Z" level=info msg="CreateContainer within sandbox \"120f3298a2c222e9e4dd0bf8eecf4845812685a68b5b644718fac55c9988d3d5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jul 7 01:45:13.698961 containerd[1463]: time="2025-07-07T01:45:13.698879846Z" level=info msg="CreateContainer within sandbox \"120f3298a2c222e9e4dd0bf8eecf4845812685a68b5b644718fac55c9988d3d5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0cb11198ea1d69e513f92f65680cbf6ba7ec613ac615273bd3cca9484b700e14\""
Jul 7 01:45:13.703071 containerd[1463]: time="2025-07-07T01:45:13.703009520Z" level=info msg="StartContainer for \"0cb11198ea1d69e513f92f65680cbf6ba7ec613ac615273bd3cca9484b700e14\""
Jul 7 01:45:13.828559 systemd[1]: Started cri-containerd-0cb11198ea1d69e513f92f65680cbf6ba7ec613ac615273bd3cca9484b700e14.scope - libcontainer container 0cb11198ea1d69e513f92f65680cbf6ba7ec613ac615273bd3cca9484b700e14.
Jul 7 01:45:14.149637 containerd[1463]: time="2025-07-07T01:45:14.149185699Z" level=info msg="StartContainer for \"0cb11198ea1d69e513f92f65680cbf6ba7ec613ac615273bd3cca9484b700e14\" returns successfully"
Jul 7 01:45:14.601661 kubelet[2614]: I0707 01:45:14.600214 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5677bcf49d-d72km" podStartSLOduration=44.366404534 podStartE2EDuration="1m1.600140067s" podCreationTimestamp="2025-07-07 01:44:13 +0000 UTC" firstStartedPulling="2025-07-07 01:44:56.392780512 +0000 UTC m=+62.729771746" lastFinishedPulling="2025-07-07 01:45:13.626515995 +0000 UTC m=+79.963507279" observedRunningTime="2025-07-07 01:45:14.59749104 +0000 UTC m=+80.934482294" watchObservedRunningTime="2025-07-07 01:45:14.600140067 +0000 UTC m=+80.937131302"
Jul 7 01:45:23.571310 kubelet[2614]: I0707 01:45:23.571201 2614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 01:45:25.475475 systemd[1]: run-containerd-runc-k8s.io-67385d580f5d1d14fdf19272291e7bfbc0820d5709e82b215835a6dcd8aad500-runc.DPSrB6.mount: Deactivated successfully.
Jul 7 01:45:32.907265 systemd[1]: run-containerd-runc-k8s.io-0cb11198ea1d69e513f92f65680cbf6ba7ec613ac615273bd3cca9484b700e14-runc.ItfflV.mount: Deactivated successfully.
Jul 7 01:45:33.270662 systemd[1]: Started sshd@9-172.24.4.32:22-172.24.4.1:36368.service - OpenSSH per-connection server daemon (172.24.4.1:36368).
Jul 7 01:45:34.510702 sshd[6066]: Accepted publickey for core from 172.24.4.1 port 36368 ssh2: RSA SHA256:XsuxnznZdnYTCwjlq8QZL4w9hmgGain8XiwOK8NYnOI
Jul 7 01:45:34.512552 sshd[6066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:45:34.525991 systemd-logind[1449]: New session 12 of user core.
Jul 7 01:45:34.532571 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 7 01:45:35.345595 sshd[6066]: pam_unix(sshd:session): session closed for user core
Jul 7 01:45:35.350331 systemd[1]: sshd@9-172.24.4.32:22-172.24.4.1:36368.service: Deactivated successfully.
Jul 7 01:45:35.354119 systemd[1]: session-12.scope: Deactivated successfully.
Jul 7 01:45:35.356576 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit.
Jul 7 01:45:35.357964 systemd-logind[1449]: Removed session 12.
Jul 7 01:45:40.374783 systemd[1]: Started sshd@10-172.24.4.32:22-172.24.4.1:43442.service - OpenSSH per-connection server daemon (172.24.4.1:43442).
Jul 7 01:45:41.371718 sshd[6080]: Accepted publickey for core from 172.24.4.1 port 43442 ssh2: RSA SHA256:XsuxnznZdnYTCwjlq8QZL4w9hmgGain8XiwOK8NYnOI
Jul 7 01:45:41.374896 sshd[6080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:45:41.385903 systemd-logind[1449]: New session 13 of user core.
Jul 7 01:45:41.392579 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 7 01:45:42.243662 sshd[6080]: pam_unix(sshd:session): session closed for user core
Jul 7 01:45:42.249358 systemd[1]: sshd@10-172.24.4.32:22-172.24.4.1:43442.service: Deactivated successfully.
Jul 7 01:45:42.254493 systemd[1]: session-13.scope: Deactivated successfully.
Jul 7 01:45:42.261093 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit.
Jul 7 01:45:42.262978 systemd-logind[1449]: Removed session 13.
Jul 7 01:45:47.266916 systemd[1]: Started sshd@11-172.24.4.32:22-172.24.4.1:59924.service - OpenSSH per-connection server daemon (172.24.4.1:59924).
Jul 7 01:45:48.730185 sshd[6132]: Accepted publickey for core from 172.24.4.1 port 59924 ssh2: RSA SHA256:XsuxnznZdnYTCwjlq8QZL4w9hmgGain8XiwOK8NYnOI
Jul 7 01:45:48.765550 sshd[6132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:45:48.798713 systemd-logind[1449]: New session 14 of user core.
Jul 7 01:45:48.805974 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 7 01:45:49.554162 sshd[6132]: pam_unix(sshd:session): session closed for user core
Jul 7 01:45:49.563806 systemd[1]: sshd@11-172.24.4.32:22-172.24.4.1:59924.service: Deactivated successfully.
Jul 7 01:45:49.566212 systemd[1]: session-14.scope: Deactivated successfully.
Jul 7 01:45:49.569089 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit.
Jul 7 01:45:49.575972 systemd[1]: Started sshd@12-172.24.4.32:22-172.24.4.1:59928.service - OpenSSH per-connection server daemon (172.24.4.1:59928).
Jul 7 01:45:49.579741 systemd-logind[1449]: Removed session 14.
Jul 7 01:45:50.824900 sshd[6148]: Accepted publickey for core from 172.24.4.1 port 59928 ssh2: RSA SHA256:XsuxnznZdnYTCwjlq8QZL4w9hmgGain8XiwOK8NYnOI
Jul 7 01:45:50.827957 sshd[6148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:45:50.836173 systemd-logind[1449]: New session 15 of user core.
Jul 7 01:45:50.841471 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 7 01:45:51.490445 sshd[6148]: pam_unix(sshd:session): session closed for user core
Jul 7 01:45:51.498624 systemd[1]: sshd@12-172.24.4.32:22-172.24.4.1:59928.service: Deactivated successfully.
Jul 7 01:45:51.502448 systemd[1]: session-15.scope: Deactivated successfully.
Jul 7 01:45:51.504274 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit.
Jul 7 01:45:51.516156 systemd[1]: Started sshd@13-172.24.4.32:22-172.24.4.1:59940.service - OpenSSH per-connection server daemon (172.24.4.1:59940).
Jul 7 01:45:51.521045 systemd-logind[1449]: Removed session 15.
Jul 7 01:45:52.882359 sshd[6159]: Accepted publickey for core from 172.24.4.1 port 59940 ssh2: RSA SHA256:XsuxnznZdnYTCwjlq8QZL4w9hmgGain8XiwOK8NYnOI
Jul 7 01:45:52.889405 sshd[6159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:45:52.897580 systemd-logind[1449]: New session 16 of user core.
Jul 7 01:45:52.906189 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 7 01:45:53.660545 sshd[6159]: pam_unix(sshd:session): session closed for user core
Jul 7 01:45:53.666682 systemd[1]: sshd@13-172.24.4.32:22-172.24.4.1:59940.service: Deactivated successfully.
Jul 7 01:45:53.671910 systemd[1]: session-16.scope: Deactivated successfully.
Jul 7 01:45:53.673418 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit.
Jul 7 01:45:53.675036 systemd-logind[1449]: Removed session 16.
Jul 7 01:45:55.734767 containerd[1463]: time="2025-07-07T01:45:55.734494792Z" level=info msg="StopPodSandbox for \"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c\""
Jul 7 01:45:55.957981 containerd[1463]: 2025-07-07 01:45:55.875 [WARNING][6202] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--kube--controllers--5677bcf49d--d72km-eth0", GenerateName:"calico-kube-controllers-5677bcf49d-", Namespace:"calico-system", SelfLink:"", UID:"6e0044f3-eddc-404a-a8a5-e4a322e633c4", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 44, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5677bcf49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"120f3298a2c222e9e4dd0bf8eecf4845812685a68b5b644718fac55c9988d3d5", Pod:"calico-kube-controllers-5677bcf49d-d72km", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidd9a828692c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 01:45:55.957981 containerd[1463]: 2025-07-07 01:45:55.876 [INFO][6202] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c"
Jul 7 01:45:55.957981 containerd[1463]: 2025-07-07 01:45:55.876 [INFO][6202] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c" iface="eth0" netns=""
Jul 7 01:45:55.957981 containerd[1463]: 2025-07-07 01:45:55.876 [INFO][6202] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c"
Jul 7 01:45:55.957981 containerd[1463]: 2025-07-07 01:45:55.876 [INFO][6202] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c"
Jul 7 01:45:55.957981 containerd[1463]: 2025-07-07 01:45:55.931 [INFO][6209] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c" HandleID="k8s-pod-network.f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--kube--controllers--5677bcf49d--d72km-eth0"
Jul 7 01:45:55.957981 containerd[1463]: 2025-07-07 01:45:55.932 [INFO][6209] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 01:45:55.957981 containerd[1463]: 2025-07-07 01:45:55.932 [INFO][6209] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 01:45:55.957981 containerd[1463]: 2025-07-07 01:45:55.948 [WARNING][6209] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c" HandleID="k8s-pod-network.f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--kube--controllers--5677bcf49d--d72km-eth0"
Jul 7 01:45:55.957981 containerd[1463]: 2025-07-07 01:45:55.948 [INFO][6209] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c" HandleID="k8s-pod-network.f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--kube--controllers--5677bcf49d--d72km-eth0"
Jul 7 01:45:55.957981 containerd[1463]: 2025-07-07 01:45:55.951 [INFO][6209] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 01:45:55.957981 containerd[1463]: 2025-07-07 01:45:55.954 [INFO][6202] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c"
Jul 7 01:45:55.960557 containerd[1463]: time="2025-07-07T01:45:55.959189711Z" level=info msg="TearDown network for sandbox \"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c\" successfully"
Jul 7 01:45:55.960557 containerd[1463]: time="2025-07-07T01:45:55.959249393Z" level=info msg="StopPodSandbox for \"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c\" returns successfully"
Jul 7 01:45:55.960557 containerd[1463]: time="2025-07-07T01:45:55.960090744Z" level=info msg="RemovePodSandbox for \"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c\""
Jul 7 01:45:55.960557 containerd[1463]: time="2025-07-07T01:45:55.960152169Z" level=info msg="Forcibly stopping sandbox \"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c\""
Jul 7 01:45:56.102501 containerd[1463]: 2025-07-07 01:45:56.044 [WARNING][6223] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--kube--controllers--5677bcf49d--d72km-eth0", GenerateName:"calico-kube-controllers-5677bcf49d-", Namespace:"calico-system", SelfLink:"", UID:"6e0044f3-eddc-404a-a8a5-e4a322e633c4", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 44, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5677bcf49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-4-7-c803550fde.novalocal", ContainerID:"120f3298a2c222e9e4dd0bf8eecf4845812685a68b5b644718fac55c9988d3d5", Pod:"calico-kube-controllers-5677bcf49d-d72km", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidd9a828692c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 01:45:56.102501 containerd[1463]: 2025-07-07 01:45:56.045 [INFO][6223] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c"
Jul 7 01:45:56.102501 containerd[1463]: 2025-07-07 01:45:56.045 [INFO][6223] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c" iface="eth0" netns=""
Jul 7 01:45:56.102501 containerd[1463]: 2025-07-07 01:45:56.046 [INFO][6223] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c"
Jul 7 01:45:56.102501 containerd[1463]: 2025-07-07 01:45:56.047 [INFO][6223] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c"
Jul 7 01:45:56.102501 containerd[1463]: 2025-07-07 01:45:56.085 [INFO][6230] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c" HandleID="k8s-pod-network.f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--kube--controllers--5677bcf49d--d72km-eth0"
Jul 7 01:45:56.102501 containerd[1463]: 2025-07-07 01:45:56.085 [INFO][6230] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 01:45:56.102501 containerd[1463]: 2025-07-07 01:45:56.086 [INFO][6230] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 01:45:56.102501 containerd[1463]: 2025-07-07 01:45:56.096 [WARNING][6230] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c" HandleID="k8s-pod-network.f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--kube--controllers--5677bcf49d--d72km-eth0"
Jul 7 01:45:56.102501 containerd[1463]: 2025-07-07 01:45:56.096 [INFO][6230] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c" HandleID="k8s-pod-network.f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-calico--kube--controllers--5677bcf49d--d72km-eth0"
Jul 7 01:45:56.102501 containerd[1463]: 2025-07-07 01:45:56.098 [INFO][6230] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 01:45:56.102501 containerd[1463]: 2025-07-07 01:45:56.100 [INFO][6223] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c"
Jul 7 01:45:56.104210 containerd[1463]: time="2025-07-07T01:45:56.103244118Z" level=info msg="TearDown network for sandbox \"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c\" successfully"
Jul 7 01:45:56.109736 containerd[1463]: time="2025-07-07T01:45:56.109687566Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 7 01:45:56.110081 containerd[1463]: time="2025-07-07T01:45:56.110018538Z" level=info msg="RemovePodSandbox \"f7b4220cac37c1f9f3ad209bef159df303df52ae88606af6f5ec4b57a6ac091c\" returns successfully"
Jul 7 01:45:56.112390 containerd[1463]: time="2025-07-07T01:45:56.111636038Z" level=info msg="StopPodSandbox for \"25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b\""
Jul 7 01:45:56.215341 containerd[1463]: 2025-07-07 01:45:56.163 [WARNING][6244] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0"
Jul 7 01:45:56.215341 containerd[1463]: 2025-07-07 01:45:56.164 [INFO][6244] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b"
Jul 7 01:45:56.215341 containerd[1463]: 2025-07-07 01:45:56.164 [INFO][6244] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" iface="eth0" netns=""
Jul 7 01:45:56.215341 containerd[1463]: 2025-07-07 01:45:56.164 [INFO][6244] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b"
Jul 7 01:45:56.215341 containerd[1463]: 2025-07-07 01:45:56.164 [INFO][6244] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b"
Jul 7 01:45:56.215341 containerd[1463]: 2025-07-07 01:45:56.198 [INFO][6252] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" HandleID="k8s-pod-network.25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0"
Jul 7 01:45:56.215341 containerd[1463]: 2025-07-07 01:45:56.199 [INFO][6252] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 01:45:56.215341 containerd[1463]: 2025-07-07 01:45:56.199 [INFO][6252] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 01:45:56.215341 containerd[1463]: 2025-07-07 01:45:56.207 [WARNING][6252] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" HandleID="k8s-pod-network.25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0"
Jul 7 01:45:56.215341 containerd[1463]: 2025-07-07 01:45:56.207 [INFO][6252] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" HandleID="k8s-pod-network.25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0"
Jul 7 01:45:56.215341 containerd[1463]: 2025-07-07 01:45:56.210 [INFO][6252] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 01:45:56.215341 containerd[1463]: 2025-07-07 01:45:56.212 [INFO][6244] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b"
Jul 7 01:45:56.216939 containerd[1463]: time="2025-07-07T01:45:56.216579680Z" level=info msg="TearDown network for sandbox \"25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b\" successfully"
Jul 7 01:45:56.216939 containerd[1463]: time="2025-07-07T01:45:56.216624564Z" level=info msg="StopPodSandbox for \"25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b\" returns successfully"
Jul 7 01:45:56.218932 containerd[1463]: time="2025-07-07T01:45:56.218512742Z" level=info msg="RemovePodSandbox for \"25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b\""
Jul 7 01:45:56.218932 containerd[1463]: time="2025-07-07T01:45:56.218548289Z" level=info msg="Forcibly stopping sandbox \"25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b\""
Jul 7 01:45:56.333632 containerd[1463]: 2025-07-07 01:45:56.281 [WARNING][6267] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" WorkloadEndpoint="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0"
Jul 7 01:45:56.333632 containerd[1463]: 2025-07-07 01:45:56.282 [INFO][6267] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b"
Jul 7 01:45:56.333632 containerd[1463]: 2025-07-07 01:45:56.282 [INFO][6267] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" iface="eth0" netns=""
Jul 7 01:45:56.333632 containerd[1463]: 2025-07-07 01:45:56.282 [INFO][6267] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b"
Jul 7 01:45:56.333632 containerd[1463]: 2025-07-07 01:45:56.282 [INFO][6267] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b"
Jul 7 01:45:56.333632 containerd[1463]: 2025-07-07 01:45:56.318 [INFO][6273] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" HandleID="k8s-pod-network.25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0"
Jul 7 01:45:56.333632 containerd[1463]: 2025-07-07 01:45:56.318 [INFO][6273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 01:45:56.333632 containerd[1463]: 2025-07-07 01:45:56.318 [INFO][6273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 01:45:56.333632 containerd[1463]: 2025-07-07 01:45:56.327 [WARNING][6273] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" HandleID="k8s-pod-network.25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0"
Jul 7 01:45:56.333632 containerd[1463]: 2025-07-07 01:45:56.327 [INFO][6273] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" HandleID="k8s-pod-network.25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b" Workload="ci--4081--3--4--7--c803550fde.novalocal-k8s-whisker--7f84455d77--m9nc9-eth0"
Jul 7 01:45:56.333632 containerd[1463]: 2025-07-07 01:45:56.329 [INFO][6273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 01:45:56.333632 containerd[1463]: 2025-07-07 01:45:56.332 [INFO][6267] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b"
Jul 7 01:45:56.335077 containerd[1463]: time="2025-07-07T01:45:56.334413865Z" level=info msg="TearDown network for sandbox \"25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b\" successfully"
Jul 7 01:45:56.340039 containerd[1463]: time="2025-07-07T01:45:56.340000623Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 7 01:45:56.340384 containerd[1463]: time="2025-07-07T01:45:56.340346132Z" level=info msg="RemovePodSandbox \"25d89002c9ba3e6c5659032617bc166cbd4cb14cd14d03cca44cd68e77389b1b\" returns successfully"
Jul 7 01:45:58.686743 systemd[1]: Started sshd@14-172.24.4.32:22-172.24.4.1:43460.service - OpenSSH per-connection server daemon (172.24.4.1:43460).
Jul 7 01:45:59.868333 sshd[6280]: Accepted publickey for core from 172.24.4.1 port 43460 ssh2: RSA SHA256:XsuxnznZdnYTCwjlq8QZL4w9hmgGain8XiwOK8NYnOI
Jul 7 01:45:59.872540 sshd[6280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:45:59.889437 systemd-logind[1449]: New session 17 of user core.
Jul 7 01:45:59.896484 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 7 01:46:00.643392 sshd[6280]: pam_unix(sshd:session): session closed for user core
Jul 7 01:46:00.648956 systemd[1]: sshd@14-172.24.4.32:22-172.24.4.1:43460.service: Deactivated successfully.
Jul 7 01:46:00.654385 systemd[1]: session-17.scope: Deactivated successfully.
Jul 7 01:46:00.659036 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit.
Jul 7 01:46:00.661738 systemd-logind[1449]: Removed session 17.
Jul 7 01:46:05.670145 systemd[1]: Started sshd@15-172.24.4.32:22-172.24.4.1:51822.service - OpenSSH per-connection server daemon (172.24.4.1:51822).
Jul 7 01:46:06.832889 sshd[6299]: Accepted publickey for core from 172.24.4.1 port 51822 ssh2: RSA SHA256:XsuxnznZdnYTCwjlq8QZL4w9hmgGain8XiwOK8NYnOI
Jul 7 01:46:06.838007 sshd[6299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:46:06.856600 systemd-logind[1449]: New session 18 of user core.
Jul 7 01:46:06.864709 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 7 01:46:07.631846 sshd[6299]: pam_unix(sshd:session): session closed for user core
Jul 7 01:46:07.646123 systemd[1]: sshd@15-172.24.4.32:22-172.24.4.1:51822.service: Deactivated successfully.
Jul 7 01:46:07.653553 systemd[1]: session-18.scope: Deactivated successfully.
Jul 7 01:46:07.657716 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit.
Jul 7 01:46:07.661190 systemd-logind[1449]: Removed session 18.
Jul 7 01:46:12.654758 systemd[1]: Started sshd@16-172.24.4.32:22-172.24.4.1:51836.service - OpenSSH per-connection server daemon (172.24.4.1:51836).
Jul 7 01:46:13.978410 sshd[6318]: Accepted publickey for core from 172.24.4.1 port 51836 ssh2: RSA SHA256:XsuxnznZdnYTCwjlq8QZL4w9hmgGain8XiwOK8NYnOI
Jul 7 01:46:13.980324 sshd[6318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:46:13.990110 systemd-logind[1449]: New session 19 of user core.
Jul 7 01:46:13.997534 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 7 01:46:14.876163 sshd[6318]: pam_unix(sshd:session): session closed for user core
Jul 7 01:46:14.881842 systemd[1]: sshd@16-172.24.4.32:22-172.24.4.1:51836.service: Deactivated successfully.
Jul 7 01:46:14.890854 systemd[1]: session-19.scope: Deactivated successfully.
Jul 7 01:46:14.891931 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit.
Jul 7 01:46:14.893099 systemd-logind[1449]: Removed session 19.
Jul 7 01:46:19.892694 systemd[1]: Started sshd@17-172.24.4.32:22-172.24.4.1:37114.service - OpenSSH per-connection server daemon (172.24.4.1:37114).
Jul 7 01:46:20.915974 sshd[6415]: Accepted publickey for core from 172.24.4.1 port 37114 ssh2: RSA SHA256:XsuxnznZdnYTCwjlq8QZL4w9hmgGain8XiwOK8NYnOI
Jul 7 01:46:20.919708 sshd[6415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:46:20.931975 systemd-logind[1449]: New session 20 of user core.
Jul 7 01:46:20.936151 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 7 01:46:21.529451 sshd[6415]: pam_unix(sshd:session): session closed for user core
Jul 7 01:46:21.540089 systemd[1]: sshd@17-172.24.4.32:22-172.24.4.1:37114.service: Deactivated successfully.
Jul 7 01:46:21.544568 systemd[1]: session-20.scope: Deactivated successfully.
Jul 7 01:46:21.548667 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit.
Jul 7 01:46:21.563177 systemd[1]: Started sshd@18-172.24.4.32:22-172.24.4.1:37128.service - OpenSSH per-connection server daemon (172.24.4.1:37128).
Jul 7 01:46:21.568180 systemd-logind[1449]: Removed session 20.
Jul 7 01:46:22.750952 sshd[6428]: Accepted publickey for core from 172.24.4.1 port 37128 ssh2: RSA SHA256:XsuxnznZdnYTCwjlq8QZL4w9hmgGain8XiwOK8NYnOI
Jul 7 01:46:22.752609 sshd[6428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:46:22.761141 systemd-logind[1449]: New session 21 of user core.
Jul 7 01:46:22.767340 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 7 01:46:24.102640 sshd[6428]: pam_unix(sshd:session): session closed for user core
Jul 7 01:46:24.112333 systemd[1]: sshd@18-172.24.4.32:22-172.24.4.1:37128.service: Deactivated successfully.
Jul 7 01:46:24.116782 systemd[1]: session-21.scope: Deactivated successfully.
Jul 7 01:46:24.119007 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit.
Jul 7 01:46:24.127398 systemd[1]: Started sshd@19-172.24.4.32:22-172.24.4.1:36644.service - OpenSSH per-connection server daemon (172.24.4.1:36644).
Jul 7 01:46:24.131106 systemd-logind[1449]: Removed session 21.
Jul 7 01:46:25.439642 sshd[6439]: Accepted publickey for core from 172.24.4.1 port 36644 ssh2: RSA SHA256:XsuxnznZdnYTCwjlq8QZL4w9hmgGain8XiwOK8NYnOI
Jul 7 01:46:25.444260 sshd[6439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:46:25.455498 systemd-logind[1449]: New session 22 of user core.
Jul 7 01:46:25.461555 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 7 01:46:27.773445 sshd[6439]: pam_unix(sshd:session): session closed for user core
Jul 7 01:46:27.782204 systemd[1]: sshd@19-172.24.4.32:22-172.24.4.1:36644.service: Deactivated successfully.
Jul 7 01:46:27.788835 systemd[1]: session-22.scope: Deactivated successfully.
Jul 7 01:46:27.795481 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit.
Jul 7 01:46:27.807697 systemd[1]: Started sshd@20-172.24.4.32:22-172.24.4.1:36646.service - OpenSSH per-connection server daemon (172.24.4.1:36646).
Jul 7 01:46:27.813126 systemd-logind[1449]: Removed session 22.
Jul 7 01:46:29.048268 sshd[6485]: Accepted publickey for core from 172.24.4.1 port 36646 ssh2: RSA SHA256:XsuxnznZdnYTCwjlq8QZL4w9hmgGain8XiwOK8NYnOI
Jul 7 01:46:29.050895 sshd[6485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:46:29.058985 systemd-logind[1449]: New session 23 of user core.
Jul 7 01:46:29.066488 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 7 01:46:30.349011 sshd[6485]: pam_unix(sshd:session): session closed for user core
Jul 7 01:46:30.359578 systemd[1]: sshd@20-172.24.4.32:22-172.24.4.1:36646.service: Deactivated successfully.
Jul 7 01:46:30.362584 systemd[1]: session-23.scope: Deactivated successfully.
Jul 7 01:46:30.365816 systemd-logind[1449]: Session 23 logged out. Waiting for processes to exit.
Jul 7 01:46:30.380903 systemd[1]: Started sshd@21-172.24.4.32:22-172.24.4.1:36652.service - OpenSSH per-connection server daemon (172.24.4.1:36652).
Jul 7 01:46:30.382839 systemd-logind[1449]: Removed session 23.
Jul 7 01:46:31.704963 sshd[6498]: Accepted publickey for core from 172.24.4.1 port 36652 ssh2: RSA SHA256:XsuxnznZdnYTCwjlq8QZL4w9hmgGain8XiwOK8NYnOI
Jul 7 01:46:31.707672 sshd[6498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:46:31.740304 systemd-logind[1449]: New session 24 of user core.
Jul 7 01:46:31.744788 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 7 01:46:32.486931 sshd[6498]: pam_unix(sshd:session): session closed for user core
Jul 7 01:46:32.493326 systemd[1]: sshd@21-172.24.4.32:22-172.24.4.1:36652.service: Deactivated successfully.
Jul 7 01:46:32.497007 systemd[1]: session-24.scope: Deactivated successfully.
Jul 7 01:46:32.503258 systemd-logind[1449]: Session 24 logged out. Waiting for processes to exit.
Jul 7 01:46:32.506553 systemd-logind[1449]: Removed session 24.
Jul 7 01:46:37.514618 systemd[1]: Started sshd@22-172.24.4.32:22-172.24.4.1:35418.service - OpenSSH per-connection server daemon (172.24.4.1:35418).
Jul 7 01:46:38.698121 sshd[6534]: Accepted publickey for core from 172.24.4.1 port 35418 ssh2: RSA SHA256:XsuxnznZdnYTCwjlq8QZL4w9hmgGain8XiwOK8NYnOI
Jul 7 01:46:38.700842 sshd[6534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:46:38.718420 systemd-logind[1449]: New session 25 of user core.
Jul 7 01:46:38.720557 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 7 01:46:39.473922 sshd[6534]: pam_unix(sshd:session): session closed for user core
Jul 7 01:46:39.496127 systemd[1]: sshd@22-172.24.4.32:22-172.24.4.1:35418.service: Deactivated successfully.
Jul 7 01:46:39.505118 systemd[1]: session-25.scope: Deactivated successfully.
Jul 7 01:46:39.508199 systemd-logind[1449]: Session 25 logged out. Waiting for processes to exit.
Jul 7 01:46:39.512176 systemd-logind[1449]: Removed session 25.
Jul 7 01:46:44.503726 systemd[1]: Started sshd@23-172.24.4.32:22-172.24.4.1:46938.service - OpenSSH per-connection server daemon (172.24.4.1:46938).
Jul 7 01:46:45.685824 sshd[6570]: Accepted publickey for core from 172.24.4.1 port 46938 ssh2: RSA SHA256:XsuxnznZdnYTCwjlq8QZL4w9hmgGain8XiwOK8NYnOI
Jul 7 01:46:45.688596 sshd[6570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:46:45.697179 systemd-logind[1449]: New session 26 of user core.
Jul 7 01:46:45.703771 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 7 01:46:46.613717 sshd[6570]: pam_unix(sshd:session): session closed for user core
Jul 7 01:46:46.620190 systemd[1]: sshd@23-172.24.4.32:22-172.24.4.1:46938.service: Deactivated successfully.
Jul 7 01:46:46.623265 systemd[1]: session-26.scope: Deactivated successfully.
Jul 7 01:46:46.638425 systemd-logind[1449]: Session 26 logged out. Waiting for processes to exit.
Jul 7 01:46:46.642067 systemd-logind[1449]: Removed session 26.
Jul 7 01:46:51.642310 systemd[1]: Started sshd@24-172.24.4.32:22-172.24.4.1:46952.service - OpenSSH per-connection server daemon (172.24.4.1:46952).
Jul 7 01:46:53.001152 sshd[6604]: Accepted publickey for core from 172.24.4.1 port 46952 ssh2: RSA SHA256:XsuxnznZdnYTCwjlq8QZL4w9hmgGain8XiwOK8NYnOI
Jul 7 01:46:53.003210 sshd[6604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:46:53.014794 systemd-logind[1449]: New session 27 of user core.
Jul 7 01:46:53.028897 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 7 01:46:53.876704 sshd[6604]: pam_unix(sshd:session): session closed for user core
Jul 7 01:46:53.883985 systemd-logind[1449]: Session 27 logged out. Waiting for processes to exit.
Jul 7 01:46:53.886151 systemd[1]: sshd@24-172.24.4.32:22-172.24.4.1:46952.service: Deactivated successfully.
Jul 7 01:46:53.899575 systemd[1]: session-27.scope: Deactivated successfully.
Jul 7 01:46:53.907061 systemd-logind[1449]: Removed session 27.
Jul 7 01:46:58.900391 systemd[1]: Started sshd@25-172.24.4.32:22-172.24.4.1:49338.service - OpenSSH per-connection server daemon (172.24.4.1:49338).
Jul 7 01:46:59.919594 sshd[6639]: Accepted publickey for core from 172.24.4.1 port 49338 ssh2: RSA SHA256:XsuxnznZdnYTCwjlq8QZL4w9hmgGain8XiwOK8NYnOI
Jul 7 01:46:59.921322 sshd[6639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:46:59.931726 systemd-logind[1449]: New session 28 of user core.
Jul 7 01:46:59.934541 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 7 01:47:00.529239 sshd[6639]: pam_unix(sshd:session): session closed for user core
Jul 7 01:47:00.532747 systemd-logind[1449]: Session 28 logged out. Waiting for processes to exit.
Jul 7 01:47:00.535831 systemd[1]: sshd@25-172.24.4.32:22-172.24.4.1:49338.service: Deactivated successfully.
Jul 7 01:47:00.539002 systemd[1]: session-28.scope: Deactivated successfully.
Jul 7 01:47:00.543973 systemd-logind[1449]: Removed session 28.
Jul 7 01:47:05.556832 systemd[1]: Started sshd@26-172.24.4.32:22-172.24.4.1:56782.service - OpenSSH per-connection server daemon (172.24.4.1:56782).
Jul 7 01:47:06.726994 sshd[6654]: Accepted publickey for core from 172.24.4.1 port 56782 ssh2: RSA SHA256:XsuxnznZdnYTCwjlq8QZL4w9hmgGain8XiwOK8NYnOI
Jul 7 01:47:06.733823 sshd[6654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:47:06.753791 systemd-logind[1449]: New session 29 of user core.
Jul 7 01:47:06.763676 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 7 01:47:07.502810 sshd[6654]: pam_unix(sshd:session): session closed for user core
Jul 7 01:47:07.510752 systemd[1]: sshd@26-172.24.4.32:22-172.24.4.1:56782.service: Deactivated successfully.
Jul 7 01:47:07.517100 systemd[1]: session-29.scope: Deactivated successfully.
Jul 7 01:47:07.523887 systemd-logind[1449]: Session 29 logged out. Waiting for processes to exit.
Jul 7 01:47:07.526784 systemd-logind[1449]: Removed session 29.