May 8 05:43:07.085299 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed May 7 22:54:21 -00 2025
May 8 05:43:07.085326 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90
May 8 05:43:07.085336 kernel: BIOS-provided physical RAM map:
May 8 05:43:07.085344 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 8 05:43:07.085351 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 8 05:43:07.085361 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 8 05:43:07.085369 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
May 8 05:43:07.085377 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
May 8 05:43:07.085384 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 8 05:43:07.085391 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 8 05:43:07.085398 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
May 8 05:43:07.085406 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 8 05:43:07.085413 kernel: NX (Execute Disable) protection: active
May 8 05:43:07.085420 kernel: APIC: Static calls initialized
May 8 05:43:07.085431 kernel: SMBIOS 3.0.0 present.
May 8 05:43:07.085439 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
May 8 05:43:07.085447 kernel: Hypervisor detected: KVM
May 8 05:43:07.085455 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 8 05:43:07.085462 kernel: kvm-clock: using sched offset of 3450930254 cycles
May 8 05:43:07.085472 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 8 05:43:07.085480 kernel: tsc: Detected 1996.249 MHz processor
May 8 05:43:07.085488 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 8 05:43:07.085496 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 8 05:43:07.085504 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
May 8 05:43:07.085512 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 8 05:43:07.085520 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 8 05:43:07.085528 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
May 8 05:43:07.085536 kernel: ACPI: Early table checksum verification disabled
May 8 05:43:07.085545 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
May 8 05:43:07.085553 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 05:43:07.085561 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 05:43:07.085569 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 05:43:07.085577 kernel: ACPI: FACS 0x00000000BFFE0000 000040
May 8 05:43:07.085585 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 8 05:43:07.085593 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 05:43:07.085600 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
May 8 05:43:07.085608 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
May 8 05:43:07.085618 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
May 8 05:43:07.085625 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
May 8 05:43:07.085633 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
May 8 05:43:07.085644 kernel: No NUMA configuration found
May 8 05:43:07.085652 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
May 8 05:43:07.085660 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
May 8 05:43:07.085670 kernel: Zone ranges:
May 8 05:43:07.085678 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 8 05:43:07.085686 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 8 05:43:07.085695 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
May 8 05:43:07.085703 kernel: Movable zone start for each node
May 8 05:43:07.085711 kernel: Early memory node ranges
May 8 05:43:07.085736 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 8 05:43:07.085745 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
May 8 05:43:07.085756 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
May 8 05:43:07.085764 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
May 8 05:43:07.085772 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 8 05:43:07.085780 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 8 05:43:07.085788 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
May 8 05:43:07.085796 kernel: ACPI: PM-Timer IO Port: 0x608
May 8 05:43:07.085805 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 8 05:43:07.085813 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 8 05:43:07.085821 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 8 05:43:07.085831 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 8 05:43:07.085840 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 8 05:43:07.085848 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 8 05:43:07.085856 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 8 05:43:07.085864 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 8 05:43:07.085874 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 8 05:43:07.085883 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 8 05:43:07.085892 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
May 8 05:43:07.085901 kernel: Booting paravirtualized kernel on KVM
May 8 05:43:07.085912 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 8 05:43:07.085920 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 8 05:43:07.085929 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
May 8 05:43:07.085938 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
May 8 05:43:07.085946 kernel: pcpu-alloc: [0] 0 1
May 8 05:43:07.085955 kernel: kvm-guest: PV spinlocks disabled, no host support
May 8 05:43:07.085965 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90
May 8 05:43:07.085975 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 05:43:07.085986 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 8 05:43:07.085994 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 05:43:07.086003 kernel: Fallback order for Node 0: 0
May 8 05:43:07.086012 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
May 8 05:43:07.086020 kernel: Policy zone: Normal
May 8 05:43:07.086029 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 05:43:07.086038 kernel: software IO TLB: area num 2.
May 8 05:43:07.086047 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42856K init, 2336K bss, 227308K reserved, 0K cma-reserved)
May 8 05:43:07.086056 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 8 05:43:07.086066 kernel: ftrace: allocating 37944 entries in 149 pages
May 8 05:43:07.086075 kernel: ftrace: allocated 149 pages with 4 groups
May 8 05:43:07.086084 kernel: Dynamic Preempt: voluntary
May 8 05:43:07.086093 kernel: rcu: Preemptible hierarchical RCU implementation.
May 8 05:43:07.086102 kernel: rcu: RCU event tracing is enabled.
May 8 05:43:07.086111 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 8 05:43:07.086120 kernel: Trampoline variant of Tasks RCU enabled.
May 8 05:43:07.086129 kernel: Rude variant of Tasks RCU enabled.
May 8 05:43:07.086138 kernel: Tracing variant of Tasks RCU enabled.
May 8 05:43:07.086146 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 8 05:43:07.086158 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 8 05:43:07.086166 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 8 05:43:07.086175 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 8 05:43:07.086184 kernel: Console: colour VGA+ 80x25
May 8 05:43:07.086193 kernel: printk: console [tty0] enabled
May 8 05:43:07.086201 kernel: printk: console [ttyS0] enabled
May 8 05:43:07.086210 kernel: ACPI: Core revision 20230628
May 8 05:43:07.086219 kernel: APIC: Switch to symmetric I/O mode setup
May 8 05:43:07.086227 kernel: x2apic enabled
May 8 05:43:07.086238 kernel: APIC: Switched APIC routing to: physical x2apic
May 8 05:43:07.086246 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 8 05:43:07.086255 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 8 05:43:07.086264 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
May 8 05:43:07.086273 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 8 05:43:07.086281 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 8 05:43:07.086290 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 8 05:43:07.086299 kernel: Spectre V2 : Mitigation: Retpolines
May 8 05:43:07.086307 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 8 05:43:07.086318 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 8 05:43:07.086327 kernel: Speculative Store Bypass: Vulnerable
May 8 05:43:07.086335 kernel: x86/fpu: x87 FPU will use FXSAVE
May 8 05:43:07.086344 kernel: Freeing SMP alternatives memory: 32K
May 8 05:43:07.086359 kernel: pid_max: default: 32768 minimum: 301
May 8 05:43:07.086370 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 8 05:43:07.086379 kernel: landlock: Up and running.
May 8 05:43:07.086388 kernel: SELinux: Initializing.
May 8 05:43:07.086397 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 05:43:07.086407 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 05:43:07.086416 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
May 8 05:43:07.086427 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 8 05:43:07.086437 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 8 05:43:07.086446 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 8 05:43:07.086456 kernel: Performance Events: AMD PMU driver.
May 8 05:43:07.086465 kernel: ... version: 0
May 8 05:43:07.086476 kernel: ... bit width: 48
May 8 05:43:07.086485 kernel: ... generic registers: 4
May 8 05:43:07.086494 kernel: ... value mask: 0000ffffffffffff
May 8 05:43:07.086503 kernel: ... max period: 00007fffffffffff
May 8 05:43:07.086512 kernel: ... fixed-purpose events: 0
May 8 05:43:07.086521 kernel: ... event mask: 000000000000000f
May 8 05:43:07.086530 kernel: signal: max sigframe size: 1440
May 8 05:43:07.086539 kernel: rcu: Hierarchical SRCU implementation.
May 8 05:43:07.086549 kernel: rcu: Max phase no-delay instances is 400.
May 8 05:43:07.086560 kernel: smp: Bringing up secondary CPUs ...
May 8 05:43:07.086569 kernel: smpboot: x86: Booting SMP configuration:
May 8 05:43:07.086578 kernel: .... node #0, CPUs: #1
May 8 05:43:07.086587 kernel: smp: Brought up 1 node, 2 CPUs
May 8 05:43:07.086596 kernel: smpboot: Max logical packages: 2
May 8 05:43:07.086605 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
May 8 05:43:07.086615 kernel: devtmpfs: initialized
May 8 05:43:07.086624 kernel: x86/mm: Memory block size: 128MB
May 8 05:43:07.086633 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 8 05:43:07.086642 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 8 05:43:07.086653 kernel: pinctrl core: initialized pinctrl subsystem
May 8 05:43:07.086662 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 05:43:07.086671 kernel: audit: initializing netlink subsys (disabled)
May 8 05:43:07.086680 kernel: audit: type=2000 audit(1746682986.270:1): state=initialized audit_enabled=0 res=1
May 8 05:43:07.086690 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 05:43:07.086699 kernel: thermal_sys: Registered thermal governor 'user_space'
May 8 05:43:07.086708 kernel: cpuidle: using governor menu
May 8 05:43:07.086717 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 05:43:07.086742 kernel: dca service started, version 1.12.1
May 8 05:43:07.086754 kernel: PCI: Using configuration type 1 for base access
May 8 05:43:07.086763 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 8 05:43:07.086773 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 8 05:43:07.086782 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 8 05:43:07.086791 kernel: ACPI: Added _OSI(Module Device)
May 8 05:43:07.086800 kernel: ACPI: Added _OSI(Processor Device)
May 8 05:43:07.086810 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 05:43:07.086819 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 05:43:07.086828 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 8 05:43:07.086839 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 8 05:43:07.086848 kernel: ACPI: Interpreter enabled
May 8 05:43:07.086857 kernel: ACPI: PM: (supports S0 S3 S5)
May 8 05:43:07.086866 kernel: ACPI: Using IOAPIC for interrupt routing
May 8 05:43:07.086875 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 8 05:43:07.086884 kernel: PCI: Using E820 reservations for host bridge windows
May 8 05:43:07.086894 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 8 05:43:07.086903 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 8 05:43:07.087045 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 8 05:43:07.087598 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 8 05:43:07.087706 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 8 05:43:07.087744 kernel: acpiphp: Slot [3] registered
May 8 05:43:07.087754 kernel: acpiphp: Slot [4] registered
May 8 05:43:07.087763 kernel: acpiphp: Slot [5] registered
May 8 05:43:07.087772 kernel: acpiphp: Slot [6] registered
May 8 05:43:07.087781 kernel: acpiphp: Slot [7] registered
May 8 05:43:07.087795 kernel: acpiphp: Slot [8] registered
May 8 05:43:07.087804 kernel: acpiphp: Slot [9] registered
May 8 05:43:07.087813 kernel: acpiphp: Slot [10] registered
May 8 05:43:07.087822 kernel: acpiphp: Slot [11] registered
May 8 05:43:07.087831 kernel: acpiphp: Slot [12] registered
May 8 05:43:07.087840 kernel: acpiphp: Slot [13] registered
May 8 05:43:07.087849 kernel: acpiphp: Slot [14] registered
May 8 05:43:07.087858 kernel: acpiphp: Slot [15] registered
May 8 05:43:07.087867 kernel: acpiphp: Slot [16] registered
May 8 05:43:07.087878 kernel: acpiphp: Slot [17] registered
May 8 05:43:07.087887 kernel: acpiphp: Slot [18] registered
May 8 05:43:07.087896 kernel: acpiphp: Slot [19] registered
May 8 05:43:07.087905 kernel: acpiphp: Slot [20] registered
May 8 05:43:07.087914 kernel: acpiphp: Slot [21] registered
May 8 05:43:07.087923 kernel: acpiphp: Slot [22] registered
May 8 05:43:07.087932 kernel: acpiphp: Slot [23] registered
May 8 05:43:07.087941 kernel: acpiphp: Slot [24] registered
May 8 05:43:07.087950 kernel: acpiphp: Slot [25] registered
May 8 05:43:07.087958 kernel: acpiphp: Slot [26] registered
May 8 05:43:07.087969 kernel: acpiphp: Slot [27] registered
May 8 05:43:07.087978 kernel: acpiphp: Slot [28] registered
May 8 05:43:07.087987 kernel: acpiphp: Slot [29] registered
May 8 05:43:07.087996 kernel: acpiphp: Slot [30] registered
May 8 05:43:07.088005 kernel: acpiphp: Slot [31] registered
May 8 05:43:07.088014 kernel: PCI host bridge to bus 0000:00
May 8 05:43:07.088118 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 8 05:43:07.088207 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 8 05:43:07.088299 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 8 05:43:07.088388 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 8 05:43:07.088473 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
May 8 05:43:07.088553 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 8 05:43:07.088664 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
May 8 05:43:07.088799 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
May 8 05:43:07.088915 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
May 8 05:43:07.089015 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
May 8 05:43:07.089113 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
May 8 05:43:07.089211 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
May 8 05:43:07.089309 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
May 8 05:43:07.089406 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
May 8 05:43:07.089512 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
May 8 05:43:07.089614 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
May 8 05:43:07.089712 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
May 8 05:43:07.089839 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
May 8 05:43:07.089940 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
May 8 05:43:07.090038 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
May 8 05:43:07.090136 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
May 8 05:43:07.090233 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
May 8 05:43:07.090336 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 8 05:43:07.090439 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
May 8 05:43:07.090550 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
May 8 05:43:07.090648 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
May 8 05:43:07.090760 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
May 8 05:43:07.090855 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
May 8 05:43:07.090964 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
May 8 05:43:07.091062 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
May 8 05:43:07.091154 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
May 8 05:43:07.091246 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
May 8 05:43:07.091358 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
May 8 05:43:07.091453 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
May 8 05:43:07.091551 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
May 8 05:43:07.091657 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
May 8 05:43:07.091783 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
May 8 05:43:07.091883 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
May 8 05:43:07.091982 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
May 8 05:43:07.091996 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 8 05:43:07.092006 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 8 05:43:07.092015 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 8 05:43:07.092024 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 8 05:43:07.092034 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 8 05:43:07.092050 kernel: iommu: Default domain type: Translated
May 8 05:43:07.092060 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 8 05:43:07.092069 kernel: PCI: Using ACPI for IRQ routing
May 8 05:43:07.092078 kernel: PCI: pci_cache_line_size set to 64 bytes
May 8 05:43:07.092087 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 8 05:43:07.092096 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
May 8 05:43:07.092192 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
May 8 05:43:07.092291 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
May 8 05:43:07.092395 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 8 05:43:07.092410 kernel: vgaarb: loaded
May 8 05:43:07.092419 kernel: clocksource: Switched to clocksource kvm-clock
May 8 05:43:07.092428 kernel: VFS: Disk quotas dquot_6.6.0
May 8 05:43:07.092436 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 05:43:07.092445 kernel: pnp: PnP ACPI init
May 8 05:43:07.092539 kernel: pnp 00:03: [dma 2]
May 8 05:43:07.092553 kernel: pnp: PnP ACPI: found 5 devices
May 8 05:43:07.092562 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 8 05:43:07.092574 kernel: NET: Registered PF_INET protocol family
May 8 05:43:07.092583 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 8 05:43:07.092591 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 8 05:43:07.092600 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 05:43:07.092609 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 8 05:43:07.092617 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 8 05:43:07.092626 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 8 05:43:07.092634 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 05:43:07.092645 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 05:43:07.092653 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 05:43:07.092662 kernel: NET: Registered PF_XDP protocol family
May 8 05:43:07.092760 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 8 05:43:07.092843 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 8 05:43:07.092922 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 8 05:43:07.093002 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
May 8 05:43:07.093081 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
May 8 05:43:07.093177 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
May 8 05:43:07.093274 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 8 05:43:07.093288 kernel: PCI: CLS 0 bytes, default 64
May 8 05:43:07.093297 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 8 05:43:07.093306 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
May 8 05:43:07.093315 kernel: Initialise system trusted keyrings
May 8 05:43:07.093323 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 8 05:43:07.093332 kernel: Key type asymmetric registered
May 8 05:43:07.093354 kernel: Asymmetric key parser 'x509' registered
May 8 05:43:07.093376 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 8 05:43:07.093416 kernel: io scheduler mq-deadline registered
May 8 05:43:07.093448 kernel: io scheduler kyber registered
May 8 05:43:07.093465 kernel: io scheduler bfq registered
May 8 05:43:07.093474 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 8 05:43:07.093483 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
May 8 05:43:07.093492 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 8 05:43:07.093501 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 8 05:43:07.093510 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 8 05:43:07.093521 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 05:43:07.093530 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 8 05:43:07.093538 kernel: random: crng init done
May 8 05:43:07.093547 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 8 05:43:07.093556 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 8 05:43:07.093564 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 8 05:43:07.093661 kernel: rtc_cmos 00:04: RTC can wake from S4
May 8 05:43:07.093675 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 8 05:43:07.094317 kernel: rtc_cmos 00:04: registered as rtc0
May 8 05:43:07.094409 kernel: rtc_cmos 00:04: setting system clock to 2025-05-08T05:43:06 UTC (1746682986)
May 8 05:43:07.094491 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 8 05:43:07.094504 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 8 05:43:07.094512 kernel: NET: Registered PF_INET6 protocol family
May 8 05:43:07.094521 kernel: Segment Routing with IPv6
May 8 05:43:07.094530 kernel: In-situ OAM (IOAM) with IPv6
May 8 05:43:07.094538 kernel: NET: Registered PF_PACKET protocol family
May 8 05:43:07.094547 kernel: Key type dns_resolver registered
May 8 05:43:07.094559 kernel: IPI shorthand broadcast: enabled
May 8 05:43:07.094567 kernel: sched_clock: Marking stable (1041016110, 171626114)->(1250169112, -37526888)
May 8 05:43:07.094576 kernel: registered taskstats version 1
May 8 05:43:07.094585 kernel: Loading compiled-in X.509 certificates
May 8 05:43:07.094593 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 75e4e434c57439d3f2eaf7797bbbcdd698dafd0e'
May 8 05:43:07.094602 kernel: Key type .fscrypt registered
May 8 05:43:07.094610 kernel: Key type fscrypt-provisioning registered
May 8 05:43:07.094619 kernel: ima: No TPM chip found, activating TPM-bypass!
May 8 05:43:07.094628 kernel: ima: Allocated hash algorithm: sha1
May 8 05:43:07.094639 kernel: ima: No architecture policies found
May 8 05:43:07.094647 kernel: clk: Disabling unused clocks
May 8 05:43:07.094656 kernel: Freeing unused kernel image (initmem) memory: 42856K
May 8 05:43:07.094665 kernel: Write protecting the kernel read-only data: 36864k
May 8 05:43:07.094674 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 8 05:43:07.094682 kernel: Run /init as init process
May 8 05:43:07.094691 kernel: with arguments:
May 8 05:43:07.094699 kernel: /init
May 8 05:43:07.094707 kernel: with environment:
May 8 05:43:07.094765 kernel: HOME=/
May 8 05:43:07.094777 kernel: TERM=linux
May 8 05:43:07.094785 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 05:43:07.094797 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 8 05:43:07.094809 systemd[1]: Detected virtualization kvm.
May 8 05:43:07.094818 systemd[1]: Detected architecture x86-64.
May 8 05:43:07.094827 systemd[1]: Running in initrd.
May 8 05:43:07.094840 systemd[1]: No hostname configured, using default hostname.
May 8 05:43:07.094849 systemd[1]: Hostname set to .
May 8 05:43:07.094859 systemd[1]: Initializing machine ID from VM UUID.
May 8 05:43:07.094868 systemd[1]: Queued start job for default target initrd.target.
May 8 05:43:07.094877 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 05:43:07.094887 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 05:43:07.094897 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 8 05:43:07.094915 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 05:43:07.095225 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 8 05:43:07.095235 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 8 05:43:07.095247 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 8 05:43:07.095257 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 8 05:43:07.095269 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 05:43:07.095279 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 05:43:07.095288 systemd[1]: Reached target paths.target - Path Units.
May 8 05:43:07.095298 systemd[1]: Reached target slices.target - Slice Units.
May 8 05:43:07.095319 systemd[1]: Reached target swap.target - Swaps.
May 8 05:43:07.095329 systemd[1]: Reached target timers.target - Timer Units.
May 8 05:43:07.095339 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 8 05:43:07.095348 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 05:43:07.095358 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 8 05:43:07.095370 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 8 05:43:07.095380 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 05:43:07.095389 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 05:43:07.095399 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 05:43:07.095409 systemd[1]: Reached target sockets.target - Socket Units.
May 8 05:43:07.095418 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 8 05:43:07.095428 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 05:43:07.095438 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 8 05:43:07.095447 systemd[1]: Starting systemd-fsck-usr.service...
May 8 05:43:07.095459 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 05:43:07.095468 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 05:43:07.095478 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 05:43:07.095506 systemd-journald[184]: Collecting audit messages is disabled.
May 8 05:43:07.095531 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 8 05:43:07.095541 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 05:43:07.095552 systemd-journald[184]: Journal started
May 8 05:43:07.095577 systemd-journald[184]: Runtime Journal (/run/log/journal/3c1fcb9c7112447db5bffd6a89209c41) is 8.0M, max 78.3M, 70.3M free.
May 8 05:43:07.099765 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 05:43:07.100479 systemd[1]: Finished systemd-fsck-usr.service.
May 8 05:43:07.101883 systemd-modules-load[185]: Inserted module 'overlay'
May 8 05:43:07.112633 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 05:43:07.163854 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 05:43:07.163880 kernel: Bridge firewalling registered
May 8 05:43:07.131474 systemd-modules-load[185]: Inserted module 'br_netfilter'
May 8 05:43:07.131902 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 05:43:07.168387 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 05:43:07.172209 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 05:43:07.183006 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 05:43:07.188558 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 05:43:07.193958 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 05:43:07.195871 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 05:43:07.198582 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 05:43:07.210388 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 05:43:07.220861 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 05:43:07.221621 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 05:43:07.226040 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 8 05:43:07.238320 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 05:43:07.251428 dracut-cmdline[221]: dracut-dracut-053
May 8 05:43:07.256240 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90
May 8 05:43:07.260337 systemd-resolved[215]: Positive Trust Anchors:
May 8 05:43:07.260356 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 05:43:07.260401 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 05:43:07.263808 systemd-resolved[215]: Defaulting to hostname 'linux'.
May 8 05:43:07.264686 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 05:43:07.265520 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 05:43:07.334817 kernel: SCSI subsystem initialized
May 8 05:43:07.345781 kernel: Loading iSCSI transport class v2.0-870.
May 8 05:43:07.357797 kernel: iscsi: registered transport (tcp)
May 8 05:43:07.380298 kernel: iscsi: registered transport (qla4xxx)
May 8 05:43:07.380358 kernel: QLogic iSCSI HBA Driver
May 8 05:43:07.434503 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 8 05:43:07.446882 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 8 05:43:07.478303 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 05:43:07.478384 kernel: device-mapper: uevent: version 1.0.3
May 8 05:43:07.479021 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 8 05:43:07.535850 kernel: raid6: sse2x4 gen() 5176 MB/s
May 8 05:43:07.552813 kernel: raid6: sse2x2 gen() 7848 MB/s
May 8 05:43:07.571106 kernel: raid6: sse2x1 gen() 10160 MB/s
May 8 05:43:07.571159 kernel: raid6: using algorithm sse2x1 gen() 10160 MB/s
May 8 05:43:07.590281 kernel: raid6: .... xor() 7404 MB/s, rmw enabled
May 8 05:43:07.590338 kernel: raid6: using ssse3x2 recovery algorithm
May 8 05:43:07.611983 kernel: xor: measuring software checksum speed
May 8 05:43:07.612063 kernel: prefetch64-sse : 17256 MB/sec
May 8 05:43:07.615386 kernel: generic_sse : 15499 MB/sec
May 8 05:43:07.615447 kernel: xor: using function: prefetch64-sse (17256 MB/sec)
May 8 05:43:07.793789 kernel: Btrfs loaded, zoned=no, fsverity=no
May 8 05:43:07.806409 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 8 05:43:07.815006 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 05:43:07.828856 systemd-udevd[404]: Using default interface naming scheme 'v255'.
May 8 05:43:07.833557 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 05:43:07.843011 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 8 05:43:07.860343 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation
May 8 05:43:07.897814 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 05:43:07.908008 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 05:43:07.969157 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 05:43:07.980118 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 8 05:43:08.023972 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 8 05:43:08.028511 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 05:43:08.031347 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 05:43:08.034450 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 05:43:08.041905 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 8 05:43:08.056000 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 8 05:43:08.070815 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
May 8 05:43:08.098877 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
May 8 05:43:08.099017 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 8 05:43:08.099032 kernel: GPT:17805311 != 20971519
May 8 05:43:08.099043 kernel: GPT:Alternate GPT header not at the end of the disk.
May 8 05:43:08.099055 kernel: GPT:17805311 != 20971519
May 8 05:43:08.099065 kernel: GPT: Use GNU Parted to correct GPT errors.
May 8 05:43:08.099076 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 05:43:08.091172 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 05:43:08.091346 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 05:43:08.106369 kernel: libata version 3.00 loaded.
May 8 05:43:08.106389 kernel: ata_piix 0000:00:01.1: version 2.13
May 8 05:43:08.134929 kernel: scsi host0: ata_piix
May 8 05:43:08.135064 kernel: scsi host1: ata_piix
May 8 05:43:08.135176 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (450)
May 8 05:43:08.135196 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
May 8 05:43:08.135208 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
May 8 05:43:08.092751 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 05:43:08.192302 kernel: BTRFS: device fsid 28014d97-e6d7-4db4-b1d9-76a980e09972 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (466)
May 8 05:43:08.093315 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 05:43:08.093483 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 05:43:08.094019 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 8 05:43:08.103994 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 05:43:08.160832 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 8 05:43:08.193054 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 05:43:08.199838 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 8 05:43:08.205012 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 8 05:43:08.205624 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 8 05:43:08.212716 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 8 05:43:08.225978 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 8 05:43:08.231940 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 05:43:08.237218 disk-uuid[504]: Primary Header is updated.
May 8 05:43:08.237218 disk-uuid[504]: Secondary Entries is updated.
May 8 05:43:08.237218 disk-uuid[504]: Secondary Header is updated.
May 8 05:43:08.245737 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 05:43:08.252784 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 05:43:08.265427 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 05:43:09.269855 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 05:43:09.271610 disk-uuid[506]: The operation has completed successfully.
May 8 05:43:09.352340 systemd[1]: disk-uuid.service: Deactivated successfully.
May 8 05:43:09.352863 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 8 05:43:09.374846 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 8 05:43:09.388206 sh[528]: Success
May 8 05:43:09.406811 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
May 8 05:43:09.476371 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 8 05:43:09.496893 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 8 05:43:09.498211 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 8 05:43:09.540781 kernel: BTRFS info (device dm-0): first mount of filesystem 28014d97-e6d7-4db4-b1d9-76a980e09972
May 8 05:43:09.540886 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 8 05:43:09.540918 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 8 05:43:09.546554 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 8 05:43:09.550357 kernel: BTRFS info (device dm-0): using free space tree
May 8 05:43:09.572046 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 8 05:43:09.574323 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 8 05:43:09.581066 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 8 05:43:09.589100 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 8 05:43:09.618793 kernel: BTRFS info (device vda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 05:43:09.626985 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 8 05:43:09.627047 kernel: BTRFS info (device vda6): using free space tree
May 8 05:43:09.638833 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 05:43:09.659945 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 8 05:43:09.665836 kernel: BTRFS info (device vda6): last unmount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 05:43:09.680431 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 8 05:43:09.693089 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 8 05:43:09.722800 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 05:43:09.729916 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 05:43:09.753669 systemd-networkd[710]: lo: Link UP
May 8 05:43:09.753678 systemd-networkd[710]: lo: Gained carrier
May 8 05:43:09.755316 systemd-networkd[710]: Enumeration completed
May 8 05:43:09.755415 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 05:43:09.756328 systemd-networkd[710]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 05:43:09.756332 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 05:43:09.757607 systemd[1]: Reached target network.target - Network.
May 8 05:43:09.757981 systemd-networkd[710]: eth0: Link UP
May 8 05:43:09.757984 systemd-networkd[710]: eth0: Gained carrier
May 8 05:43:09.757992 systemd-networkd[710]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 05:43:09.774768 systemd-networkd[710]: eth0: DHCPv4 address 172.24.4.135/24, gateway 172.24.4.1 acquired from 172.24.4.1
May 8 05:43:09.838679 ignition[653]: Ignition 2.19.0
May 8 05:43:09.838699 ignition[653]: Stage: fetch-offline
May 8 05:43:09.841037 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 05:43:09.838774 ignition[653]: no configs at "/usr/lib/ignition/base.d"
May 8 05:43:09.838790 ignition[653]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 8 05:43:09.842536 systemd-resolved[215]: Detected conflict on linux IN A 172.24.4.135
May 8 05:43:09.838922 ignition[653]: parsed url from cmdline: ""
May 8 05:43:09.842545 systemd-resolved[215]: Hostname conflict, changing published hostname from 'linux' to 'linux3'.
May 8 05:43:09.838927 ignition[653]: no config URL provided
May 8 05:43:09.838933 ignition[653]: reading system config file "/usr/lib/ignition/user.ign"
May 8 05:43:09.838943 ignition[653]: no config at "/usr/lib/ignition/user.ign"
May 8 05:43:09.838948 ignition[653]: failed to fetch config: resource requires networking
May 8 05:43:09.839780 ignition[653]: Ignition finished successfully
May 8 05:43:09.848462 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 8 05:43:09.859996 ignition[718]: Ignition 2.19.0
May 8 05:43:09.860009 ignition[718]: Stage: fetch
May 8 05:43:09.860196 ignition[718]: no configs at "/usr/lib/ignition/base.d"
May 8 05:43:09.860209 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 8 05:43:09.860317 ignition[718]: parsed url from cmdline: ""
May 8 05:43:09.860322 ignition[718]: no config URL provided
May 8 05:43:09.860328 ignition[718]: reading system config file "/usr/lib/ignition/user.ign"
May 8 05:43:09.860338 ignition[718]: no config at "/usr/lib/ignition/user.ign"
May 8 05:43:09.860503 ignition[718]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
May 8 05:43:09.860601 ignition[718]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
May 8 05:43:09.860641 ignition[718]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
May 8 05:43:10.183502 ignition[718]: GET result: OK
May 8 05:43:10.183716 ignition[718]: parsing config with SHA512: 9756ae3e44504ed8f5e7ad6131ec1ae601ccbf017231cedba864fc65b080fe4d58f61b5cbce9a76d0174fc837a97a2e9f1b34f71083b1d39e738edaee6b1e8c8
May 8 05:43:10.193250 unknown[718]: fetched base config from "system"
May 8 05:43:10.193276 unknown[718]: fetched base config from "system"
May 8 05:43:10.194229 ignition[718]: fetch: fetch complete
May 8 05:43:10.193290 unknown[718]: fetched user config from "openstack"
May 8 05:43:10.194241 ignition[718]: fetch: fetch passed
May 8 05:43:10.197980 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 8 05:43:10.194328 ignition[718]: Ignition finished successfully
May 8 05:43:10.207148 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 8 05:43:10.244810 ignition[724]: Ignition 2.19.0
May 8 05:43:10.244836 ignition[724]: Stage: kargs
May 8 05:43:10.245230 ignition[724]: no configs at "/usr/lib/ignition/base.d"
May 8 05:43:10.245257 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 8 05:43:10.249851 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 8 05:43:10.247554 ignition[724]: kargs: kargs passed
May 8 05:43:10.247654 ignition[724]: Ignition finished successfully
May 8 05:43:10.261044 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 8 05:43:10.292921 ignition[730]: Ignition 2.19.0
May 8 05:43:10.294775 ignition[730]: Stage: disks
May 8 05:43:10.296402 ignition[730]: no configs at "/usr/lib/ignition/base.d"
May 8 05:43:10.296429 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 8 05:43:10.299985 ignition[730]: disks: disks passed
May 8 05:43:10.300129 ignition[730]: Ignition finished successfully
May 8 05:43:10.302147 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 8 05:43:10.305170 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 8 05:43:10.306654 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 8 05:43:10.309691 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 05:43:10.312690 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 05:43:10.315255 systemd[1]: Reached target basic.target - Basic System.
May 8 05:43:10.326963 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 8 05:43:10.356853 systemd-fsck[738]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
May 8 05:43:10.368796 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 8 05:43:10.375886 systemd[1]: Mounting sysroot.mount - /sysroot...
May 8 05:43:10.516770 kernel: EXT4-fs (vda9): mounted filesystem 36960c89-ba45-4808-a41c-bf61ce9470a3 r/w with ordered data mode. Quota mode: none.
May 8 05:43:10.517714 systemd[1]: Mounted sysroot.mount - /sysroot.
May 8 05:43:10.518783 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 8 05:43:10.526928 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 05:43:10.530195 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 8 05:43:10.533233 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 8 05:43:10.536398 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
May 8 05:43:10.557139 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (746)
May 8 05:43:10.557190 kernel: BTRFS info (device vda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 05:43:10.557220 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 8 05:43:10.557247 kernel: BTRFS info (device vda6): using free space tree
May 8 05:43:10.540538 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 8 05:43:10.540569 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 05:43:10.558962 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 8 05:43:10.573410 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 05:43:10.568856 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 8 05:43:10.573404 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 05:43:10.692435 initrd-setup-root[775]: cut: /sysroot/etc/passwd: No such file or directory
May 8 05:43:10.701366 initrd-setup-root[783]: cut: /sysroot/etc/group: No such file or directory
May 8 05:43:10.706911 initrd-setup-root[790]: cut: /sysroot/etc/shadow: No such file or directory
May 8 05:43:10.712620 initrd-setup-root[797]: cut: /sysroot/etc/gshadow: No such file or directory
May 8 05:43:10.810691 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 8 05:43:10.819841 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 8 05:43:10.822163 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 8 05:43:10.827575 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 8 05:43:10.829761 kernel: BTRFS info (device vda6): last unmount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 05:43:10.852973 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 8 05:43:10.857672 ignition[865]: INFO : Ignition 2.19.0
May 8 05:43:10.857672 ignition[865]: INFO : Stage: mount
May 8 05:43:10.858922 ignition[865]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 05:43:10.858922 ignition[865]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 8 05:43:10.861851 ignition[865]: INFO : mount: mount passed
May 8 05:43:10.861851 ignition[865]: INFO : Ignition finished successfully
May 8 05:43:10.861037 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 8 05:43:10.923887 systemd-networkd[710]: eth0: Gained IPv6LL
May 8 05:43:17.791982 coreos-metadata[748]: May 08 05:43:17.791 WARN failed to locate config-drive, using the metadata service API instead
May 8 05:43:17.832584 coreos-metadata[748]: May 08 05:43:17.832 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
May 8 05:43:17.848111 coreos-metadata[748]: May 08 05:43:17.848 INFO Fetch successful
May 8 05:43:17.849633 coreos-metadata[748]: May 08 05:43:17.848 INFO wrote hostname ci-4081-3-3-n-fbb7d486d2.novalocal to /sysroot/etc/hostname
May 8 05:43:17.852309 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
May 8 05:43:17.852536 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
May 8 05:43:17.865008 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 8 05:43:17.901345 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 05:43:17.918852 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (881)
May 8 05:43:17.920822 kernel: BTRFS info (device vda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 05:43:17.925997 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 8 05:43:17.930194 kernel: BTRFS info (device vda6): using free space tree
May 8 05:43:17.941841 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 05:43:17.946361 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 05:43:17.990092 ignition[899]: INFO : Ignition 2.19.0
May 8 05:43:17.990092 ignition[899]: INFO : Stage: files
May 8 05:43:17.993175 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 05:43:17.993175 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 8 05:43:17.993175 ignition[899]: DEBUG : files: compiled without relabeling support, skipping
May 8 05:43:17.993175 ignition[899]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 8 05:43:17.993175 ignition[899]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 8 05:43:18.003556 ignition[899]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 8 05:43:18.003556 ignition[899]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 8 05:43:18.003556 ignition[899]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 8 05:43:18.003556 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 8 05:43:18.003556 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 8 05:43:17.996890 unknown[899]: wrote ssh authorized keys file for user: core
May 8 05:43:18.064805 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 8 05:43:18.368488 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 8 05:43:18.368488 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 8 05:43:18.373077 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 8 05:43:18.373077 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 8 05:43:18.373077 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 8 05:43:18.373077 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 05:43:18.373077 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 05:43:18.373077 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 05:43:18.373077 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 05:43:18.373077 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 8 05:43:18.373077 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 8 05:43:18.373077 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 8 05:43:18.373077 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 8 05:43:18.373077 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 8 05:43:18.373077 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
May 8 05:43:19.058808 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 8 05:43:20.537534 ignition[899]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 8 05:43:20.537534 ignition[899]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 8 05:43:20.545217 ignition[899]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 05:43:20.545217 ignition[899]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 05:43:20.545217 ignition[899]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 8 05:43:20.545217 ignition[899]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 8 05:43:20.545217 ignition[899]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 8 05:43:20.545217 ignition[899]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 8 05:43:20.545217 ignition[899]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 8 05:43:20.545217 ignition[899]: INFO : files: files passed
May 8 05:43:20.545217 ignition[899]: INFO : Ignition finished successfully
May 8 05:43:20.541137 systemd[1]: Finished ignition-files.service - Ignition (files).
May 8 05:43:20.552037 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 8 05:43:20.560958 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 8 05:43:20.569316 systemd[1]: ignition-quench.service: Deactivated successfully.
May 8 05:43:20.569499 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 8 05:43:20.575254 initrd-setup-root-after-ignition[931]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 05:43:20.576898 initrd-setup-root-after-ignition[927]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 05:43:20.576898 initrd-setup-root-after-ignition[927]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 8 05:43:20.578241 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 05:43:20.581027 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 8 05:43:20.586944 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 8 05:43:20.618357 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 8 05:43:20.618563 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 8 05:43:20.629085 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 8 05:43:20.630963 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 8 05:43:20.633008 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 8 05:43:20.644850 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 8 05:43:20.658601 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 05:43:20.663984 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 8 05:43:20.679113 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 8 05:43:20.681773 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 05:43:20.684347 systemd[1]: Stopped target timers.target - Timer Units.
May 8 05:43:20.686383 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 8 05:43:20.686863 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 05:43:20.689172 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 8 05:43:20.690647 systemd[1]: Stopped target basic.target - Basic System.
May 8 05:43:20.692324 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 8 05:43:20.694074 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 05:43:20.696022 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 8 05:43:20.698007 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 8 05:43:20.700083 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 05:43:20.702089 systemd[1]: Stopped target sysinit.target - System Initialization.
May 8 05:43:20.703636 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 8 05:43:20.704652 systemd[1]: Stopped target swap.target - Swaps.
May 8 05:43:20.705586 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 8 05:43:20.705706 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 8 05:43:20.706955 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 8 05:43:20.707710 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 05:43:20.708836 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 8 05:43:20.709197 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 05:43:20.710115 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 8 05:43:20.710225 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 8 05:43:20.711738 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 8 05:43:20.711869 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 05:43:20.712578 systemd[1]: ignition-files.service: Deactivated successfully.
May 8 05:43:20.712697 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 8 05:43:20.724183 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 8 05:43:20.726937 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 8 05:43:20.727506 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 8 05:43:20.727678 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 05:43:20.730646 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 8 05:43:20.730785 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 05:43:20.737857 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 8 05:43:20.738499 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 8 05:43:20.752848 ignition[951]: INFO : Ignition 2.19.0
May 8 05:43:20.755981 ignition[951]: INFO : Stage: umount
May 8 05:43:20.758950 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 05:43:20.761547 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 8 05:43:20.763667 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 8 05:43:20.763667 ignition[951]: INFO : umount: umount passed
May 8 05:43:20.763667 ignition[951]: INFO : Ignition finished successfully
May 8 05:43:20.765077 systemd[1]: ignition-mount.service: Deactivated successfully.
May 8 05:43:20.765167 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 8 05:43:20.766745 systemd[1]: ignition-disks.service: Deactivated successfully.
May 8 05:43:20.766836 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 8 05:43:20.768844 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 8 05:43:20.768888 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 8 05:43:20.770579 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 8 05:43:20.770619 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 8 05:43:20.771169 systemd[1]: Stopped target network.target - Network.
May 8 05:43:20.771941 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 8 05:43:20.771985 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 05:43:20.773818 systemd[1]: Stopped target paths.target - Path Units.
May 8 05:43:20.775649 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 8 05:43:20.779849 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 05:43:20.780503 systemd[1]: Stopped target slices.target - Slice Units.
May 8 05:43:20.782380 systemd[1]: Stopped target sockets.target - Socket Units.
May 8 05:43:20.784239 systemd[1]: iscsid.socket: Deactivated successfully.
May 8 05:43:20.784283 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 8 05:43:20.786155 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 8 05:43:20.786191 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 05:43:20.788525 systemd[1]: ignition-setup.service: Deactivated successfully.
May 8 05:43:20.788572 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 8 05:43:20.790742 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 8 05:43:20.790784 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 8 05:43:20.793214 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 8 05:43:20.797085 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 8 05:43:20.801132 systemd-networkd[710]: eth0: DHCPv6 lease lost
May 8 05:43:20.803456 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 8 05:43:20.803573 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 8 05:43:20.806224 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 8 05:43:20.806292 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 8 05:43:20.814848 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 8 05:43:20.815605 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 8 05:43:20.815663 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 05:43:20.816313 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 05:43:20.817047 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 8 05:43:20.817632 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 8 05:43:20.823539 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 8 05:43:20.823606 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 8 05:43:20.828311 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 8 05:43:20.828359 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 8 05:43:20.829302 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 8 05:43:20.829344 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 05:43:20.834941 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 8 05:43:20.835687 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 05:43:20.837169 systemd[1]: network-cleanup.service: Deactivated successfully.
May 8 05:43:20.837525 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 8 05:43:20.839424 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 8 05:43:20.839480 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 8 05:43:20.841202 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 8 05:43:20.841235 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 05:43:20.841769 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 8 05:43:20.841816 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 8 05:43:20.843445 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 8 05:43:20.843489 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 8 05:43:20.844594 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 05:43:20.844635 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 05:43:20.850930 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 8 05:43:20.853050 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 8 05:43:20.853106 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 05:43:20.854353 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 8 05:43:20.854396 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 05:43:20.856605 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 8 05:43:20.856649 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 05:43:20.858070 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 05:43:20.858113 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 05:43:20.859995 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 8 05:43:20.860084 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 8 05:43:21.072090 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 8 05:43:21.072326 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 8 05:43:21.076071 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 8 05:43:21.077783 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 8 05:43:21.077909 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 8 05:43:21.098081 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 8 05:43:21.114158 systemd[1]: Switching root.
May 8 05:43:21.165821 systemd-journald[184]: Journal stopped
May 8 05:43:22.548127 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
May 8 05:43:22.548205 kernel: SELinux: policy capability network_peer_controls=1
May 8 05:43:22.548225 kernel: SELinux: policy capability open_perms=1
May 8 05:43:22.548247 kernel: SELinux: policy capability extended_socket_class=1
May 8 05:43:22.548263 kernel: SELinux: policy capability always_check_network=0
May 8 05:43:22.548276 kernel: SELinux: policy capability cgroup_seclabel=1
May 8 05:43:22.548290 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 8 05:43:22.548303 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 8 05:43:22.548316 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 8 05:43:22.548329 kernel: audit: type=1403 audit(1746683001.560:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 8 05:43:22.548344 systemd[1]: Successfully loaded SELinux policy in 68.452ms.
May 8 05:43:22.548360 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.696ms.
May 8 05:43:22.548381 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 8 05:43:22.548396 systemd[1]: Detected virtualization kvm.
May 8 05:43:22.548410 systemd[1]: Detected architecture x86-64.
May 8 05:43:22.548424 systemd[1]: Detected first boot.
May 8 05:43:22.548438 systemd[1]: Hostname set to <ci-4081-3-3-n-fbb7d486d2.novalocal>.
May 8 05:43:22.548452 systemd[1]: Initializing machine ID from VM UUID.
May 8 05:43:22.548466 zram_generator::config[994]: No configuration found.
May 8 05:43:22.548484 systemd[1]: Populated /etc with preset unit settings.
May 8 05:43:22.548498 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 8 05:43:22.548513 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 8 05:43:22.548529 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 8 05:43:22.548543 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 8 05:43:22.548558 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 8 05:43:22.548572 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 8 05:43:22.548585 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 8 05:43:22.548598 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 8 05:43:22.548613 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 8 05:43:22.548627 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 8 05:43:22.548640 systemd[1]: Created slice user.slice - User and Session Slice.
May 8 05:43:22.548653 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 05:43:22.548667 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 05:43:22.548680 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 8 05:43:22.548693 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 8 05:43:22.548707 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 8 05:43:22.548739 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 05:43:22.548758 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 8 05:43:22.548771 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 05:43:22.548785 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 8 05:43:22.548798 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 8 05:43:22.548812 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 8 05:43:22.548825 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 8 05:43:22.548841 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 05:43:22.548854 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 05:43:22.548869 systemd[1]: Reached target slices.target - Slice Units.
May 8 05:43:22.548882 systemd[1]: Reached target swap.target - Swaps.
May 8 05:43:22.548896 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 8 05:43:22.548909 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 8 05:43:22.548922 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 05:43:22.548936 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 05:43:22.548949 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 05:43:22.548962 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 8 05:43:22.548978 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 8 05:43:22.548991 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 8 05:43:22.549005 systemd[1]: Mounting media.mount - External Media Directory...
May 8 05:43:22.549018 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 05:43:22.549035 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 8 05:43:22.549048 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 8 05:43:22.549061 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 8 05:43:22.549075 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 8 05:43:22.549091 systemd[1]: Reached target machines.target - Containers.
May 8 05:43:22.549104 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 8 05:43:22.549118 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 05:43:22.549131 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 05:43:22.549145 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 8 05:43:22.549160 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 05:43:22.549174 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 05:43:22.549187 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 05:43:22.549203 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 8 05:43:22.549216 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 05:43:22.549231 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 8 05:43:22.549244 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 8 05:43:22.549258 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 8 05:43:22.549271 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 8 05:43:22.549284 systemd[1]: Stopped systemd-fsck-usr.service.
May 8 05:43:22.549297 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 05:43:22.549309 kernel: loop: module loaded
May 8 05:43:22.549324 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 05:43:22.549337 kernel: fuse: init (API version 7.39)
May 8 05:43:22.549350 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 8 05:43:22.549364 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 8 05:43:22.549383 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 05:43:22.549403 systemd[1]: verity-setup.service: Deactivated successfully.
May 8 05:43:22.549424 systemd[1]: Stopped verity-setup.service.
May 8 05:43:22.549439 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 05:43:22.549452 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 8 05:43:22.549487 systemd-journald[1083]: Collecting audit messages is disabled.
May 8 05:43:22.549513 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 8 05:43:22.549527 systemd[1]: Mounted media.mount - External Media Directory.
May 8 05:43:22.549540 systemd-journald[1083]: Journal started
May 8 05:43:22.549570 systemd-journald[1083]: Runtime Journal (/run/log/journal/3c1fcb9c7112447db5bffd6a89209c41) is 8.0M, max 78.3M, 70.3M free.
May 8 05:43:22.233996 systemd[1]: Queued start job for default target multi-user.target.
May 8 05:43:22.258888 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 8 05:43:22.259288 systemd[1]: systemd-journald.service: Deactivated successfully.
May 8 05:43:22.558389 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 05:43:22.555110 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 8 05:43:22.556615 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 8 05:43:22.557269 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 8 05:43:22.558793 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 05:43:22.565902 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 8 05:43:22.566563 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 8 05:43:22.568528 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 05:43:22.568790 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 05:43:22.569970 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 05:43:22.570319 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 05:43:22.573621 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 8 05:43:22.573754 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 8 05:43:22.574455 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 05:43:22.574562 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 05:43:22.575392 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 05:43:22.576154 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 8 05:43:22.577153 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 8 05:43:22.592717 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 8 05:43:22.598789 kernel: ACPI: bus type drm_connector registered
May 8 05:43:22.601965 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 8 05:43:22.604289 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 8 05:43:22.604992 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 8 05:43:22.605090 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 05:43:22.606765 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 8 05:43:22.610859 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 8 05:43:22.614116 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 8 05:43:22.615197 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 05:43:22.618039 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 8 05:43:22.619849 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 8 05:43:22.620591 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 05:43:22.627412 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 8 05:43:22.628186 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 05:43:22.635070 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 05:43:22.643928 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 8 05:43:22.651656 systemd-journald[1083]: Time spent on flushing to /var/log/journal/3c1fcb9c7112447db5bffd6a89209c41 is 70.385ms for 941 entries.
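
The modprobe@<module>.service entries above are instances of systemd's modprobe@.service template unit, which does nothing more than load the named kernel module; the "fuse: init" and "loop: module loaded" kernel lines are the direct result. Roughly equivalent by hand (module names taken from the log):

    # What a modprobe@ instance does, e.g. for the fuse module:
    modprobe fuse
    # or driven through the template unit itself:
    systemctl start modprobe@fuse.service
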
May 8 05:43:22.651656 systemd-journald[1083]: System Journal (/var/log/journal/3c1fcb9c7112447db5bffd6a89209c41) is 8.0M, max 584.8M, 576.8M free.
May 8 05:43:22.762708 systemd-journald[1083]: Received client request to flush runtime journal.
May 8 05:43:22.762770 kernel: loop0: detected capacity change from 0 to 8
May 8 05:43:22.762798 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 8 05:43:22.653892 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 05:43:22.657488 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 8 05:43:22.658390 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 05:43:22.658789 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 05:43:22.660215 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 8 05:43:22.661931 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 8 05:43:22.663816 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 8 05:43:22.665046 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 8 05:43:22.676287 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 8 05:43:22.683861 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 8 05:43:22.721013 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 05:43:22.736520 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 8 05:43:22.739057 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 05:43:22.765352 systemd-tmpfiles[1126]: ACLs are not supported, ignoring.
May 8 05:43:22.765368 systemd-tmpfiles[1126]: ACLs are not supported, ignoring.
May 8 05:43:22.766284 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 8 05:43:22.769767 kernel: loop1: detected capacity change from 0 to 142488
May 8 05:43:22.769855 udevadm[1141]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 8 05:43:22.772617 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 05:43:22.785872 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 8 05:43:22.812715 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 8 05:43:22.820027 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 8 05:43:22.843166 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 8 05:43:22.848987 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 05:43:22.862775 kernel: loop2: detected capacity change from 0 to 140768
May 8 05:43:22.873695 systemd-tmpfiles[1151]: ACLs are not supported, ignoring.
May 8 05:43:22.873715 systemd-tmpfiles[1151]: ACLs are not supported, ignoring.
May 8 05:43:22.879000 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
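
The "Received client request to flush runtime journal" entry above is systemd-journal-flush.service moving the volatile journal from /run/log/journal to the persistent location under /var/log/journal; the Runtime Journal and System Journal size lines bracket that move. The same flush can be requested manually:

    # Flush /run/log/journal to /var/log/journal, as the service above does
    journalctl --flush
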
May 8 05:43:22.917747 kernel: loop3: detected capacity change from 0 to 205544
May 8 05:43:22.977741 kernel: loop4: detected capacity change from 0 to 8
May 8 05:43:22.980749 kernel: loop5: detected capacity change from 0 to 142488
May 8 05:43:23.017742 kernel: loop6: detected capacity change from 0 to 140768
May 8 05:43:23.071756 kernel: loop7: detected capacity change from 0 to 205544
May 8 05:43:23.148100 (sd-merge)[1156]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
May 8 05:43:23.148916 (sd-merge)[1156]: Merged extensions into '/usr'.
May 8 05:43:23.162171 systemd[1]: Reloading requested from client PID 1125 ('systemd-sysext') (unit systemd-sysext.service)...
May 8 05:43:23.162198 systemd[1]: Reloading...
May 8 05:43:23.236750 zram_generator::config[1179]: No configuration found.
May 8 05:43:23.464696 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 05:43:23.521077 systemd[1]: Reloading finished in 358 ms.
May 8 05:43:23.543449 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 8 05:43:23.545436 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 8 05:43:23.556876 systemd[1]: Starting ensure-sysext.service...
May 8 05:43:23.560345 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 05:43:23.565570 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 05:43:23.570845 systemd[1]: Reloading requested from client PID 1238 ('systemctl') (unit ensure-sysext.service)...
May 8 05:43:23.570861 systemd[1]: Reloading...
May 8 05:43:23.600436 systemd-udevd[1240]: Using default interface naming scheme 'v255'.
May 8 05:43:23.603628 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 8 05:43:23.604021 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 8 05:43:23.604995 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 8 05:43:23.605314 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
May 8 05:43:23.605384 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
May 8 05:43:23.614444 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot.
May 8 05:43:23.614502 systemd-tmpfiles[1239]: Skipping /boot
May 8 05:43:23.631809 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot.
May 8 05:43:23.631822 systemd-tmpfiles[1239]: Skipping /boot
May 8 05:43:23.676031 zram_generator::config[1268]: No configuration found.
May 8 05:43:23.707868 ldconfig[1120]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
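
The (sd-merge) lines above are systemd-sysext overlaying the listed extension images onto /usr (the loopN "capacity change" kernel lines correspond to their backing loop devices), followed by a daemon reload so units shipped in the extensions become visible. On a running system the merge can be inspected and redone with the stock tooling, for example:

    # Show which extension images are currently merged and where
    systemd-sysext status
    # Unmerge and re-merge after changing images under /etc/extensions
    systemd-sysext refresh
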
May 8 05:43:23.818760 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1272)
May 8 05:43:23.847766 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 8 05:43:23.877748 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
May 8 05:43:23.914380 kernel: ACPI: button: Power Button [PWRF]
May 8 05:43:23.891699 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 05:43:23.943712 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 8 05:43:23.987695 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 8 05:43:23.989141 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 8 05:43:23.990032 systemd[1]: Reloading finished in 418 ms.
May 8 05:43:23.994756 kernel: mousedev: PS/2 mouse device common for all mice
May 8 05:43:24.003611 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 05:43:24.004591 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 8 05:43:24.010531 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 05:43:24.033128 systemd[1]: Finished ensure-sysext.service.
May 8 05:43:24.049653 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 05:43:24.056927 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 8 05:43:24.125130 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 8 05:43:24.127887 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 05:43:24.132794 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
May 8 05:43:24.132888 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
May 8 05:43:24.135032 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 05:43:24.145756 kernel: Console: switching to colour dummy device 80x25
May 8 05:43:24.157232 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
May 8 05:43:24.157447 kernel: [drm] features: -context_init
May 8 05:43:24.157539 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 05:43:24.170050 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 05:43:24.178987 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 05:43:24.179777 kernel: [drm] number of scanouts: 1
May 8 05:43:24.179813 kernel: [drm] number of cap sets: 0
May 8 05:43:24.180022 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 05:43:24.182800 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 8 05:43:24.185332 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 8 05:43:24.193993 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 05:43:24.195784 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
May 8 05:43:24.199184 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 05:43:24.205892 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 8 05:43:24.210874 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 8 05:43:24.219791 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 05:43:24.219864 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 05:43:24.221203 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
May 8 05:43:24.221573 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 05:43:24.221758 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 05:43:24.236440 kernel: Console: switching to colour frame buffer device 160x50
May 8 05:43:24.247752 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
May 8 05:43:24.252350 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 05:43:24.252669 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 05:43:24.255636 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 05:43:24.256701 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 05:43:24.258545 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 05:43:24.258686 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 05:43:24.259088 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 8 05:43:24.278309 augenrules[1394]: No rules
May 8 05:43:24.279860 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 8 05:43:24.282114 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 8 05:43:24.286852 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 8 05:43:24.291272 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 8 05:43:24.296929 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 8 05:43:24.297314 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 05:43:24.298076 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 05:43:24.299627 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 8 05:43:24.302580 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 8 05:43:24.307486 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 05:43:24.307673 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 05:43:24.324897 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 05:43:24.325252 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 8 05:43:24.326490 lvm[1404]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 05:43:24.355707 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 8 05:43:24.356020 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 05:43:24.359922 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 8 05:43:24.365526 lvm[1414]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 05:43:24.373405 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 8 05:43:24.377130 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 8 05:43:24.377332 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 8 05:43:24.392430 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 8 05:43:24.432069 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 05:43:24.462953 systemd-networkd[1373]: lo: Link UP
May 8 05:43:24.462962 systemd-networkd[1373]: lo: Gained carrier
May 8 05:43:24.463867 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 8 05:43:24.464603 systemd[1]: Reached target time-set.target - System Time Set.
May 8 05:43:24.467439 systemd-networkd[1373]: Enumeration completed
May 8 05:43:24.467460 systemd-timesyncd[1375]: No network connectivity, watching for changes.
May 8 05:43:24.467952 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 05:43:24.470226 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 05:43:24.470235 systemd-networkd[1373]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 05:43:24.470825 systemd-networkd[1373]: eth0: Link UP
May 8 05:43:24.470833 systemd-networkd[1373]: eth0: Gained carrier
May 8 05:43:24.470846 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 05:43:24.478515 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 8 05:43:24.487037 systemd-resolved[1374]: Positive Trust Anchors:
May 8 05:43:24.487051 systemd-resolved[1374]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 05:43:24.487095 systemd-resolved[1374]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 05:43:24.488841 systemd-networkd[1373]: eth0: DHCPv4 address 172.24.4.135/24, gateway 172.24.4.1 acquired from 172.24.4.1
May 8 05:43:24.489870 systemd-timesyncd[1375]: Network configuration changed, trying to establish connection.
May 8 05:43:24.491612 systemd-resolved[1374]: Using system hostname 'ci-4081-3-3-n-fbb7d486d2.novalocal'.
May 8 05:43:24.493158 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 05:43:24.494151 systemd[1]: Reached target network.target - Network.
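
At this point networkd has brought eth0 up with a DHCPv4 lease (172.24.4.135/24 via 172.24.4.1) and resolved has loaded its DNSSEC trust anchors and adopted the metadata-provided hostname. The resulting state can be checked after boot with the standard companion tools:

    # DHCP lease, carrier and addresses for eth0 as logged above
    networkctl status eth0
    # resolved's DNS servers, DNSSEC state and trust anchors
    resolvectl status
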
May 8 05:43:24.494628 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 05:43:24.495093 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 05:43:24.495614 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 8 05:43:24.498448 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 8 05:43:24.499082 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 8 05:43:24.499634 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 8 05:43:24.502029 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 8 05:43:24.502488 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 8 05:43:24.502520 systemd[1]: Reached target paths.target - Path Units.
May 8 05:43:24.502973 systemd[1]: Reached target timers.target - Timer Units.
May 8 05:43:24.506288 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 8 05:43:24.510450 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 8 05:43:24.516894 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 8 05:43:24.521023 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 8 05:43:24.522937 systemd[1]: Reached target sockets.target - Socket Units.
May 8 05:43:24.523521 systemd[1]: Reached target basic.target - Basic System.
May 8 05:43:24.526020 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 8 05:43:24.526052 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 8 05:43:24.530823 systemd[1]: Starting containerd.service - containerd container runtime...
May 8 05:43:24.536293 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 8 05:43:24.542107 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 8 05:43:24.547875 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 8 05:43:24.554318 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 8 05:43:24.554981 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 8 05:43:24.562389 jq[1437]: false
May 8 05:43:24.562956 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 8 05:43:24.569897 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 8 05:43:24.574763 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 8 05:43:24.584919 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 8 05:43:24.591971 systemd[1]: Starting systemd-logind.service - User Login Management...
May 8 05:43:24.595278 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 8 05:43:24.595954 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 8 05:43:24.598877 systemd[1]: Starting update-engine.service - Update Engine...
May 8 05:43:24.603061 extend-filesystems[1438]: Found loop4
May 8 05:43:24.610874 extend-filesystems[1438]: Found loop5
May 8 05:43:24.610874 extend-filesystems[1438]: Found loop6
May 8 05:43:24.610874 extend-filesystems[1438]: Found loop7
May 8 05:43:24.610874 extend-filesystems[1438]: Found vda
May 8 05:43:24.610828 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 8 05:43:25.368424 extend-filesystems[1438]: Found vda1
May 8 05:43:25.368424 extend-filesystems[1438]: Found vda2
May 8 05:43:25.368424 extend-filesystems[1438]: Found vda3
May 8 05:43:25.368424 extend-filesystems[1438]: Found usr
May 8 05:43:25.368424 extend-filesystems[1438]: Found vda4
May 8 05:43:25.368424 extend-filesystems[1438]: Found vda6
May 8 05:43:25.368424 extend-filesystems[1438]: Found vda7
May 8 05:43:25.368424 extend-filesystems[1438]: Found vda9
May 8 05:43:25.368424 extend-filesystems[1438]: Checking size of /dev/vda9
May 8 05:43:25.524603 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1280)
May 8 05:43:25.524631 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
May 8 05:43:25.524655 kernel: EXT4-fs (vda9): resized filesystem to 2014203
May 8 05:43:24.616825 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 8 05:43:25.383825 dbus-daemon[1434]: [system] SELinux support is enabled
May 8 05:43:25.524949 extend-filesystems[1438]: Resized partition /dev/vda9
May 8 05:43:25.525425 update_engine[1451]: I20250508 05:43:24.642263 1451 main.cc:92] Flatcar Update Engine starting
May 8 05:43:25.525425 update_engine[1451]: I20250508 05:43:25.406308 1451 update_check_scheduler.cc:74] Next update check in 4m6s
May 8 05:43:24.617183 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 8 05:43:25.537561 extend-filesystems[1470]: resize2fs 1.47.1 (20-May-2024)
May 8 05:43:25.537561 extend-filesystems[1470]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 8 05:43:25.537561 extend-filesystems[1470]: old_desc_blocks = 1, new_desc_blocks = 1
May 8 05:43:25.537561 extend-filesystems[1470]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
May 8 05:43:24.617477 systemd[1]: motdgen.service: Deactivated successfully.
May 8 05:43:25.575713 extend-filesystems[1438]: Resized filesystem in /dev/vda9
May 8 05:43:25.576641 jq[1452]: true
May 8 05:43:24.617785 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 8 05:43:24.637347 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 8 05:43:24.637501 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 8 05:43:25.583249 tar[1458]: linux-amd64/helm
May 8 05:43:25.340624 systemd-timesyncd[1375]: Contacted time server 168.235.86.33:123 (0.flatcar.pool.ntp.org).
May 8 05:43:25.583579 jq[1463]: true
May 8 05:43:25.340686 systemd-timesyncd[1375]: Initial clock synchronization to Thu 2025-05-08 05:43:25.340514 UTC.
May 8 05:43:25.341701 systemd-resolved[1374]: Clock change detected. Flushing caches.
May 8 05:43:25.355674 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 8 05:43:25.364096 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 8 05:43:25.384213 systemd[1]: Started dbus.service - D-Bus System Message Bus.
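
The extend-filesystems entries above grow the root filesystem into its partition: resize2fs 1.47.1 performs an on-line resize of the mounted ext4 on /dev/vda9, and the kernel reports the growth from 1617920 to 2014203 4k blocks. (The apparent timestamp jumps in this stretch are the clock step from the initial NTP synchronization logged above.) The manual equivalent, with the device name taken from the log and assuming the partition itself has already been enlarged, is simply:

    # On-line grow of a mounted ext4 filesystem to fill its partition
    resize2fs /dev/vda9
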
May 8 05:43:25.391727 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 8 05:43:25.391754 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 8 05:43:25.431643 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 8 05:43:25.431681 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 8 05:43:25.448601 systemd[1]: Started update-engine.service - Update Engine.
May 8 05:43:25.452696 systemd-logind[1450]: New seat seat0.
May 8 05:43:25.459626 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 8 05:43:25.501660 systemd-logind[1450]: Watching system buttons on /dev/input/event1 (Power Button)
May 8 05:43:25.501679 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 8 05:43:25.504735 systemd[1]: Started systemd-logind.service - User Login Management.
May 8 05:43:25.515715 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 8 05:43:25.515901 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 8 05:43:25.601382 bash[1490]: Updated "/home/core/.ssh/authorized_keys"
May 8 05:43:25.605046 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 8 05:43:25.620525 systemd[1]: Starting sshkeys.service...
May 8 05:43:25.646346 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 8 05:43:25.670750 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 8 05:43:25.749163 locksmithd[1474]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 8 05:43:25.864982 containerd[1459]: time="2025-05-08T05:43:25.864891376Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 8 05:43:25.921533 containerd[1459]: time="2025-05-08T05:43:25.919909162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 8 05:43:25.925504 containerd[1459]: time="2025-05-08T05:43:25.925474520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 8 05:43:25.926002 containerd[1459]: time="2025-05-08T05:43:25.925986360Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 8 05:43:25.926490 containerd[1459]: time="2025-05-08T05:43:25.926473043Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 8 05:43:25.926695 containerd[1459]: time="2025-05-08T05:43:25.926676354Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 8 05:43:25.927031 containerd[1459]: time="2025-05-08T05:43:25.927015280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 8 05:43:25.927188 containerd[1459]: time="2025-05-08T05:43:25.927168006Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 8 05:43:25.927474 containerd[1459]: time="2025-05-08T05:43:25.927459162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 8 05:43:25.927756 containerd[1459]: time="2025-05-08T05:43:25.927733837Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 05:43:25.928385 containerd[1459]: time="2025-05-08T05:43:25.928365392Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 8 05:43:25.928501 containerd[1459]: time="2025-05-08T05:43:25.928481289Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 8 05:43:25.928581 containerd[1459]: time="2025-05-08T05:43:25.928565938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 8 05:43:25.929686 containerd[1459]: time="2025-05-08T05:43:25.929463141Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 8 05:43:25.929853 containerd[1459]: time="2025-05-08T05:43:25.929832153Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 8 05:43:25.930175 containerd[1459]: time="2025-05-08T05:43:25.930141583Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 05:43:25.930491 containerd[1459]: time="2025-05-08T05:43:25.930472734Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 8 05:43:25.930709 containerd[1459]: time="2025-05-08T05:43:25.930692136Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 8 05:43:25.930901 containerd[1459]: time="2025-05-08T05:43:25.930869007Z" level=info msg="metadata content store policy set" policy=shared
May 8 05:43:25.940088 containerd[1459]: time="2025-05-08T05:43:25.940065415Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 8 05:43:25.940206 containerd[1459]: time="2025-05-08T05:43:25.940190059Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 8 05:43:25.941205 containerd[1459]: time="2025-05-08T05:43:25.940559842Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 8 05:43:25.941205 containerd[1459]: time="2025-05-08T05:43:25.940582976Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 8 05:43:25.941205 containerd[1459]: time="2025-05-08T05:43:25.940599226Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 8 05:43:25.941205 containerd[1459]: time="2025-05-08T05:43:25.940713821Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 8 05:43:25.941205 containerd[1459]: time="2025-05-08T05:43:25.941014685Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 8 05:43:25.941205 containerd[1459]: time="2025-05-08T05:43:25.941170718Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 8 05:43:25.941205 containerd[1459]: time="2025-05-08T05:43:25.941196627Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 8 05:43:25.941361 containerd[1459]: time="2025-05-08T05:43:25.941213047Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 8 05:43:25.941361 containerd[1459]: time="2025-05-08T05:43:25.941230580Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 8 05:43:25.941361 containerd[1459]: time="2025-05-08T05:43:25.941245518Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 8 05:43:25.941361 containerd[1459]: time="2025-05-08T05:43:25.941259745Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 8 05:43:25.941361 containerd[1459]: time="2025-05-08T05:43:25.941275244Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 8 05:43:25.941361 containerd[1459]: time="2025-05-08T05:43:25.941291565Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 8 05:43:25.941361 containerd[1459]: time="2025-05-08T05:43:25.941305671Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 8 05:43:25.941361 containerd[1459]: time="2025-05-08T05:43:25.941318615Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 8 05:43:25.941361 containerd[1459]: time="2025-05-08T05:43:25.941331169Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 8 05:43:25.941361 containerd[1459]: time="2025-05-08T05:43:25.941351297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 8 05:43:25.941579 containerd[1459]: time="2025-05-08T05:43:25.941365073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 8 05:43:25.941579 containerd[1459]: time="2025-05-08T05:43:25.941379279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 8 05:43:25.941579 containerd[1459]: time="2025-05-08T05:43:25.941393386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 8 05:43:25.941579 containerd[1459]: time="2025-05-08T05:43:25.941405989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 8 05:43:25.941579 containerd[1459]: time="2025-05-08T05:43:25.941419825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 8 05:43:25.941579 containerd[1459]: time="2025-05-08T05:43:25.941456083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 8 05:43:25.941579 containerd[1459]: time="2025-05-08T05:43:25.941473346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 8 05:43:25.941579 containerd[1459]: time="2025-05-08T05:43:25.941491139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 8 05:43:25.941579 containerd[1459]: time="2025-05-08T05:43:25.941507690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 8 05:43:25.941579 containerd[1459]: time="2025-05-08T05:43:25.941520274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 8 05:43:25.941579 containerd[1459]: time="2025-05-08T05:43:25.941533048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 8 05:43:25.941579 containerd[1459]: time="2025-05-08T05:43:25.941545862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 8 05:43:25.941579 containerd[1459]: time="2025-05-08T05:43:25.941561551Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 8 05:43:25.941831 containerd[1459]: time="2025-05-08T05:43:25.941602157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 8 05:43:25.941831 containerd[1459]: time="2025-05-08T05:43:25.941616895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 8 05:43:25.941831 containerd[1459]: time="2025-05-08T05:43:25.941628627Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 8 05:43:25.941831 containerd[1459]: time="2025-05-08T05:43:25.941668021Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 8 05:43:25.941831 containerd[1459]: time="2025-05-08T05:43:25.941686085Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 8 05:43:25.941831 containerd[1459]: time="2025-05-08T05:43:25.941699189Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 8 05:43:25.941831 containerd[1459]: time="2025-05-08T05:43:25.941712695Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 8 05:43:25.941831 containerd[1459]: time="2025-05-08T05:43:25.941724527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 8 05:43:25.941831 containerd[1459]: time="2025-05-08T05:43:25.941737301Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 8 05:43:25.941831 containerd[1459]: time="2025-05-08T05:43:25.941748532Z" level=info msg="NRI interface is disabled by configuration."
May 8 05:43:25.941831 containerd[1459]: time="2025-05-08T05:43:25.941762288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"...
type=io.containerd.grpc.v1 May 8 05:43:25.942656 containerd[1459]: time="2025-05-08T05:43:25.942049196Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 05:43:25.942656 containerd[1459]: time="2025-05-08T05:43:25.942121652Z" level=info msg="Connect containerd service" May 8 05:43:25.942656 containerd[1459]: time="2025-05-08T05:43:25.942164843Z" level=info msg="using legacy CRI server" May 8 05:43:25.942656 containerd[1459]: time="2025-05-08T05:43:25.942173880Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 8 05:43:25.942656 containerd[1459]: time="2025-05-08T05:43:25.942276993Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 05:43:25.946118 containerd[1459]: time="2025-05-08T05:43:25.945857017Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 05:43:25.946118 
containerd[1459]: time="2025-05-08T05:43:25.945977503Z" level=info msg="Start subscribing containerd event" May 8 05:43:25.946118 containerd[1459]: time="2025-05-08T05:43:25.946026405Z" level=info msg="Start recovering state" May 8 05:43:25.946194 containerd[1459]: time="2025-05-08T05:43:25.946126012Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 05:43:25.948746 containerd[1459]: time="2025-05-08T05:43:25.946481468Z" level=info msg="Start event monitor" May 8 05:43:25.948746 containerd[1459]: time="2025-05-08T05:43:25.946505373Z" level=info msg="Start snapshots syncer" May 8 05:43:25.948746 containerd[1459]: time="2025-05-08T05:43:25.946503991Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 05:43:25.948746 containerd[1459]: time="2025-05-08T05:43:25.946524369Z" level=info msg="Start cni network conf syncer for default" May 8 05:43:25.948746 containerd[1459]: time="2025-05-08T05:43:25.946574864Z" level=info msg="Start streaming server" May 8 05:43:25.946692 systemd[1]: Started containerd.service - containerd container runtime. May 8 05:43:25.954671 containerd[1459]: time="2025-05-08T05:43:25.954630963Z" level=info msg="containerd successfully booted in 0.090853s" May 8 05:43:26.132045 tar[1458]: linux-amd64/LICENSE May 8 05:43:26.132305 tar[1458]: linux-amd64/README.md May 8 05:43:26.144265 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 8 05:43:26.396921 sshd_keygen[1461]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 05:43:26.424478 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 8 05:43:26.432861 systemd[1]: Starting issuegen.service - Generate /run/issue... May 8 05:43:26.440277 systemd[1]: Started sshd@0-172.24.4.135:22-172.24.4.1:50792.service - OpenSSH per-connection server daemon (172.24.4.1:50792). May 8 05:43:26.444951 systemd[1]: issuegen.service: Deactivated successfully. May 8 05:43:26.445177 systemd[1]: Finished issuegen.service - Generate /run/issue. May 8 05:43:26.454234 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 8 05:43:26.478901 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 8 05:43:26.489176 systemd[1]: Started getty@tty1.service - Getty on tty1. May 8 05:43:26.500381 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 8 05:43:26.504953 systemd[1]: Reached target getty.target - Login Prompts. May 8 05:43:27.173789 systemd-networkd[1373]: eth0: Gained IPv6LL May 8 05:43:27.179100 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 8 05:43:27.185867 systemd[1]: Reached target network-online.target - Network is Online. May 8 05:43:27.198968 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 05:43:27.208178 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 8 05:43:27.268299 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 8 05:43:27.542565 sshd[1526]: Accepted publickey for core from 172.24.4.1 port 50792 ssh2: RSA SHA256:JUgleLtHX7Q1592ztcFI0huk0nUf+CvuYz/+IShjAoQ May 8 05:43:27.547067 sshd[1526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:43:27.573799 systemd-logind[1450]: New session 1 of user core. May 8 05:43:27.577392 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 8 05:43:27.590035 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
May 8 05:43:27.609295 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 8 05:43:27.621778 systemd[1]: Starting user@500.service - User Manager for UID 500... May 8 05:43:27.641857 (systemd)[1547]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 05:43:27.763234 systemd[1547]: Queued start job for default target default.target. May 8 05:43:27.768636 systemd[1547]: Created slice app.slice - User Application Slice. May 8 05:43:27.768745 systemd[1547]: Reached target paths.target - Paths. May 8 05:43:27.768825 systemd[1547]: Reached target timers.target - Timers. May 8 05:43:27.772569 systemd[1547]: Starting dbus.socket - D-Bus User Message Bus Socket... May 8 05:43:27.781425 systemd[1547]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 8 05:43:27.782269 systemd[1547]: Reached target sockets.target - Sockets. May 8 05:43:27.782289 systemd[1547]: Reached target basic.target - Basic System. May 8 05:43:27.782323 systemd[1547]: Reached target default.target - Main User Target. May 8 05:43:27.782348 systemd[1547]: Startup finished in 134ms. May 8 05:43:27.782726 systemd[1]: Started user@500.service - User Manager for UID 500. May 8 05:43:27.791680 systemd[1]: Started session-1.scope - Session 1 of User core. May 8 05:43:28.291947 systemd[1]: Started sshd@1-172.24.4.135:22-172.24.4.1:59862.service - OpenSSH per-connection server daemon (172.24.4.1:59862). May 8 05:43:28.979831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 05:43:28.980305 (kubelet)[1567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 05:43:29.493724 sshd[1559]: Accepted publickey for core from 172.24.4.1 port 59862 ssh2: RSA SHA256:JUgleLtHX7Q1592ztcFI0huk0nUf+CvuYz/+IShjAoQ May 8 05:43:29.496223 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:43:29.509408 systemd-logind[1450]: New session 2 of user core. May 8 05:43:29.514219 systemd[1]: Started session-2.scope - Session 2 of User core. May 8 05:43:30.138702 sshd[1559]: pam_unix(sshd:session): session closed for user core May 8 05:43:30.147182 systemd[1]: sshd@1-172.24.4.135:22-172.24.4.1:59862.service: Deactivated successfully. May 8 05:43:30.150336 systemd[1]: session-2.scope: Deactivated successfully. May 8 05:43:30.154014 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit. May 8 05:43:30.158208 systemd[1]: Started sshd@2-172.24.4.135:22-172.24.4.1:59874.service - OpenSSH per-connection server daemon (172.24.4.1:59874). May 8 05:43:30.165203 systemd-logind[1450]: Removed session 2. May 8 05:43:30.328100 kubelet[1567]: E0508 05:43:30.327965 1567 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 05:43:30.332111 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 05:43:30.332422 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 05:43:30.333299 systemd[1]: kubelet.service: Consumed 2.072s CPU time. 
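The kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written later by kubeadm init/join, so the unit keeps failing with status=1 until then. A minimal Python sketch of the same pre-flight check, using only the path quoted in the log (the helper itself is illustrative, not kubelet code):

    #!/usr/bin/env python3
    # Sketch: reproduce the config pre-flight that fails in run.go:72 above.
    # The path is taken from the log; everything else is illustrative.
    import sys
    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

    def main() -> int:
        if not KUBELET_CONFIG.is_file():
            # Mirrors the "no such file or directory" error in the log.
            print(f"failed to load kubelet config file, path: {KUBELET_CONFIG}",
                  file=sys.stderr)
            return 1  # systemd records this as status=1/FAILURE
        print("config present; kubelet would proceed")
        return 0

    if __name__ == "__main__":
        sys.exit(main())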
May 8 05:43:31.315357 sshd[1578]: Accepted publickey for core from 172.24.4.1 port 59874 ssh2: RSA SHA256:JUgleLtHX7Q1592ztcFI0huk0nUf+CvuYz/+IShjAoQ May 8 05:43:31.317879 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:43:31.328538 systemd-logind[1450]: New session 3 of user core. May 8 05:43:31.339834 systemd[1]: Started session-3.scope - Session 3 of User core. May 8 05:43:31.541778 login[1530]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 8 05:43:31.555640 systemd-logind[1450]: New session 4 of user core. May 8 05:43:31.563532 login[1531]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 8 05:43:31.574144 systemd[1]: Started session-4.scope - Session 4 of User core. May 8 05:43:31.591868 systemd-logind[1450]: New session 5 of user core. May 8 05:43:31.597849 systemd[1]: Started session-5.scope - Session 5 of User core. May 8 05:43:31.922369 sshd[1578]: pam_unix(sshd:session): session closed for user core May 8 05:43:31.928423 systemd-logind[1450]: Session 3 logged out. Waiting for processes to exit. May 8 05:43:31.929380 systemd[1]: sshd@2-172.24.4.135:22-172.24.4.1:59874.service: Deactivated successfully. May 8 05:43:31.932655 systemd[1]: session-3.scope: Deactivated successfully. May 8 05:43:31.936772 systemd-logind[1450]: Removed session 3. May 8 05:43:32.317027 coreos-metadata[1433]: May 08 05:43:32.316 WARN failed to locate config-drive, using the metadata service API instead May 8 05:43:32.381346 coreos-metadata[1433]: May 08 05:43:32.381 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 May 8 05:43:32.561079 coreos-metadata[1433]: May 08 05:43:32.560 INFO Fetch successful May 8 05:43:32.561079 coreos-metadata[1433]: May 08 05:43:32.561 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 8 05:43:32.578349 coreos-metadata[1433]: May 08 05:43:32.578 INFO Fetch successful May 8 05:43:32.578349 coreos-metadata[1433]: May 08 05:43:32.578 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 May 8 05:43:32.592783 coreos-metadata[1433]: May 08 05:43:32.592 INFO Fetch successful May 8 05:43:32.592783 coreos-metadata[1433]: May 08 05:43:32.592 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 May 8 05:43:32.606701 coreos-metadata[1433]: May 08 05:43:32.606 INFO Fetch successful May 8 05:43:32.606701 coreos-metadata[1433]: May 08 05:43:32.606 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 May 8 05:43:32.620364 coreos-metadata[1433]: May 08 05:43:32.620 INFO Fetch successful May 8 05:43:32.620364 coreos-metadata[1433]: May 08 05:43:32.620 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 May 8 05:43:32.634469 coreos-metadata[1433]: May 08 05:43:32.634 INFO Fetch successful May 8 05:43:32.675004 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 8 05:43:32.676787 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
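coreos-metadata above falls back from config-drive to the EC2-compatible metadata service and walks a fixed list of paths, logging each request as "Attempt #1". A sketch of the same walk, assuming nothing beyond the URLs quoted in the log (urllib is the standard library; the retry count is an assumption):

    #!/usr/bin/env python3
    # Sketch: fetch the same metadata paths coreos-metadata logs above.
    import urllib.request

    BASE = "http://169.254.169.254/latest/meta-data"
    PATHS = ["hostname", "instance-id", "instance-type",
             "local-ipv4", "public-ipv4"]  # taken verbatim from the log

    def fetch(path: str, attempts: int = 3) -> str:
        url = f"{BASE}/{path}"
        for attempt in range(1, attempts + 1):
            print(f"Fetching {url}: Attempt #{attempt}")
            try:
                with urllib.request.urlopen(url, timeout=2) as resp:
                    return resp.read().decode()
            except OSError:
                continue  # URLError subclasses OSError
        raise RuntimeError(f"giving up on {url}")

    if __name__ == "__main__":
        for p in PATHS:
            print(p, "=", fetch(p))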
May 8 05:43:32.769012 coreos-metadata[1500]: May 08 05:43:32.768 WARN failed to locate config-drive, using the metadata service API instead May 8 05:43:32.811995 coreos-metadata[1500]: May 08 05:43:32.811 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 May 8 05:43:32.822754 coreos-metadata[1500]: May 08 05:43:32.822 INFO Fetch successful May 8 05:43:32.822754 coreos-metadata[1500]: May 08 05:43:32.822 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 May 8 05:43:32.834353 coreos-metadata[1500]: May 08 05:43:32.834 INFO Fetch successful May 8 05:43:32.841144 unknown[1500]: wrote ssh authorized keys file for user: core May 8 05:43:32.885346 update-ssh-keys[1619]: Updated "/home/core/.ssh/authorized_keys" May 8 05:43:32.886343 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 8 05:43:32.889748 systemd[1]: Finished sshkeys.service. May 8 05:43:32.894145 systemd[1]: Reached target multi-user.target - Multi-User System. May 8 05:43:32.894399 systemd[1]: Startup finished in 1.274s (kernel) + 14.701s (initrd) + 10.702s (userspace) = 26.678s. May 8 05:43:40.377698 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 8 05:43:40.387828 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 05:43:40.697256 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 05:43:40.711994 (kubelet)[1631]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 05:43:40.789798 kubelet[1631]: E0508 05:43:40.789712 1631 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 05:43:40.796028 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 05:43:40.796318 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 05:43:41.947963 systemd[1]: Started sshd@3-172.24.4.135:22-172.24.4.1:52066.service - OpenSSH per-connection server daemon (172.24.4.1:52066). May 8 05:43:43.258526 sshd[1640]: Accepted publickey for core from 172.24.4.1 port 52066 ssh2: RSA SHA256:JUgleLtHX7Q1592ztcFI0huk0nUf+CvuYz/+IShjAoQ May 8 05:43:43.261206 sshd[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:43:43.273021 systemd-logind[1450]: New session 6 of user core. May 8 05:43:43.280747 systemd[1]: Started session-6.scope - Session 6 of User core. May 8 05:43:43.960278 sshd[1640]: pam_unix(sshd:session): session closed for user core May 8 05:43:43.971835 systemd[1]: sshd@3-172.24.4.135:22-172.24.4.1:52066.service: Deactivated successfully. May 8 05:43:43.975135 systemd[1]: session-6.scope: Deactivated successfully. May 8 05:43:43.977009 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit. May 8 05:43:43.985100 systemd[1]: Started sshd@4-172.24.4.135:22-172.24.4.1:49810.service - OpenSSH per-connection server daemon (172.24.4.1:49810). May 8 05:43:43.987631 systemd-logind[1450]: Removed session 6. 
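The sshkeys variant does the same against public-keys/0/openssh-key and then rewrites the core user's authorized_keys, as the update-ssh-keys line confirms. A reduced sketch, with the URL taken from the log; the directory and file modes are conventional assumptions, not shown in the log:

    #!/usr/bin/env python3
    # Sketch: what coreos-metadata-sshkeys@core does above, reduced to its core.
    import urllib.request
    from pathlib import Path

    KEY_URL = "http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key"

    def update_authorized_keys(home: str = "/home/core") -> None:
        key = urllib.request.urlopen(KEY_URL, timeout=2).read().decode().strip()
        ssh_dir = Path(home, ".ssh")
        ssh_dir.mkdir(mode=0o700, exist_ok=True)   # assumed mode
        auth = ssh_dir / "authorized_keys"
        auth.write_text(key + "\n")
        auth.chmod(0o600)                          # assumed mode
        print(f'Updated "{auth}"')

    if __name__ == "__main__":
        update_authorized_keys()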
May 8 05:43:45.270385 sshd[1647]: Accepted publickey for core from 172.24.4.1 port 49810 ssh2: RSA SHA256:JUgleLtHX7Q1592ztcFI0huk0nUf+CvuYz/+IShjAoQ May 8 05:43:45.273135 sshd[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:43:45.284255 systemd-logind[1450]: New session 7 of user core. May 8 05:43:45.287723 systemd[1]: Started session-7.scope - Session 7 of User core. May 8 05:43:45.885572 sshd[1647]: pam_unix(sshd:session): session closed for user core May 8 05:43:45.896543 systemd[1]: sshd@4-172.24.4.135:22-172.24.4.1:49810.service: Deactivated successfully. May 8 05:43:45.899834 systemd[1]: session-7.scope: Deactivated successfully. May 8 05:43:45.901860 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit. May 8 05:43:45.917085 systemd[1]: Started sshd@5-172.24.4.135:22-172.24.4.1:49814.service - OpenSSH per-connection server daemon (172.24.4.1:49814). May 8 05:43:45.920560 systemd-logind[1450]: Removed session 7. May 8 05:43:47.206041 sshd[1654]: Accepted publickey for core from 172.24.4.1 port 49814 ssh2: RSA SHA256:JUgleLtHX7Q1592ztcFI0huk0nUf+CvuYz/+IShjAoQ May 8 05:43:47.208532 sshd[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:43:47.217150 systemd-logind[1450]: New session 8 of user core. May 8 05:43:47.228710 systemd[1]: Started session-8.scope - Session 8 of User core. May 8 05:43:47.948316 sshd[1654]: pam_unix(sshd:session): session closed for user core May 8 05:43:47.965779 systemd[1]: sshd@5-172.24.4.135:22-172.24.4.1:49814.service: Deactivated successfully. May 8 05:43:47.968623 systemd[1]: session-8.scope: Deactivated successfully. May 8 05:43:47.970174 systemd-logind[1450]: Session 8 logged out. Waiting for processes to exit. May 8 05:43:47.981096 systemd[1]: Started sshd@6-172.24.4.135:22-172.24.4.1:49816.service - OpenSSH per-connection server daemon (172.24.4.1:49816). May 8 05:43:47.985421 systemd-logind[1450]: Removed session 8. May 8 05:43:49.240089 sshd[1661]: Accepted publickey for core from 172.24.4.1 port 49816 ssh2: RSA SHA256:JUgleLtHX7Q1592ztcFI0huk0nUf+CvuYz/+IShjAoQ May 8 05:43:49.242920 sshd[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:43:49.252069 systemd-logind[1450]: New session 9 of user core. May 8 05:43:49.263741 systemd[1]: Started session-9.scope - Session 9 of User core. May 8 05:43:49.742132 sudo[1664]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 8 05:43:49.742826 sudo[1664]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 05:43:49.761387 sudo[1664]: pam_unix(sudo:session): session closed for user root May 8 05:43:49.990278 sshd[1661]: pam_unix(sshd:session): session closed for user core May 8 05:43:50.002062 systemd[1]: sshd@6-172.24.4.135:22-172.24.4.1:49816.service: Deactivated successfully. May 8 05:43:50.005997 systemd[1]: session-9.scope: Deactivated successfully. May 8 05:43:50.007862 systemd-logind[1450]: Session 9 logged out. Waiting for processes to exit. May 8 05:43:50.019026 systemd[1]: Started sshd@7-172.24.4.135:22-172.24.4.1:49830.service - OpenSSH per-connection server daemon (172.24.4.1:49830). May 8 05:43:50.022275 systemd-logind[1450]: Removed session 9. May 8 05:43:50.877741 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 8 05:43:50.885838 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 8 05:43:51.187250 sshd[1669]: Accepted publickey for core from 172.24.4.1 port 49830 ssh2: RSA SHA256:JUgleLtHX7Q1592ztcFI0huk0nUf+CvuYz/+IShjAoQ May 8 05:43:51.189865 sshd[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:43:51.199849 systemd-logind[1450]: New session 10 of user core. May 8 05:43:51.212745 systemd[1]: Started session-10.scope - Session 10 of User core. May 8 05:43:51.289874 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 05:43:51.303983 (kubelet)[1680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 05:43:51.387787 kubelet[1680]: E0508 05:43:51.387698 1680 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 05:43:51.392355 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 05:43:51.392930 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 05:43:51.680156 sudo[1687]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 8 05:43:51.680844 sudo[1687]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 05:43:51.687960 sudo[1687]: pam_unix(sudo:session): session closed for user root May 8 05:43:51.699099 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 8 05:43:51.699789 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 05:43:51.727986 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 8 05:43:51.731701 auditctl[1690]: No rules May 8 05:43:51.732401 systemd[1]: audit-rules.service: Deactivated successfully. May 8 05:43:51.732875 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 8 05:43:51.746097 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 8 05:43:51.794289 augenrules[1708]: No rules May 8 05:43:51.795665 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 8 05:43:51.798983 sudo[1686]: pam_unix(sudo:session): session closed for user root May 8 05:43:52.005092 sshd[1669]: pam_unix(sshd:session): session closed for user core May 8 05:43:52.019074 systemd[1]: sshd@7-172.24.4.135:22-172.24.4.1:49830.service: Deactivated successfully. May 8 05:43:52.022610 systemd[1]: session-10.scope: Deactivated successfully. May 8 05:43:52.026738 systemd-logind[1450]: Session 10 logged out. Waiting for processes to exit. May 8 05:43:52.040966 systemd[1]: Started sshd@8-172.24.4.135:22-172.24.4.1:49842.service - OpenSSH per-connection server daemon (172.24.4.1:49842). May 8 05:43:52.042989 systemd-logind[1450]: Removed session 10. May 8 05:43:53.156126 sshd[1716]: Accepted publickey for core from 172.24.4.1 port 49842 ssh2: RSA SHA256:JUgleLtHX7Q1592ztcFI0huk0nUf+CvuYz/+IShjAoQ May 8 05:43:53.158901 sshd[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:43:53.168732 systemd-logind[1450]: New session 11 of user core. May 8 05:43:53.177742 systemd[1]: Started session-11.scope - Session 11 of User core. 
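Sessions 6 through 11 in this stretch all follow one shape: sshd accepts a publickey, pam_unix opens the session, and systemd-logind later reaps it. A sketch parser for the accept line, matching the exact journal format above (the field names are mine):

    #!/usr/bin/env python3
    # Sketch: extract fields from an sshd "Accepted publickey" entry
    # in the format shown throughout this log.
    import re

    ACCEPT = re.compile(
        r"sshd\[(?P<pid>\d+)\]: Accepted publickey for (?P<user>\S+) "
        r"from (?P<ip>\S+) port (?P<port>\d+) ssh2: RSA (?P<fingerprint>\S+)")

    sample = ("May 8 05:43:49.240089 sshd[1661]: Accepted publickey for core "
              "from 172.24.4.1 port 49816 ssh2: "
              "RSA SHA256:JUgleLtHX7Q1592ztcFI0huk0nUf+CvuYz/+IShjAoQ")

    m = ACCEPT.search(sample)
    if m:
        print(m.groupdict())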
May 8 05:43:53.635515 sudo[1719]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 05:43:53.636158 sudo[1719]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 05:43:54.297775 (dockerd)[1734]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 8 05:43:54.298017 systemd[1]: Starting docker.service - Docker Application Container Engine... May 8 05:43:55.055095 dockerd[1734]: time="2025-05-08T05:43:55.054809871Z" level=info msg="Starting up" May 8 05:43:55.276524 dockerd[1734]: time="2025-05-08T05:43:55.276271682Z" level=info msg="Loading containers: start." May 8 05:43:55.403476 kernel: Initializing XFRM netlink socket May 8 05:43:55.532484 systemd-networkd[1373]: docker0: Link UP May 8 05:43:55.561131 dockerd[1734]: time="2025-05-08T05:43:55.560970756Z" level=info msg="Loading containers: done." May 8 05:43:55.584855 dockerd[1734]: time="2025-05-08T05:43:55.584681315Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 8 05:43:55.585351 dockerd[1734]: time="2025-05-08T05:43:55.584982199Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 8 05:43:55.585351 dockerd[1734]: time="2025-05-08T05:43:55.585184779Z" level=info msg="Daemon has completed initialization" May 8 05:43:55.793547 systemd[1]: Started docker.service - Docker Application Container Engine. May 8 05:43:55.795648 dockerd[1734]: time="2025-05-08T05:43:55.793172871Z" level=info msg="API listen on /run/docker.sock" May 8 05:43:57.788706 containerd[1459]: time="2025-05-08T05:43:57.788162453Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 8 05:43:58.702508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2208395979.mount: Deactivated successfully. 
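Once "API listen on /run/docker.sock" is logged, the daemon answers on the socket. A sketch health check using the third-party docker SDK, which is an assumption here (it is not shipped on this host):

    #!/usr/bin/env python3
    # Sketch: confirm the daemon started above answers on its socket.
    # Requires the third-party "docker" SDK (pip install docker) - an assumption.
    import docker

    client = docker.from_env()  # honours DOCKER_HOST, defaults to the socket
    print("daemon alive:", client.ping())
    print("server version:", client.version()["Version"])  # 26.1.0 in this log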
May 8 05:44:00.334321 containerd[1459]: time="2025-05-08T05:44:00.334247990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:44:00.336190 containerd[1459]: time="2025-05-08T05:44:00.336154085Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960995" May 8 05:44:00.336841 containerd[1459]: time="2025-05-08T05:44:00.336269021Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:44:00.341455 containerd[1459]: time="2025-05-08T05:44:00.340206015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:44:00.344748 containerd[1459]: time="2025-05-08T05:44:00.344717611Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 2.556468375s" May 8 05:44:00.344854 containerd[1459]: time="2025-05-08T05:44:00.344835743Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" May 8 05:44:00.347338 containerd[1459]: time="2025-05-08T05:44:00.347306971Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 8 05:44:01.626869 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 8 05:44:01.636281 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 05:44:01.751619 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 05:44:01.758032 (kubelet)[1939]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 05:44:01.810899 kubelet[1939]: E0508 05:44:01.810859 1939 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 05:44:01.813182 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 05:44:01.813327 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
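containerd reports both bytes read and wall time for each pull, so effective throughput is a single division; the kube-apiserver image above works out to roughly 10.4 MiB/s. A sketch with the exact figures from the log:

    #!/usr/bin/env python3
    # Sketch: effective pull throughput from the figures containerd logs above.
    size_bytes = 27960995        # "bytes read" for kube-apiserver:v1.31.8
    wall_secs = 2.556468375      # "in 2.556468375s" from the Pulled line

    mib_per_s = size_bytes / wall_secs / (1 << 20)
    print(f"kube-apiserver:v1.31.8 pulled at {mib_per_s:.1f} MiB/s")

The same arithmetic applies to every "Pulled image ... in Ns" line that follows.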
May 8 05:44:02.683553 containerd[1459]: time="2025-05-08T05:44:02.683261882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:44:02.707193 containerd[1459]: time="2025-05-08T05:44:02.707085984Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713784" May 8 05:44:02.751113 containerd[1459]: time="2025-05-08T05:44:02.751003112Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:44:02.818831 containerd[1459]: time="2025-05-08T05:44:02.818735727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:44:02.823132 containerd[1459]: time="2025-05-08T05:44:02.822292314Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 2.474935259s" May 8 05:44:02.823132 containerd[1459]: time="2025-05-08T05:44:02.822370691Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" May 8 05:44:02.823884 containerd[1459]: time="2025-05-08T05:44:02.823815968Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 8 05:44:04.667868 containerd[1459]: time="2025-05-08T05:44:04.667745483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:44:04.669019 containerd[1459]: time="2025-05-08T05:44:04.668970205Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780394" May 8 05:44:04.670364 containerd[1459]: time="2025-05-08T05:44:04.670306436Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:44:04.673731 containerd[1459]: time="2025-05-08T05:44:04.673688884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:44:04.677450 containerd[1459]: time="2025-05-08T05:44:04.676314449Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 1.852057933s" May 8 05:44:04.677450 containerd[1459]: time="2025-05-08T05:44:04.676358181Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" May 8 05:44:04.678650 containerd[1459]: 
time="2025-05-08T05:44:04.678623769Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 8 05:44:06.111820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2876726419.mount: Deactivated successfully. May 8 05:44:06.815086 containerd[1459]: time="2025-05-08T05:44:06.815042072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:44:06.816228 containerd[1459]: time="2025-05-08T05:44:06.816191071Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354633" May 8 05:44:06.817466 containerd[1459]: time="2025-05-08T05:44:06.817416814Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:44:06.820197 containerd[1459]: time="2025-05-08T05:44:06.820153416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:44:06.821072 containerd[1459]: time="2025-05-08T05:44:06.820845768Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 2.14212691s" May 8 05:44:06.821072 containerd[1459]: time="2025-05-08T05:44:06.820931839Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 8 05:44:06.821618 containerd[1459]: time="2025-05-08T05:44:06.821428853Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 8 05:44:07.546263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1866937333.mount: Deactivated successfully. May 8 05:44:10.226260 update_engine[1451]: I20250508 05:44:10.226202 1451 update_attempter.cc:509] Updating boot flags... 
May 8 05:44:10.260938 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2010) May 8 05:44:10.448376 containerd[1459]: time="2025-05-08T05:44:10.448336660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:44:10.450997 containerd[1459]: time="2025-05-08T05:44:10.450967832Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" May 8 05:44:10.452636 containerd[1459]: time="2025-05-08T05:44:10.452591471Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:44:10.455818 containerd[1459]: time="2025-05-08T05:44:10.455785891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:44:10.457001 containerd[1459]: time="2025-05-08T05:44:10.456976397Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.635487871s" May 8 05:44:10.457081 containerd[1459]: time="2025-05-08T05:44:10.457064984Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 8 05:44:10.457857 containerd[1459]: time="2025-05-08T05:44:10.457763715Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 8 05:44:11.076814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount710056985.mount: Deactivated successfully. 
May 8 05:44:11.091018 containerd[1459]: time="2025-05-08T05:44:11.090900605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:44:11.093162 containerd[1459]: time="2025-05-08T05:44:11.093058577Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" May 8 05:44:11.095029 containerd[1459]: time="2025-05-08T05:44:11.094919973Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:44:11.100700 containerd[1459]: time="2025-05-08T05:44:11.100545950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:44:11.104219 containerd[1459]: time="2025-05-08T05:44:11.102829488Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 644.851259ms" May 8 05:44:11.104219 containerd[1459]: time="2025-05-08T05:44:11.102903237Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 8 05:44:11.104219 containerd[1459]: time="2025-05-08T05:44:11.103843052Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 8 05:44:11.767530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1637682303.mount: Deactivated successfully. May 8 05:44:11.878766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 8 05:44:11.889887 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 05:44:12.222366 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 05:44:12.237694 (kubelet)[2041]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 05:44:12.339373 kubelet[2041]: E0508 05:44:12.339335 2041 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 05:44:12.341964 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 05:44:12.342179 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
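kubelet.service has now cycled through restart counters 1 to 4, and each "Scheduled restart job" lands about ten seconds after the preceding failure, consistent with a Restart=always unit using a 10 s RestartSec (an inference from the timestamps; the unit file is not shown in the log). A sketch that recomputes the cadence from the timestamps quoted above:

    #!/usr/bin/env python3
    # Sketch: kubelet.service restart cadence, from timestamps in this log.
    from datetime import datetime

    FMT = "%H:%M:%S.%f"
    events = [  # (failure time, next scheduled restart), copied from the log
        ("05:43:30.332422", "05:43:40.377698"),  # counter 1
        ("05:43:40.796318", "05:43:50.877741"),  # counter 2
        ("05:43:51.392930", "05:44:01.626869"),  # counter 3
        ("05:44:01.813327", "05:44:11.878766"),  # counter 4
    ]
    for fail, restart in events:
        delta = datetime.strptime(restart, FMT) - datetime.strptime(fail, FMT)
        print(f"restarted {delta.total_seconds():.1f}s after failure")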
May 8 05:44:15.261771 containerd[1459]: time="2025-05-08T05:44:15.261594584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:44:15.264655 containerd[1459]: time="2025-05-08T05:44:15.264560112Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021" May 8 05:44:15.267983 containerd[1459]: time="2025-05-08T05:44:15.267575664Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:44:15.274584 containerd[1459]: time="2025-05-08T05:44:15.274429753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:44:15.278237 containerd[1459]: time="2025-05-08T05:44:15.277993684Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.1740959s" May 8 05:44:15.278237 containerd[1459]: time="2025-05-08T05:44:15.278064196Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 8 05:44:18.880216 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 05:44:18.892807 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 05:44:18.936253 systemd[1]: Reloading requested from client PID 2117 ('systemctl') (unit session-11.scope)... May 8 05:44:18.936273 systemd[1]: Reloading... May 8 05:44:19.049473 zram_generator::config[2156]: No configuration found. May 8 05:44:19.483729 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 05:44:19.567512 systemd[1]: Reloading finished in 630 ms. May 8 05:44:19.611095 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 8 05:44:19.611172 systemd[1]: kubelet.service: Failed with result 'signal'. May 8 05:44:19.611535 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 05:44:19.624138 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 05:44:19.713107 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 05:44:19.726682 (kubelet)[2221]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 05:44:20.018506 kubelet[2221]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 05:44:20.018506 kubelet[2221]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
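From here on the kubelet emits klog-style headers, e.g. I0508 05:44:20.738803 2221 server.go:486]: a severity letter, month/day, wall-clock time, PID, and source file:line. A sketch parser over one of the lines below:

    #!/usr/bin/env python3
    # Sketch: split a klog header like the kubelet lines below into fields.
    import re

    KLOG = re.compile(
        r"(?P<sev>[IWEF])(?P<mmdd>\d{4}) (?P<time>\d{2}:\d{2}:\d{2}\.\d{6})"
        r"\s+(?P<pid>\d+) (?P<src>[\w./-]+):(?P<line>\d+)\]")

    sample = ('I0508 05:44:20.738803 2221 server.go:486] '
              '"Kubelet version" kubeletVersion="v1.31.0"')
    m = KLOG.match(sample)
    print(m.groupdict() if m else "no match")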
May 8 05:44:20.018506 kubelet[2221]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 05:44:20.018506 kubelet[2221]: I0508 05:44:20.017573 2221 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 05:44:20.738863 kubelet[2221]: I0508 05:44:20.738803 2221 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 8 05:44:20.740485 kubelet[2221]: I0508 05:44:20.739093 2221 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 05:44:20.740485 kubelet[2221]: I0508 05:44:20.739679 2221 server.go:929] "Client rotation is on, will bootstrap in background" May 8 05:44:20.780025 kubelet[2221]: I0508 05:44:20.779713 2221 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 05:44:20.783098 kubelet[2221]: E0508 05:44:20.782951 2221 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.135:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.135:6443: connect: connection refused" logger="UnhandledError" May 8 05:44:20.793995 kubelet[2221]: E0508 05:44:20.793789 2221 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 05:44:20.793995 kubelet[2221]: I0508 05:44:20.793849 2221 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 05:44:20.804189 kubelet[2221]: I0508 05:44:20.804068 2221 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 05:44:20.804380 kubelet[2221]: I0508 05:44:20.804283 2221 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 8 05:44:20.804775 kubelet[2221]: I0508 05:44:20.804668 2221 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 05:44:20.805186 kubelet[2221]: I0508 05:44:20.804740 2221 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-fbb7d486d2.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 05:44:20.805186 kubelet[2221]: I0508 05:44:20.805165 2221 topology_manager.go:138] "Creating topology manager with none policy" May 8 05:44:20.805186 kubelet[2221]: I0508 05:44:20.805190 2221 container_manager_linux.go:300] "Creating device plugin manager" May 8 05:44:20.805551 kubelet[2221]: I0508 05:44:20.805402 2221 state_mem.go:36] "Initialized new in-memory state store" May 8 05:44:20.811266 kubelet[2221]: I0508 05:44:20.810858 2221 kubelet.go:408] "Attempting to sync node with API server" May 8 05:44:20.811266 kubelet[2221]: I0508 05:44:20.810910 2221 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 05:44:20.811266 kubelet[2221]: I0508 05:44:20.810965 2221 kubelet.go:314] "Adding apiserver pod source" May 8 05:44:20.811266 kubelet[2221]: I0508 05:44:20.810995 2221 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 05:44:20.817907 kubelet[2221]: W0508 05:44:20.817540 2221 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-fbb7d486d2.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.135:6443: connect: connection refused May 8 05:44:20.817907 kubelet[2221]: E0508 05:44:20.817662 2221 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.24.4.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-fbb7d486d2.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.135:6443: connect: connection refused" logger="UnhandledError" May 8 05:44:20.822379 kubelet[2221]: W0508 05:44:20.822178 2221 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.135:6443: connect: connection refused May 8 05:44:20.822379 kubelet[2221]: E0508 05:44:20.822294 2221 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.135:6443: connect: connection refused" logger="UnhandledError" May 8 05:44:20.823023 kubelet[2221]: I0508 05:44:20.822959 2221 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 8 05:44:20.827752 kubelet[2221]: I0508 05:44:20.827521 2221 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 05:44:20.829539 kubelet[2221]: W0508 05:44:20.829091 2221 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 8 05:44:20.834203 kubelet[2221]: I0508 05:44:20.834147 2221 server.go:1269] "Started kubelet" May 8 05:44:20.837341 kubelet[2221]: I0508 05:44:20.837237 2221 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 05:44:20.845465 kubelet[2221]: I0508 05:44:20.845337 2221 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 05:44:20.846579 kubelet[2221]: I0508 05:44:20.846132 2221 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 05:44:20.849483 kubelet[2221]: I0508 05:44:20.847315 2221 server.go:460] "Adding debug handlers to kubelet server" May 8 05:44:20.855739 kubelet[2221]: I0508 05:44:20.855213 2221 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 05:44:20.857061 kubelet[2221]: E0508 05:44:20.850633 2221 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.135:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.135:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-n-fbb7d486d2.novalocal.183d76fd5035719d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-n-fbb7d486d2.novalocal,UID:ci-4081-3-3-n-fbb7d486d2.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-fbb7d486d2.novalocal,},FirstTimestamp:2025-05-08 05:44:20.834103709 +0000 UTC m=+1.101081368,LastTimestamp:2025-05-08 05:44:20.834103709 +0000 UTC m=+1.101081368,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-fbb7d486d2.novalocal,}" May 8 05:44:20.860094 kubelet[2221]: I0508 05:44:20.860053 2221 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 05:44:20.865090 
kubelet[2221]: I0508 05:44:20.865042 2221 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 8 05:44:20.865423 kubelet[2221]: E0508 05:44:20.865365 2221 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-fbb7d486d2.novalocal\" not found" May 8 05:44:20.868019 kubelet[2221]: I0508 05:44:20.864642 2221 volume_manager.go:289] "Starting Kubelet Volume Manager" May 8 05:44:20.868253 kubelet[2221]: I0508 05:44:20.868211 2221 reconciler.go:26] "Reconciler: start to sync state" May 8 05:44:20.868523 kubelet[2221]: E0508 05:44:20.868393 2221 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-fbb7d486d2.novalocal?timeout=10s\": dial tcp 172.24.4.135:6443: connect: connection refused" interval="200ms" May 8 05:44:20.868679 kubelet[2221]: W0508 05:44:20.868597 2221 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.135:6443: connect: connection refused May 8 05:44:20.868792 kubelet[2221]: E0508 05:44:20.868700 2221 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.135:6443: connect: connection refused" logger="UnhandledError" May 8 05:44:20.869732 kubelet[2221]: I0508 05:44:20.869686 2221 factory.go:221] Registration of the systemd container factory successfully May 8 05:44:20.870174 kubelet[2221]: I0508 05:44:20.870085 2221 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 05:44:20.874241 kubelet[2221]: I0508 05:44:20.873495 2221 factory.go:221] Registration of the containerd container factory successfully May 8 05:44:20.886369 kubelet[2221]: I0508 05:44:20.886327 2221 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 05:44:20.887298 kubelet[2221]: I0508 05:44:20.887285 2221 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 05:44:20.887389 kubelet[2221]: I0508 05:44:20.887379 2221 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 05:44:20.887576 kubelet[2221]: I0508 05:44:20.887564 2221 kubelet.go:2321] "Starting kubelet main sync loop" May 8 05:44:20.887699 kubelet[2221]: E0508 05:44:20.887679 2221 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 05:44:20.894545 kubelet[2221]: W0508 05:44:20.894481 2221 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.135:6443: connect: connection refused May 8 05:44:20.894663 kubelet[2221]: E0508 05:44:20.894643 2221 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.135:6443: connect: connection refused" logger="UnhandledError" May 8 05:44:20.908295 kubelet[2221]: I0508 05:44:20.908271 2221 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 05:44:20.908295 kubelet[2221]: I0508 05:44:20.908289 2221 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 05:44:20.908385 kubelet[2221]: I0508 05:44:20.908303 2221 state_mem.go:36] "Initialized new in-memory state store" May 8 05:44:20.913127 kubelet[2221]: I0508 05:44:20.913100 2221 policy_none.go:49] "None policy: Start" May 8 05:44:20.913664 kubelet[2221]: I0508 05:44:20.913643 2221 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 05:44:20.914045 kubelet[2221]: I0508 05:44:20.913749 2221 state_mem.go:35] "Initializing new in-memory state store" May 8 05:44:20.923096 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 8 05:44:20.936625 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 8 05:44:20.947491 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 8 05:44:20.950501 kubelet[2221]: I0508 05:44:20.950098 2221 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 05:44:20.951024 kubelet[2221]: I0508 05:44:20.950974 2221 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 05:44:20.951024 kubelet[2221]: I0508 05:44:20.950994 2221 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 05:44:20.951261 kubelet[2221]: I0508 05:44:20.951225 2221 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 05:44:20.953159 kubelet[2221]: E0508 05:44:20.953060 2221 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-n-fbb7d486d2.novalocal\" not found" May 8 05:44:21.002600 systemd[1]: Created slice kubepods-burstable-pod0122da21cf2d9b7749a3693da6096c58.slice - libcontainer container kubepods-burstable-pod0122da21cf2d9b7749a3693da6096c58.slice. May 8 05:44:21.013526 systemd[1]: Created slice kubepods-burstable-pod3db5aeb23db1f6e225605fc940b879a9.slice - libcontainer container kubepods-burstable-pod3db5aeb23db1f6e225605fc940b879a9.slice. 
May 8 05:44:21.026383 systemd[1]: Created slice kubepods-burstable-podfb70355027aa88c21296b99cda0bae8f.slice - libcontainer container kubepods-burstable-podfb70355027aa88c21296b99cda0bae8f.slice. May 8 05:44:21.054861 kubelet[2221]: I0508 05:44:21.054753 2221 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:21.055730 kubelet[2221]: E0508 05:44:21.055292 2221 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.135:6443/api/v1/nodes\": dial tcp 172.24.4.135:6443: connect: connection refused" node="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:21.069756 kubelet[2221]: E0508 05:44:21.069644 2221 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-fbb7d486d2.novalocal?timeout=10s\": dial tcp 172.24.4.135:6443: connect: connection refused" interval="400ms" May 8 05:44:21.069756 kubelet[2221]: I0508 05:44:21.069664 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3db5aeb23db1f6e225605fc940b879a9-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-fbb7d486d2.novalocal\" (UID: \"3db5aeb23db1f6e225605fc940b879a9\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:21.069756 kubelet[2221]: I0508 05:44:21.069771 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fb70355027aa88c21296b99cda0bae8f-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-fbb7d486d2.novalocal\" (UID: \"fb70355027aa88c21296b99cda0bae8f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:21.070086 kubelet[2221]: I0508 05:44:21.069826 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fb70355027aa88c21296b99cda0bae8f-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-fbb7d486d2.novalocal\" (UID: \"fb70355027aa88c21296b99cda0bae8f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:21.070086 kubelet[2221]: I0508 05:44:21.069872 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fb70355027aa88c21296b99cda0bae8f-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-fbb7d486d2.novalocal\" (UID: \"fb70355027aa88c21296b99cda0bae8f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:21.070086 kubelet[2221]: I0508 05:44:21.069922 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fb70355027aa88c21296b99cda0bae8f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-fbb7d486d2.novalocal\" (UID: \"fb70355027aa88c21296b99cda0bae8f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:21.070086 kubelet[2221]: I0508 05:44:21.069966 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3db5aeb23db1f6e225605fc940b879a9-ca-certs\") pod 
\"kube-apiserver-ci-4081-3-3-n-fbb7d486d2.novalocal\" (UID: \"3db5aeb23db1f6e225605fc940b879a9\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:21.070328 kubelet[2221]: I0508 05:44:21.070008 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3db5aeb23db1f6e225605fc940b879a9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-fbb7d486d2.novalocal\" (UID: \"3db5aeb23db1f6e225605fc940b879a9\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:21.070328 kubelet[2221]: I0508 05:44:21.070054 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fb70355027aa88c21296b99cda0bae8f-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-fbb7d486d2.novalocal\" (UID: \"fb70355027aa88c21296b99cda0bae8f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:21.070328 kubelet[2221]: I0508 05:44:21.070097 2221 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0122da21cf2d9b7749a3693da6096c58-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-fbb7d486d2.novalocal\" (UID: \"0122da21cf2d9b7749a3693da6096c58\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:21.258889 kubelet[2221]: I0508 05:44:21.258700 2221 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:21.259513 kubelet[2221]: E0508 05:44:21.259391 2221 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.135:6443/api/v1/nodes\": dial tcp 172.24.4.135:6443: connect: connection refused" node="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:21.310825 containerd[1459]: time="2025-05-08T05:44:21.310702306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-fbb7d486d2.novalocal,Uid:0122da21cf2d9b7749a3693da6096c58,Namespace:kube-system,Attempt:0,}" May 8 05:44:21.334663 containerd[1459]: time="2025-05-08T05:44:21.334533835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-fbb7d486d2.novalocal,Uid:fb70355027aa88c21296b99cda0bae8f,Namespace:kube-system,Attempt:0,}" May 8 05:44:21.335415 containerd[1459]: time="2025-05-08T05:44:21.334550737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-fbb7d486d2.novalocal,Uid:3db5aeb23db1f6e225605fc940b879a9,Namespace:kube-system,Attempt:0,}" May 8 05:44:21.471374 kubelet[2221]: E0508 05:44:21.471226 2221 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-fbb7d486d2.novalocal?timeout=10s\": dial tcp 172.24.4.135:6443: connect: connection refused" interval="800ms" May 8 05:44:21.663501 kubelet[2221]: I0508 05:44:21.663406 2221 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:21.664549 kubelet[2221]: E0508 05:44:21.664434 2221 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.135:6443/api/v1/nodes\": dial tcp 172.24.4.135:6443: connect: connection refused" node="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 
05:44:21.825437 kubelet[2221]: W0508 05:44:21.825290 2221 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.135:6443: connect: connection refused May 8 05:44:21.825437 kubelet[2221]: E0508 05:44:21.825387 2221 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.135:6443: connect: connection refused" logger="UnhandledError" May 8 05:44:21.935336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3799586700.mount: Deactivated successfully. May 8 05:44:21.943915 containerd[1459]: time="2025-05-08T05:44:21.943811136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 05:44:21.948535 containerd[1459]: time="2025-05-08T05:44:21.948357900Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" May 8 05:44:21.949881 containerd[1459]: time="2025-05-08T05:44:21.949782804Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 05:44:21.953279 containerd[1459]: time="2025-05-08T05:44:21.953199437Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 05:44:21.955963 containerd[1459]: time="2025-05-08T05:44:21.955731268Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 05:44:21.955963 containerd[1459]: time="2025-05-08T05:44:21.955886470Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 05:44:21.957961 containerd[1459]: time="2025-05-08T05:44:21.957787859Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 05:44:21.965242 containerd[1459]: time="2025-05-08T05:44:21.965034199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 05:44:21.970424 containerd[1459]: time="2025-05-08T05:44:21.969518315Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 658.670475ms" May 8 05:44:21.971389 containerd[1459]: time="2025-05-08T05:44:21.971329574Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 636.478473ms" May 8 05:44:21.979918 containerd[1459]: time="2025-05-08T05:44:21.979845247Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 644.794141ms" May 8 05:44:22.100139 kubelet[2221]: W0508 05:44:22.100084 2221 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.135:6443: connect: connection refused May 8 05:44:22.100741 kubelet[2221]: E0508 05:44:22.100688 2221 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.135:6443: connect: connection refused" logger="UnhandledError" May 8 05:44:22.181137 kubelet[2221]: W0508 05:44:22.180840 2221 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-fbb7d486d2.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.135:6443: connect: connection refused May 8 05:44:22.181137 kubelet[2221]: E0508 05:44:22.181062 2221 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-fbb7d486d2.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.135:6443: connect: connection refused" logger="UnhandledError" May 8 05:44:22.187908 containerd[1459]: time="2025-05-08T05:44:22.186161916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 05:44:22.187908 containerd[1459]: time="2025-05-08T05:44:22.186954814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 05:44:22.187908 containerd[1459]: time="2025-05-08T05:44:22.186982936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:44:22.187908 containerd[1459]: time="2025-05-08T05:44:22.187056365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:44:22.190300 containerd[1459]: time="2025-05-08T05:44:22.189769697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 05:44:22.190609 containerd[1459]: time="2025-05-08T05:44:22.190070591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 05:44:22.190609 containerd[1459]: time="2025-05-08T05:44:22.190139591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:44:22.193300 containerd[1459]: time="2025-05-08T05:44:22.192611471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:44:22.203583 containerd[1459]: time="2025-05-08T05:44:22.203273130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 05:44:22.204703 containerd[1459]: time="2025-05-08T05:44:22.203389939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 05:44:22.204703 containerd[1459]: time="2025-05-08T05:44:22.204517275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:44:22.205691 containerd[1459]: time="2025-05-08T05:44:22.205364084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:44:22.221926 systemd[1]: Started cri-containerd-e79c9e89ef429c05b00b0260fd0c3394a2ff9f87ab27a2cc9cb69e590e71d970.scope - libcontainer container e79c9e89ef429c05b00b0260fd0c3394a2ff9f87ab27a2cc9cb69e590e71d970. May 8 05:44:22.235381 kubelet[2221]: W0508 05:44:22.235105 2221 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.135:6443: connect: connection refused May 8 05:44:22.235381 kubelet[2221]: E0508 05:44:22.235168 2221 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.135:6443: connect: connection refused" logger="UnhandledError" May 8 05:44:22.244566 systemd[1]: Started cri-containerd-33e703c211b301141de046c19012d4287f45c9b32ee955bd06176ba60a87b454.scope - libcontainer container 33e703c211b301141de046c19012d4287f45c9b32ee955bd06176ba60a87b454. May 8 05:44:22.245719 systemd[1]: Started cri-containerd-ec712a6e94d2bfeba66789e368d123a4cc640a33e399a5ee158c08df8b8af361.scope - libcontainer container ec712a6e94d2bfeba66789e368d123a4cc640a33e399a5ee158c08df8b8af361. 
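[Editor's note] The "Failed to ensure lease exists, will retry" entries double their retry interval each time: 200ms, then 400ms, then 800ms here, and 1.6s just below. A sketch of that doubling backoff follows; the 7s cap is an assumption for illustration and is not taken from this log.

```go
package main

import (
	"fmt"
	"time"
)

// next doubles the retry interval up to a cap, matching the progression in
// the lease-controller retries above (200ms -> 400ms -> 800ms -> 1.6s).
// The cap passed in main is an illustrative assumption.
func next(cur, limit time.Duration) time.Duration {
	if doubled := cur * 2; doubled < limit {
		return doubled
	}
	return limit
}

func main() {
	d := 200 * time.Millisecond
	for i := 0; i < 5; i++ {
		fmt.Println(d) // 200ms 400ms 800ms 1.6s 3.2s
		d = next(d, 7*time.Second)
	}
}
```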
May 8 05:44:22.273874 kubelet[2221]: E0508 05:44:22.273829 2221 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-fbb7d486d2.novalocal?timeout=10s\": dial tcp 172.24.4.135:6443: connect: connection refused" interval="1.6s" May 8 05:44:22.306290 containerd[1459]: time="2025-05-08T05:44:22.306133013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-fbb7d486d2.novalocal,Uid:fb70355027aa88c21296b99cda0bae8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e79c9e89ef429c05b00b0260fd0c3394a2ff9f87ab27a2cc9cb69e590e71d970\"" May 8 05:44:22.309838 containerd[1459]: time="2025-05-08T05:44:22.309809223Z" level=info msg="CreateContainer within sandbox \"e79c9e89ef429c05b00b0260fd0c3394a2ff9f87ab27a2cc9cb69e590e71d970\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 05:44:22.317142 containerd[1459]: time="2025-05-08T05:44:22.317089316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-fbb7d486d2.novalocal,Uid:3db5aeb23db1f6e225605fc940b879a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"33e703c211b301141de046c19012d4287f45c9b32ee955bd06176ba60a87b454\"" May 8 05:44:22.319900 containerd[1459]: time="2025-05-08T05:44:22.319684978Z" level=info msg="CreateContainer within sandbox \"33e703c211b301141de046c19012d4287f45c9b32ee955bd06176ba60a87b454\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 05:44:22.325878 containerd[1459]: time="2025-05-08T05:44:22.325837263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-fbb7d486d2.novalocal,Uid:0122da21cf2d9b7749a3693da6096c58,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec712a6e94d2bfeba66789e368d123a4cc640a33e399a5ee158c08df8b8af361\"" May 8 05:44:22.329155 containerd[1459]: time="2025-05-08T05:44:22.329058099Z" level=info msg="CreateContainer within sandbox \"ec712a6e94d2bfeba66789e368d123a4cc640a33e399a5ee158c08df8b8af361\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 05:44:22.343721 containerd[1459]: time="2025-05-08T05:44:22.343651918Z" level=info msg="CreateContainer within sandbox \"e79c9e89ef429c05b00b0260fd0c3394a2ff9f87ab27a2cc9cb69e590e71d970\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"623689b6abdef7fd62aea8243e2f58cc20da939bb3b20feb7c48966a157e6499\"" May 8 05:44:22.344333 containerd[1459]: time="2025-05-08T05:44:22.344298541Z" level=info msg="StartContainer for \"623689b6abdef7fd62aea8243e2f58cc20da939bb3b20feb7c48966a157e6499\"" May 8 05:44:22.358297 containerd[1459]: time="2025-05-08T05:44:22.358249755Z" level=info msg="CreateContainer within sandbox \"33e703c211b301141de046c19012d4287f45c9b32ee955bd06176ba60a87b454\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ec50c932e98ab08be589fc713f54718aff223689f8b352349708b6495d7e4aba\"" May 8 05:44:22.359427 containerd[1459]: time="2025-05-08T05:44:22.359386518Z" level=info msg="StartContainer for \"ec50c932e98ab08be589fc713f54718aff223689f8b352349708b6495d7e4aba\"" May 8 05:44:22.368112 containerd[1459]: time="2025-05-08T05:44:22.368047462Z" level=info msg="CreateContainer within sandbox \"ec712a6e94d2bfeba66789e368d123a4cc640a33e399a5ee158c08df8b8af361\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"f3e9a484a1bc2552998e4f1f948e024131c1179cbca6b9478406ef28f9c54c22\"" May 8 05:44:22.369863 containerd[1459]: time="2025-05-08T05:44:22.369766178Z" level=info msg="StartContainer for \"f3e9a484a1bc2552998e4f1f948e024131c1179cbca6b9478406ef28f9c54c22\"" May 8 05:44:22.380066 systemd[1]: Started cri-containerd-623689b6abdef7fd62aea8243e2f58cc20da939bb3b20feb7c48966a157e6499.scope - libcontainer container 623689b6abdef7fd62aea8243e2f58cc20da939bb3b20feb7c48966a157e6499. May 8 05:44:22.402584 systemd[1]: Started cri-containerd-ec50c932e98ab08be589fc713f54718aff223689f8b352349708b6495d7e4aba.scope - libcontainer container ec50c932e98ab08be589fc713f54718aff223689f8b352349708b6495d7e4aba. May 8 05:44:22.411634 systemd[1]: Started cri-containerd-f3e9a484a1bc2552998e4f1f948e024131c1179cbca6b9478406ef28f9c54c22.scope - libcontainer container f3e9a484a1bc2552998e4f1f948e024131c1179cbca6b9478406ef28f9c54c22. May 8 05:44:22.466488 containerd[1459]: time="2025-05-08T05:44:22.465022552Z" level=info msg="StartContainer for \"623689b6abdef7fd62aea8243e2f58cc20da939bb3b20feb7c48966a157e6499\" returns successfully" May 8 05:44:22.473335 kubelet[2221]: I0508 05:44:22.473228 2221 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:22.473859 kubelet[2221]: E0508 05:44:22.473791 2221 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.135:6443/api/v1/nodes\": dial tcp 172.24.4.135:6443: connect: connection refused" node="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:22.490561 containerd[1459]: time="2025-05-08T05:44:22.490454822Z" level=info msg="StartContainer for \"ec50c932e98ab08be589fc713f54718aff223689f8b352349708b6495d7e4aba\" returns successfully" May 8 05:44:22.500571 containerd[1459]: time="2025-05-08T05:44:22.500511216Z" level=info msg="StartContainer for \"f3e9a484a1bc2552998e4f1f948e024131c1179cbca6b9478406ef28f9c54c22\" returns successfully" May 8 05:44:24.076046 kubelet[2221]: I0508 05:44:24.076015 2221 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:24.560004 kubelet[2221]: E0508 05:44:24.559971 2221 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-3-n-fbb7d486d2.novalocal\" not found" node="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:24.715332 kubelet[2221]: I0508 05:44:24.714756 2221 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:24.715332 kubelet[2221]: E0508 05:44:24.714790 2221 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-3-n-fbb7d486d2.novalocal\": node \"ci-4081-3-3-n-fbb7d486d2.novalocal\" not found" May 8 05:44:24.732745 kubelet[2221]: E0508 05:44:24.732697 2221 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-fbb7d486d2.novalocal\" not found" May 8 05:44:24.833311 kubelet[2221]: E0508 05:44:24.833027 2221 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-fbb7d486d2.novalocal\" not found" May 8 05:44:24.933129 kubelet[2221]: E0508 05:44:24.933077 2221 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-fbb7d486d2.novalocal\" not found" May 8 05:44:25.820638 kubelet[2221]: I0508 05:44:25.820564 2221 apiserver.go:52] "Watching apiserver" May 8 05:44:25.866187 kubelet[2221]: I0508 05:44:25.866140 
2221 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 8 05:44:27.040231 systemd[1]: Reloading requested from client PID 2493 ('systemctl') (unit session-11.scope)... May 8 05:44:27.040263 systemd[1]: Reloading... May 8 05:44:27.150474 zram_generator::config[2532]: No configuration found. May 8 05:44:27.290968 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 05:44:27.391585 systemd[1]: Reloading finished in 350 ms. May 8 05:44:27.429094 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 05:44:27.429700 kubelet[2221]: I0508 05:44:27.429497 2221 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 05:44:27.442827 systemd[1]: kubelet.service: Deactivated successfully. May 8 05:44:27.443009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 05:44:27.443053 systemd[1]: kubelet.service: Consumed 1.360s CPU time, 117.5M memory peak, 0B memory swap peak. May 8 05:44:27.449103 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 05:44:27.716799 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 05:44:27.720074 (kubelet)[2596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 05:44:27.854810 kubelet[2596]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 05:44:27.854810 kubelet[2596]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 05:44:27.854810 kubelet[2596]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 05:44:27.855181 kubelet[2596]: I0508 05:44:27.854924 2596 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 05:44:27.862345 kubelet[2596]: I0508 05:44:27.862314 2596 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 8 05:44:27.862345 kubelet[2596]: I0508 05:44:27.862337 2596 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 05:44:27.862593 kubelet[2596]: I0508 05:44:27.862569 2596 server.go:929] "Client rotation is on, will bootstrap in background" May 8 05:44:27.863922 kubelet[2596]: I0508 05:44:27.863899 2596 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
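[Editor's note] Each static pod above went through the same three CRI steps: RunPodSandbox returned a sandbox id, CreateContainer within that sandbox returned a container id, and StartContainer launched it. The sketch below traces only that call order through a hypothetical `runtimeService` interface invented for illustration; the real CRI client lives in k8s.io/cri-api and takes full sandbox/container configs.

```go
package main

import "fmt"

// runtimeService is a hypothetical stand-in for the CRI runtime service,
// reduced to the three calls visible in the log above.
type runtimeService interface {
	RunPodSandbox(pod string) (string, error)
	CreateContainer(sandboxID, name string) (string, error)
	StartContainer(containerID string) error
}

// fakeRuntime is a toy implementation so the sequence can be run end to end.
type fakeRuntime struct{}

func (fakeRuntime) RunPodSandbox(pod string) (string, error) { return "sb-" + pod, nil }
func (fakeRuntime) CreateContainer(sb, name string) (string, error) {
	return "ctr-" + name + "-in-" + sb, nil
}
func (fakeRuntime) StartContainer(id string) error { fmt.Println("started", id); return nil }

// startStaticPod walks the sandbox -> create -> start sequence.
func startStaticPod(r runtimeService, pod, container string) error {
	sb, err := r.RunPodSandbox(pod)
	if err != nil {
		return fmt.Errorf("RunPodSandbox: %w", err)
	}
	cid, err := r.CreateContainer(sb, container)
	if err != nil {
		return fmt.Errorf("CreateContainer: %w", err)
	}
	return r.StartContainer(cid)
}

func main() {
	_ = startStaticPod(fakeRuntime{}, "kube-apiserver", "kube-apiserver")
}
```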
May 8 05:44:27.868131 kubelet[2596]: I0508 05:44:27.868089 2596 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 05:44:27.871713 kubelet[2596]: E0508 05:44:27.871672 2596 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 05:44:27.871713 kubelet[2596]: I0508 05:44:27.871702 2596 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 05:44:27.874625 kubelet[2596]: I0508 05:44:27.874598 2596 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 8 05:44:27.874703 kubelet[2596]: I0508 05:44:27.874689 2596 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 8 05:44:27.874818 kubelet[2596]: I0508 05:44:27.874784 2596 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 05:44:27.874993 kubelet[2596]: I0508 05:44:27.874811 2596 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-fbb7d486d2.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 05:44:27.874993 kubelet[2596]: I0508 05:44:27.874991 2596 topology_manager.go:138] "Creating topology manager with none policy" May 8 05:44:27.875133 kubelet[2596]: I0508 05:44:27.875002 2596 container_manager_linux.go:300] "Creating device plugin manager" May 8 05:44:27.875133 kubelet[2596]: I0508 05:44:27.875031 2596 state_mem.go:36] "Initialized new in-memory state store" May 8 05:44:27.875133 kubelet[2596]: I0508 05:44:27.875112 2596 kubelet.go:408] "Attempting to sync node with API server" May 8 05:44:27.875133 kubelet[2596]: I0508 05:44:27.875123 2596 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 05:44:27.875227 kubelet[2596]: I0508 
05:44:27.875145 2596 kubelet.go:314] "Adding apiserver pod source" May 8 05:44:27.875227 kubelet[2596]: I0508 05:44:27.875157 2596 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 05:44:27.875779 kubelet[2596]: I0508 05:44:27.875763 2596 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 8 05:44:27.876134 kubelet[2596]: I0508 05:44:27.876118 2596 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 05:44:27.876507 kubelet[2596]: I0508 05:44:27.876489 2596 server.go:1269] "Started kubelet" May 8 05:44:27.879433 kubelet[2596]: I0508 05:44:27.878618 2596 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 05:44:27.883079 kubelet[2596]: I0508 05:44:27.883042 2596 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 05:44:27.884261 kubelet[2596]: I0508 05:44:27.884239 2596 server.go:460] "Adding debug handlers to kubelet server" May 8 05:44:27.888233 kubelet[2596]: I0508 05:44:27.886896 2596 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 05:44:27.888233 kubelet[2596]: I0508 05:44:27.887092 2596 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 05:44:27.888233 kubelet[2596]: I0508 05:44:27.887624 2596 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 05:44:27.891889 kubelet[2596]: I0508 05:44:27.891858 2596 volume_manager.go:289] "Starting Kubelet Volume Manager" May 8 05:44:27.893467 kubelet[2596]: E0508 05:44:27.893407 2596 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-fbb7d486d2.novalocal\" not found" May 8 05:44:27.900341 kubelet[2596]: I0508 05:44:27.898636 2596 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 8 05:44:27.900341 kubelet[2596]: I0508 05:44:27.898761 2596 reconciler.go:26] "Reconciler: start to sync state" May 8 05:44:27.901964 kubelet[2596]: I0508 05:44:27.901932 2596 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 05:44:27.907268 kubelet[2596]: I0508 05:44:27.907240 2596 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 05:44:27.907350 kubelet[2596]: I0508 05:44:27.907288 2596 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 05:44:27.907350 kubelet[2596]: I0508 05:44:27.907307 2596 kubelet.go:2321] "Starting kubelet main sync loop" May 8 05:44:27.907407 kubelet[2596]: E0508 05:44:27.907347 2596 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 05:44:27.908170 kubelet[2596]: I0508 05:44:27.908151 2596 factory.go:221] Registration of the systemd container factory successfully May 8 05:44:27.908346 kubelet[2596]: I0508 05:44:27.908327 2596 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 05:44:27.916000 kubelet[2596]: E0508 05:44:27.915975 2596 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 05:44:27.919861 kubelet[2596]: I0508 05:44:27.919716 2596 factory.go:221] Registration of the containerd container factory successfully May 8 05:44:27.962143 kubelet[2596]: I0508 05:44:27.962121 2596 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 05:44:27.962343 kubelet[2596]: I0508 05:44:27.962329 2596 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 05:44:27.962410 kubelet[2596]: I0508 05:44:27.962401 2596 state_mem.go:36] "Initialized new in-memory state store" May 8 05:44:27.962652 kubelet[2596]: I0508 05:44:27.962636 2596 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 05:44:27.962741 kubelet[2596]: I0508 05:44:27.962716 2596 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 05:44:27.962800 kubelet[2596]: I0508 05:44:27.962792 2596 policy_none.go:49] "None policy: Start" May 8 05:44:27.963516 kubelet[2596]: I0508 05:44:27.963485 2596 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 05:44:27.963516 kubelet[2596]: I0508 05:44:27.963510 2596 state_mem.go:35] "Initializing new in-memory state store" May 8 05:44:27.963673 kubelet[2596]: I0508 05:44:27.963658 2596 state_mem.go:75] "Updated machine memory state" May 8 05:44:27.968322 kubelet[2596]: I0508 05:44:27.968249 2596 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 05:44:27.969104 kubelet[2596]: I0508 05:44:27.969093 2596 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 05:44:27.969744 kubelet[2596]: I0508 05:44:27.969231 2596 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 05:44:27.969744 kubelet[2596]: I0508 05:44:27.969556 2596 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 05:44:28.024081 kubelet[2596]: W0508 05:44:28.023920 2596 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 8 05:44:28.025146 kubelet[2596]: W0508 05:44:28.025124 2596 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 8 05:44:28.025480 kubelet[2596]: W0508 05:44:28.025463 2596 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 8 05:44:28.073525 kubelet[2596]: I0508 05:44:28.072280 2596 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:28.089390 kubelet[2596]: I0508 05:44:28.089367 2596 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:28.089675 kubelet[2596]: I0508 05:44:28.089562 2596 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:28.099940 kubelet[2596]: I0508 05:44:28.099857 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3db5aeb23db1f6e225605fc940b879a9-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-fbb7d486d2.novalocal\" (UID: \"3db5aeb23db1f6e225605fc940b879a9\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:28.099940 kubelet[2596]: I0508 
05:44:28.099947 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fb70355027aa88c21296b99cda0bae8f-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-fbb7d486d2.novalocal\" (UID: \"fb70355027aa88c21296b99cda0bae8f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:28.100110 kubelet[2596]: I0508 05:44:28.100017 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fb70355027aa88c21296b99cda0bae8f-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-fbb7d486d2.novalocal\" (UID: \"fb70355027aa88c21296b99cda0bae8f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:28.100153 kubelet[2596]: I0508 05:44:28.100093 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fb70355027aa88c21296b99cda0bae8f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-fbb7d486d2.novalocal\" (UID: \"fb70355027aa88c21296b99cda0bae8f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:28.100192 kubelet[2596]: I0508 05:44:28.100147 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0122da21cf2d9b7749a3693da6096c58-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-fbb7d486d2.novalocal\" (UID: \"0122da21cf2d9b7749a3693da6096c58\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:28.100219 kubelet[2596]: I0508 05:44:28.100190 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3db5aeb23db1f6e225605fc940b879a9-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-fbb7d486d2.novalocal\" (UID: \"3db5aeb23db1f6e225605fc940b879a9\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:28.100794 kubelet[2596]: I0508 05:44:28.100233 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3db5aeb23db1f6e225605fc940b879a9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-fbb7d486d2.novalocal\" (UID: \"3db5aeb23db1f6e225605fc940b879a9\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:28.100794 kubelet[2596]: I0508 05:44:28.100286 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fb70355027aa88c21296b99cda0bae8f-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-fbb7d486d2.novalocal\" (UID: \"fb70355027aa88c21296b99cda0bae8f\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:28.100794 kubelet[2596]: I0508 05:44:28.100326 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fb70355027aa88c21296b99cda0bae8f-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-fbb7d486d2.novalocal\" (UID: \"fb70355027aa88c21296b99cda0bae8f\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:28.882999 kubelet[2596]: I0508 05:44:28.882924 2596 apiserver.go:52] "Watching apiserver" May 8 05:44:28.898900 kubelet[2596]: I0508 05:44:28.898837 2596 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 8 05:44:28.952053 kubelet[2596]: W0508 05:44:28.952009 2596 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 8 05:44:28.952163 kubelet[2596]: E0508 05:44:28.952123 2596 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-3-n-fbb7d486d2.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:44:28.985296 kubelet[2596]: I0508 05:44:28.984991 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-3-n-fbb7d486d2.novalocal" podStartSLOduration=0.984972288 podStartE2EDuration="984.972288ms" podCreationTimestamp="2025-05-08 05:44:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 05:44:28.973330254 +0000 UTC m=+1.249931517" watchObservedRunningTime="2025-05-08 05:44:28.984972288 +0000 UTC m=+1.261573511" May 8 05:44:28.985296 kubelet[2596]: I0508 05:44:28.985091 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-n-fbb7d486d2.novalocal" podStartSLOduration=0.985086462 podStartE2EDuration="985.086462ms" podCreationTimestamp="2025-05-08 05:44:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 05:44:28.983778178 +0000 UTC m=+1.260379441" watchObservedRunningTime="2025-05-08 05:44:28.985086462 +0000 UTC m=+1.261687686" May 8 05:44:29.025883 kubelet[2596]: I0508 05:44:29.025822 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-fbb7d486d2.novalocal" podStartSLOduration=1.025803851 podStartE2EDuration="1.025803851s" podCreationTimestamp="2025-05-08 05:44:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 05:44:28.999369482 +0000 UTC m=+1.275970745" watchObservedRunningTime="2025-05-08 05:44:29.025803851 +0000 UTC m=+1.302405064" May 8 05:44:33.225762 kubelet[2596]: I0508 05:44:33.225662 2596 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 05:44:33.226681 containerd[1459]: time="2025-05-08T05:44:33.226562663Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 8 05:44:33.227344 kubelet[2596]: I0508 05:44:33.226838 2596 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 05:44:33.782045 systemd[1]: Created slice kubepods-besteffort-podbffafdc9_7c31_49e0_9e6b_8a44db4a3d71.slice - libcontainer container kubepods-besteffort-podbffafdc9_7c31_49e0_9e6b_8a44db4a3d71.slice. 
May 8 05:44:33.837634 kubelet[2596]: I0508 05:44:33.837571 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bffafdc9-7c31-49e0-9e6b-8a44db4a3d71-xtables-lock\") pod \"kube-proxy-7hkmk\" (UID: \"bffafdc9-7c31-49e0-9e6b-8a44db4a3d71\") " pod="kube-system/kube-proxy-7hkmk" May 8 05:44:33.837634 kubelet[2596]: I0508 05:44:33.837615 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bffafdc9-7c31-49e0-9e6b-8a44db4a3d71-lib-modules\") pod \"kube-proxy-7hkmk\" (UID: \"bffafdc9-7c31-49e0-9e6b-8a44db4a3d71\") " pod="kube-system/kube-proxy-7hkmk" May 8 05:44:33.839555 kubelet[2596]: I0508 05:44:33.837640 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znhkd\" (UniqueName: \"kubernetes.io/projected/bffafdc9-7c31-49e0-9e6b-8a44db4a3d71-kube-api-access-znhkd\") pod \"kube-proxy-7hkmk\" (UID: \"bffafdc9-7c31-49e0-9e6b-8a44db4a3d71\") " pod="kube-system/kube-proxy-7hkmk" May 8 05:44:33.839555 kubelet[2596]: I0508 05:44:33.837666 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bffafdc9-7c31-49e0-9e6b-8a44db4a3d71-kube-proxy\") pod \"kube-proxy-7hkmk\" (UID: \"bffafdc9-7c31-49e0-9e6b-8a44db4a3d71\") " pod="kube-system/kube-proxy-7hkmk" May 8 05:44:33.979991 sudo[1719]: pam_unix(sudo:session): session closed for user root May 8 05:44:34.090378 containerd[1459]: time="2025-05-08T05:44:34.090181414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7hkmk,Uid:bffafdc9-7c31-49e0-9e6b-8a44db4a3d71,Namespace:kube-system,Attempt:0,}" May 8 05:44:34.129780 containerd[1459]: time="2025-05-08T05:44:34.128932835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 05:44:34.129780 containerd[1459]: time="2025-05-08T05:44:34.129079190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 05:44:34.129780 containerd[1459]: time="2025-05-08T05:44:34.129103054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:44:34.129780 containerd[1459]: time="2025-05-08T05:44:34.129364284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:44:34.166328 systemd[1]: run-containerd-runc-k8s.io-e151c8050b9166d141602fc8a126ea713692203a74cb429921ec9a19fe2e167a-runc.SmauKi.mount: Deactivated successfully. May 8 05:44:34.183597 systemd[1]: Started cri-containerd-e151c8050b9166d141602fc8a126ea713692203a74cb429921ec9a19fe2e167a.scope - libcontainer container e151c8050b9166d141602fc8a126ea713692203a74cb429921ec9a19fe2e167a. May 8 05:44:34.194559 sshd[1716]: pam_unix(sshd:session): session closed for user core May 8 05:44:34.199570 systemd-logind[1450]: Session 11 logged out. Waiting for processes to exit. May 8 05:44:34.200375 systemd[1]: sshd@8-172.24.4.135:22-172.24.4.1:49842.service: Deactivated successfully. May 8 05:44:34.202637 systemd[1]: session-11.scope: Deactivated successfully. May 8 05:44:34.202940 systemd[1]: session-11.scope: Consumed 6.591s CPU time, 157.3M memory peak, 0B memory swap peak. 
May 8 05:44:34.204746 systemd-logind[1450]: Removed session 11. May 8 05:44:34.211650 containerd[1459]: time="2025-05-08T05:44:34.211506750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7hkmk,Uid:bffafdc9-7c31-49e0-9e6b-8a44db4a3d71,Namespace:kube-system,Attempt:0,} returns sandbox id \"e151c8050b9166d141602fc8a126ea713692203a74cb429921ec9a19fe2e167a\"" May 8 05:44:34.216486 containerd[1459]: time="2025-05-08T05:44:34.216401473Z" level=info msg="CreateContainer within sandbox \"e151c8050b9166d141602fc8a126ea713692203a74cb429921ec9a19fe2e167a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 05:44:34.250000 containerd[1459]: time="2025-05-08T05:44:34.249928844Z" level=info msg="CreateContainer within sandbox \"e151c8050b9166d141602fc8a126ea713692203a74cb429921ec9a19fe2e167a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"811a5938a3b0144ae3b1bf81fd11b6c787255066db32a9cd40c413c8cfc38466\"" May 8 05:44:34.251310 containerd[1459]: time="2025-05-08T05:44:34.251256093Z" level=info msg="StartContainer for \"811a5938a3b0144ae3b1bf81fd11b6c787255066db32a9cd40c413c8cfc38466\"" May 8 05:44:34.279808 systemd[1]: Created slice kubepods-besteffort-podc3fe7704_e302_447e_ba6d_f155c5b54d7c.slice - libcontainer container kubepods-besteffort-podc3fe7704_e302_447e_ba6d_f155c5b54d7c.slice. May 8 05:44:34.294632 systemd[1]: Started cri-containerd-811a5938a3b0144ae3b1bf81fd11b6c787255066db32a9cd40c413c8cfc38466.scope - libcontainer container 811a5938a3b0144ae3b1bf81fd11b6c787255066db32a9cd40c413c8cfc38466. May 8 05:44:34.324336 containerd[1459]: time="2025-05-08T05:44:34.324200764Z" level=info msg="StartContainer for \"811a5938a3b0144ae3b1bf81fd11b6c787255066db32a9cd40c413c8cfc38466\" returns successfully" May 8 05:44:34.344192 kubelet[2596]: I0508 05:44:34.344083 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c3fe7704-e302-447e-ba6d-f155c5b54d7c-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-xxcm4\" (UID: \"c3fe7704-e302-447e-ba6d-f155c5b54d7c\") " pod="tigera-operator/tigera-operator-6f6897fdc5-xxcm4" May 8 05:44:34.344192 kubelet[2596]: I0508 05:44:34.344129 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wn4v\" (UniqueName: \"kubernetes.io/projected/c3fe7704-e302-447e-ba6d-f155c5b54d7c-kube-api-access-8wn4v\") pod \"tigera-operator-6f6897fdc5-xxcm4\" (UID: \"c3fe7704-e302-447e-ba6d-f155c5b54d7c\") " pod="tigera-operator/tigera-operator-6f6897fdc5-xxcm4" May 8 05:44:34.587550 containerd[1459]: time="2025-05-08T05:44:34.587485237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-xxcm4,Uid:c3fe7704-e302-447e-ba6d-f155c5b54d7c,Namespace:tigera-operator,Attempt:0,}" May 8 05:44:34.642863 containerd[1459]: time="2025-05-08T05:44:34.642649670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 05:44:34.642863 containerd[1459]: time="2025-05-08T05:44:34.642708050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 05:44:34.642863 containerd[1459]: time="2025-05-08T05:44:34.642728468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:44:34.643990 containerd[1459]: time="2025-05-08T05:44:34.642802367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:44:34.666769 systemd[1]: Started cri-containerd-e21094fc29e3fc7c8de5e1c178db7301dc12152659f87cbfcdb6039d1f7697e4.scope - libcontainer container e21094fc29e3fc7c8de5e1c178db7301dc12152659f87cbfcdb6039d1f7697e4. May 8 05:44:34.720253 containerd[1459]: time="2025-05-08T05:44:34.720091768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-xxcm4,Uid:c3fe7704-e302-447e-ba6d-f155c5b54d7c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e21094fc29e3fc7c8de5e1c178db7301dc12152659f87cbfcdb6039d1f7697e4\"" May 8 05:44:34.723731 containerd[1459]: time="2025-05-08T05:44:34.723681032Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 8 05:44:35.002054 kubelet[2596]: I0508 05:44:35.001643 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7hkmk" podStartSLOduration=2.001612534 podStartE2EDuration="2.001612534s" podCreationTimestamp="2025-05-08 05:44:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 05:44:35.001308674 +0000 UTC m=+7.277909947" watchObservedRunningTime="2025-05-08 05:44:35.001612534 +0000 UTC m=+7.278213827" May 8 05:44:36.385843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3532444106.mount: Deactivated successfully. May 8 05:44:37.131826 containerd[1459]: time="2025-05-08T05:44:37.131688493Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:44:37.133089 containerd[1459]: time="2025-05-08T05:44:37.132923520Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 8 05:44:37.134425 containerd[1459]: time="2025-05-08T05:44:37.134369713Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:44:37.137137 containerd[1459]: time="2025-05-08T05:44:37.137087912Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:44:37.138030 containerd[1459]: time="2025-05-08T05:44:37.137901628Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.414179099s" May 8 05:44:37.138030 containerd[1459]: time="2025-05-08T05:44:37.137941613Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 8 05:44:37.141076 containerd[1459]: time="2025-05-08T05:44:37.140960788Z" level=info msg="CreateContainer within sandbox \"e21094fc29e3fc7c8de5e1c178db7301dc12152659f87cbfcdb6039d1f7697e4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 8 05:44:37.165600 containerd[1459]: 
time="2025-05-08T05:44:37.165564935Z" level=info msg="CreateContainer within sandbox \"e21094fc29e3fc7c8de5e1c178db7301dc12152659f87cbfcdb6039d1f7697e4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"98b7e384b1203629ef91273cbc87d3b3909a79feb8ad46f2d7c8e1767713160d\"" May 8 05:44:37.166243 containerd[1459]: time="2025-05-08T05:44:37.166001764Z" level=info msg="StartContainer for \"98b7e384b1203629ef91273cbc87d3b3909a79feb8ad46f2d7c8e1767713160d\"" May 8 05:44:37.200594 systemd[1]: Started cri-containerd-98b7e384b1203629ef91273cbc87d3b3909a79feb8ad46f2d7c8e1767713160d.scope - libcontainer container 98b7e384b1203629ef91273cbc87d3b3909a79feb8ad46f2d7c8e1767713160d. May 8 05:44:37.227927 containerd[1459]: time="2025-05-08T05:44:37.227833860Z" level=info msg="StartContainer for \"98b7e384b1203629ef91273cbc87d3b3909a79feb8ad46f2d7c8e1767713160d\" returns successfully" May 8 05:44:38.014141 kubelet[2596]: I0508 05:44:38.014016 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-xxcm4" podStartSLOduration=1.597234268 podStartE2EDuration="4.013982167s" podCreationTimestamp="2025-05-08 05:44:34 +0000 UTC" firstStartedPulling="2025-05-08 05:44:34.722047006 +0000 UTC m=+6.998648219" lastFinishedPulling="2025-05-08 05:44:37.138794905 +0000 UTC m=+9.415396118" observedRunningTime="2025-05-08 05:44:38.012634309 +0000 UTC m=+10.289235573" watchObservedRunningTime="2025-05-08 05:44:38.013982167 +0000 UTC m=+10.290583431" May 8 05:44:40.577391 systemd[1]: Created slice kubepods-besteffort-pod135df6c8_ac95_4184_ab1d_8b185f471b6b.slice - libcontainer container kubepods-besteffort-pod135df6c8_ac95_4184_ab1d_8b185f471b6b.slice. May 8 05:44:40.666117 systemd[1]: Created slice kubepods-besteffort-poddb0c66d8_a349_4a7c_a2b1_4bc252479a68.slice - libcontainer container kubepods-besteffort-poddb0c66d8_a349_4a7c_a2b1_4bc252479a68.slice. 
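The kube-proxy and tigera-operator entries above show the standard CRI sequence the kubelet drives through containerd: RunPodSandbox returns a sandbox id, CreateContainer registers a container config inside that sandbox and returns a container id, and StartContainer launches it ("StartContainer ... returns successfully"). A minimal sketch of the same sequence against the CRI gRPC API, reusing the metadata values from the log; the socket path and the kube-proxy image tag are assumptions, and error handling is elided.

    // Sketch of the CRI calls behind the log entries above.
    package main

    import (
        "context"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, _ := grpc.Dial("unix:///run/containerd/containerd.sock", // assumed default socket
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // 1. RunPodSandbox: returns the sandbox id (e151c805... in the log).
        sb, _ := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "kube-proxy-7hkmk",
                    Uid:       "bffafdc9-7c31-49e0-9e6b-8a44db4a3d71",
                    Namespace: "kube-system",
                    Attempt:   0,
                },
            },
        })

        // 2. CreateContainer within that sandbox: returns the container id (811a5938...).
        c, _ := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
                Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.31.0"}, // hypothetical tag
            },
        })

        // 3. StartContainer: containerd then logs "StartContainer ... returns successfully".
        rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId})
    }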
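The pod_startup_latency_tracker entries are internally consistent: podStartSLOduration appears to be the end-to-end startup duration minus the image-pull window (for kube-proxy the pull timestamps are zeroed, so the two durations are equal). Recomputing the tigera-operator numbers from the timestamps in the entry above reproduces 1.597234268s exactly; this is a worked check of that relationship, not the tracker's actual code.

    // Recompute podStartSLOduration for tigera-operator-6f6897fdc5-xxcm4
    // from the timestamps logged above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        parse := func(s string) time.Time {
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2025-05-08 05:44:34 +0000 UTC")
        running := parse("2025-05-08 05:44:38.013982167 +0000 UTC")
        pullStart := parse("2025-05-08 05:44:34.722047006 +0000 UTC")
        pullEnd := parse("2025-05-08 05:44:37.138794905 +0000 UTC")

        e2e := running.Sub(created)         // 4.013982167s == podStartE2EDuration
        slo := e2e - pullEnd.Sub(pullStart) // 1.597234268s == podStartSLOduration
        fmt.Println(e2e, slo)
    }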
May 8 05:44:40.681349 kubelet[2596]: I0508 05:44:40.681287 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/135df6c8-ac95-4184-ab1d-8b185f471b6b-typha-certs\") pod \"calico-typha-6b6f6d75b5-qwwg7\" (UID: \"135df6c8-ac95-4184-ab1d-8b185f471b6b\") " pod="calico-system/calico-typha-6b6f6d75b5-qwwg7" May 8 05:44:40.681349 kubelet[2596]: I0508 05:44:40.681330 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvjlj\" (UniqueName: \"kubernetes.io/projected/135df6c8-ac95-4184-ab1d-8b185f471b6b-kube-api-access-fvjlj\") pod \"calico-typha-6b6f6d75b5-qwwg7\" (UID: \"135df6c8-ac95-4184-ab1d-8b185f471b6b\") " pod="calico-system/calico-typha-6b6f6d75b5-qwwg7" May 8 05:44:40.681973 kubelet[2596]: I0508 05:44:40.681365 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-cni-net-dir\") pod \"calico-node-4phqr\" (UID: \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\") " pod="calico-system/calico-node-4phqr" May 8 05:44:40.681973 kubelet[2596]: I0508 05:44:40.681386 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-flexvol-driver-host\") pod \"calico-node-4phqr\" (UID: \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\") " pod="calico-system/calico-node-4phqr" May 8 05:44:40.681973 kubelet[2596]: I0508 05:44:40.681406 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db0c66d8-a349-4a7c-a2b1-4bc252479a68-tigera-ca-bundle\") pod \"calico-node-4phqr\" (UID: \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\") " pod="calico-system/calico-node-4phqr" May 8 05:44:40.681973 kubelet[2596]: I0508 05:44:40.681426 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/135df6c8-ac95-4184-ab1d-8b185f471b6b-tigera-ca-bundle\") pod \"calico-typha-6b6f6d75b5-qwwg7\" (UID: \"135df6c8-ac95-4184-ab1d-8b185f471b6b\") " pod="calico-system/calico-typha-6b6f6d75b5-qwwg7" May 8 05:44:40.681973 kubelet[2596]: I0508 05:44:40.681461 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-lib-modules\") pod \"calico-node-4phqr\" (UID: \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\") " pod="calico-system/calico-node-4phqr" May 8 05:44:40.682112 kubelet[2596]: I0508 05:44:40.681480 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-policysync\") pod \"calico-node-4phqr\" (UID: \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\") " pod="calico-system/calico-node-4phqr" May 8 05:44:40.682112 kubelet[2596]: I0508 05:44:40.681507 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqnwz\" (UniqueName: \"kubernetes.io/projected/db0c66d8-a349-4a7c-a2b1-4bc252479a68-kube-api-access-jqnwz\") pod \"calico-node-4phqr\" (UID: \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\") " 
pod="calico-system/calico-node-4phqr" May 8 05:44:40.682112 kubelet[2596]: I0508 05:44:40.681527 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-xtables-lock\") pod \"calico-node-4phqr\" (UID: \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\") " pod="calico-system/calico-node-4phqr" May 8 05:44:40.682112 kubelet[2596]: I0508 05:44:40.681544 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-cni-bin-dir\") pod \"calico-node-4phqr\" (UID: \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\") " pod="calico-system/calico-node-4phqr" May 8 05:44:40.682112 kubelet[2596]: I0508 05:44:40.681564 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-cni-log-dir\") pod \"calico-node-4phqr\" (UID: \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\") " pod="calico-system/calico-node-4phqr" May 8 05:44:40.682251 kubelet[2596]: I0508 05:44:40.681582 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/db0c66d8-a349-4a7c-a2b1-4bc252479a68-node-certs\") pod \"calico-node-4phqr\" (UID: \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\") " pod="calico-system/calico-node-4phqr" May 8 05:44:40.682251 kubelet[2596]: I0508 05:44:40.681600 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-var-run-calico\") pod \"calico-node-4phqr\" (UID: \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\") " pod="calico-system/calico-node-4phqr" May 8 05:44:40.682251 kubelet[2596]: I0508 05:44:40.681618 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-var-lib-calico\") pod \"calico-node-4phqr\" (UID: \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\") " pod="calico-system/calico-node-4phqr" May 8 05:44:40.778477 kubelet[2596]: E0508 05:44:40.777975 2596 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dxgvc" podUID="71d8c7d2-10e7-4c65-9044-49340af78942" May 8 05:44:40.789694 kubelet[2596]: E0508 05:44:40.789529 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:44:40.789694 kubelet[2596]: W0508 05:44:40.789557 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:44:40.789694 kubelet[2596]: E0508 05:44:40.789590 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 05:44:40.790062 kubelet[2596]: E0508 05:44:40.789969 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:44:40.790062 kubelet[2596]: W0508 05:44:40.789982 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:44:40.790062 kubelet[2596]: E0508 05:44:40.789998 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:44:40.792191 kubelet[2596]: E0508 05:44:40.792116 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:44:40.792191 kubelet[2596]: W0508 05:44:40.792128 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:44:40.792360 kubelet[2596]: E0508 05:44:40.792347 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:44:40.792571 kubelet[2596]: W0508 05:44:40.792428 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:44:40.792712 kubelet[2596]: E0508 05:44:40.792394 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:44:40.792758 kubelet[2596]: E0508 05:44:40.792731 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:44:40.792913 kubelet[2596]: E0508 05:44:40.792809 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:44:40.792913 kubelet[2596]: W0508 05:44:40.792822 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:44:40.792913 kubelet[2596]: E0508 05:44:40.792870 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:44:40.793284 kubelet[2596]: E0508 05:44:40.793100 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:44:40.793284 kubelet[2596]: W0508 05:44:40.793112 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:44:40.793284 kubelet[2596]: E0508 05:44:40.793128 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 05:44:40.793457 kubelet[2596]: E0508 05:44:40.793425 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:44:40.793532 kubelet[2596]: W0508 05:44:40.793519 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:44:40.793601 kubelet[2596]: E0508 05:44:40.793586 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:44:40.793870 kubelet[2596]: E0508 05:44:40.793849 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:44:40.793870 kubelet[2596]: W0508 05:44:40.793865 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:44:40.793945 kubelet[2596]: E0508 05:44:40.793885 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:44:40.797168 kubelet[2596]: E0508 05:44:40.797138 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:44:40.797168 kubelet[2596]: W0508 05:44:40.797154 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:44:40.798393 kubelet[2596]: E0508 05:44:40.798175 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:44:40.799333 kubelet[2596]: E0508 05:44:40.799313 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:44:40.799333 kubelet[2596]: W0508 05:44:40.799330 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:44:40.799410 kubelet[2596]: E0508 05:44:40.799350 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:44:40.800561 kubelet[2596]: E0508 05:44:40.800540 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:44:40.800561 kubelet[2596]: W0508 05:44:40.800554 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:44:40.800686 kubelet[2596]: E0508 05:44:40.800669 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 05:44:40.800917 kubelet[2596]: E0508 05:44:40.800898 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:44:40.800917 kubelet[2596]: W0508 05:44:40.800915 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:44:40.800990 kubelet[2596]: E0508 05:44:40.800942 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:44:40.801364 kubelet[2596]: E0508 05:44:40.801340 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:44:40.801364 kubelet[2596]: W0508 05:44:40.801356 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:44:40.801464 kubelet[2596]: E0508 05:44:40.801366 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:44:40.810466 kubelet[2596]: E0508 05:44:40.809957 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:44:40.810466 kubelet[2596]: W0508 05:44:40.809974 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:44:40.810466 kubelet[2596]: E0508 05:44:40.809990 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:44:40.818689 kubelet[2596]: E0508 05:44:40.818656 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:44:40.818689 kubelet[2596]: W0508 05:44:40.818678 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:44:40.818845 kubelet[2596]: E0508 05:44:40.818696 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:44:40.827091 kubelet[2596]: E0508 05:44:40.827015 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:44:40.827091 kubelet[2596]: W0508 05:44:40.827032 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:44:40.827091 kubelet[2596]: E0508 05:44:40.827049 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 05:44:40.883534 kubelet[2596]: E0508 05:44:40.883503 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:44:40.883534 kubelet[2596]: W0508 05:44:40.883524 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:44:40.883534 kubelet[2596]: E0508 05:44:40.883543 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:44:40.883730 kubelet[2596]: I0508 05:44:40.883577 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/71d8c7d2-10e7-4c65-9044-49340af78942-registration-dir\") pod \"csi-node-driver-dxgvc\" (UID: \"71d8c7d2-10e7-4c65-9044-49340af78942\") " pod="calico-system/csi-node-driver-dxgvc" May 8 05:44:40.883730 kubelet[2596]: E0508 05:44:40.883780 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:44:40.883730 kubelet[2596]: W0508 05:44:40.883791 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:44:40.883730 kubelet[2596]: E0508 05:44:40.883836 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:44:40.883730 kubelet[2596]: I0508 05:44:40.883855 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/71d8c7d2-10e7-4c65-9044-49340af78942-socket-dir\") pod \"csi-node-driver-dxgvc\" (UID: \"71d8c7d2-10e7-4c65-9044-49340af78942\") " pod="calico-system/csi-node-driver-dxgvc" May 8 05:44:40.884222 kubelet[2596]: E0508 05:44:40.884137 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:44:40.884222 kubelet[2596]: W0508 05:44:40.884148 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:44:40.884222 kubelet[2596]: E0508 05:44:40.884159 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 05:44:40.884222 kubelet[2596]: I0508 05:44:40.884174 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71d8c7d2-10e7-4c65-9044-49340af78942-kubelet-dir\") pod \"csi-node-driver-dxgvc\" (UID: \"71d8c7d2-10e7-4c65-9044-49340af78942\") " pod="calico-system/csi-node-driver-dxgvc" May 8 05:44:40.886106 kubelet[2596]: E0508 05:44:40.886084 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:44:40.886106 kubelet[2596]: W0508 05:44:40.886100 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:44:40.886196 kubelet[2596]: E0508 05:44:40.886151 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:44:40.886196 kubelet[2596]: I0508 05:44:40.886172 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/71d8c7d2-10e7-4c65-9044-49340af78942-varrun\") pod \"csi-node-driver-dxgvc\" (UID: \"71d8c7d2-10e7-4c65-9044-49340af78942\") " pod="calico-system/csi-node-driver-dxgvc" May 8 05:44:40.886646 containerd[1459]: time="2025-05-08T05:44:40.886611550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b6f6d75b5-qwwg7,Uid:135df6c8-ac95-4184-ab1d-8b185f471b6b,Namespace:calico-system,Attempt:0,}" May 8 05:44:40.887081 kubelet[2596]: E0508 05:44:40.887063 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:44:40.887081 kubelet[2596]: W0508 05:44:40.887077 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:44:40.887215 kubelet[2596]: E0508 05:44:40.887195 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:44:40.887428 kubelet[2596]: E0508 05:44:40.887247 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:44:40.887428 kubelet[2596]: W0508 05:44:40.887255 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:44:40.887428 kubelet[2596]: E0508 05:44:40.887343 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 05:44:40.887798 kubelet[2596]: E0508 05:44:40.887597 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:44:40.887798 kubelet[2596]: W0508 05:44:40.887622 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:44:40.887798 kubelet[2596]: E0508 05:44:40.887675 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:44:40.887915 kubelet[2596]: E0508 05:44:40.887838 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:44:40.887915 kubelet[2596]: W0508 05:44:40.887848 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:44:40.888082 kubelet[2596]: E0508 05:44:40.887930 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:44:40.888082 kubelet[2596]: I0508 05:44:40.887954 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxjjq\" (UniqueName: \"kubernetes.io/projected/71d8c7d2-10e7-4c65-9044-49340af78942-kube-api-access-pxjjq\") pod \"csi-node-driver-dxgvc\" (UID: \"71d8c7d2-10e7-4c65-9044-49340af78942\") " pod="calico-system/csi-node-driver-dxgvc" May 8 05:44:40.888155 kubelet[2596]: E0508 05:44:40.888107 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:44:40.888155 kubelet[2596]: W0508 05:44:40.888116 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:44:40.888456 kubelet[2596]: E0508 05:44:40.888210 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:44:40.888456 kubelet[2596]: E0508 05:44:40.888324 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:44:40.888456 kubelet[2596]: W0508 05:44:40.888333 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:44:40.888456 kubelet[2596]: E0508 05:44:40.888343 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [... identical FlexVolume probe-failure triplets omitted ...] May 8 05:44:40.890204 kubelet[2596]: E0508 05:44:40.890153 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" May 8 05:44:40.930418 containerd[1459]: time="2025-05-08T05:44:40.929510904Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 05:44:40.930418 containerd[1459]: time="2025-05-08T05:44:40.929877131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 05:44:40.930418 containerd[1459]: time="2025-05-08T05:44:40.929906717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:44:40.933135 containerd[1459]: time="2025-05-08T05:44:40.931644406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:44:40.961625 systemd[1]: Started cri-containerd-d51d879fae8ad3f57108abc95ec7fa21e3524736deb17c975157d40594cd8848.scope - libcontainer container d51d879fae8ad3f57108abc95ec7fa21e3524736deb17c975157d40594cd8848. May 8 05:44:40.973895 containerd[1459]: time="2025-05-08T05:44:40.972325459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4phqr,Uid:db0c66d8-a349-4a7c-a2b1-4bc252479a68,Namespace:calico-system,Attempt:0,}" May 8 05:44:40.989523 kubelet[2596]: E0508 05:44:40.989491 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:44:40.989523 kubelet[2596]: W0508 05:44:40.989512 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:44:40.990193 kubelet[2596]: E0508 05:44:40.989532 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:44:40.990193 kubelet[2596]: E0508 05:44:40.989863 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:44:40.990193 kubelet[2596]: W0508 05:44:40.989874 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:44:40.990193 kubelet[2596]: E0508 05:44:40.989890 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:44:40.990193 kubelet[2596]: E0508 05:44:40.990041 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:44:40.990193 kubelet[2596]: W0508 05:44:40.990149 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:44:40.990193 kubelet[2596]: E0508 05:44:40.990169 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [... identical FlexVolume probe-failure triplets omitted ...] May 8 05:44:41.018547 kubelet[2596]: E0508 05:44:41.018505 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
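The kubelet triplets collapsed above all come from one probe path: the FlexVolume prober finds the nodeagent~uds plugin directory, tries to run its uds driver with the init argument, the exec fails because the binary is not installed, and unmarshalling the resulting empty output as the driver's JSON status reply fails with "unexpected end of JSON input". A minimal Go reproduction of both error strings; the DriverStatus shape is a trimmed stand-in, not kubelet's real type.

    // Reproduce the two error strings from the kubelet entries above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type DriverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message,omitempty"`
    }

    func main() {
        // A bare command name makes Go search $PATH, matching the log's
        // "executable file not found in $PATH"; running the absolute
        // /opt/libexec/.../nodeagent~uds/uds path would instead fail with
        // "no such file or directory".
        out, err := exec.Command("uds", "init").CombinedOutput()
        fmt.Printf("exec error: %v, output: %q\n", err, out)

        // The driver never ran, so its output is empty, and decoding it fails.
        var st DriverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            fmt.Println("unmarshal error:", err) // "unexpected end of JSON input"
        }
    }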
May 8 05:44:41.025947 containerd[1459]: time="2025-05-08T05:44:41.025847370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 05:44:41.026211 containerd[1459]: time="2025-05-08T05:44:41.026181477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 05:44:41.027041 containerd[1459]: time="2025-05-08T05:44:41.027006254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:44:41.027537 containerd[1459]: time="2025-05-08T05:44:41.027474273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:44:41.035850 containerd[1459]: time="2025-05-08T05:44:41.035769383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b6f6d75b5-qwwg7,Uid:135df6c8-ac95-4184-ab1d-8b185f471b6b,Namespace:calico-system,Attempt:0,} returns sandbox id \"d51d879fae8ad3f57108abc95ec7fa21e3524736deb17c975157d40594cd8848\"" May 8 05:44:41.039623 containerd[1459]: time="2025-05-08T05:44:41.039529316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 8 05:44:41.057706 systemd[1]: Started cri-containerd-5f401385e50db3ba161d26f2050e1360942d1b4e5135418d02d17ac3adadaf5f.scope - libcontainer container 5f401385e50db3ba161d26f2050e1360942d1b4e5135418d02d17ac3adadaf5f. May 8 05:44:41.088583 containerd[1459]: time="2025-05-08T05:44:41.088548870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4phqr,Uid:db0c66d8-a349-4a7c-a2b1-4bc252479a68,Namespace:calico-system,Attempt:0,} returns sandbox id \"5f401385e50db3ba161d26f2050e1360942d1b4e5135418d02d17ac3adadaf5f\"" May 8 05:44:42.908639 kubelet[2596]: E0508 05:44:42.908150 2596 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dxgvc" podUID="71d8c7d2-10e7-4c65-9044-49340af78942" May 8 05:44:44.442433 containerd[1459]: time="2025-05-08T05:44:44.442379245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:44:44.443718 containerd[1459]: time="2025-05-08T05:44:44.443531095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 8 05:44:44.444959 containerd[1459]: time="2025-05-08T05:44:44.444902689Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:44:44.447537 containerd[1459]: time="2025-05-08T05:44:44.447490973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:44:44.448361 containerd[1459]: time="2025-05-08T05:44:44.448232334Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest
\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 3.408669816s" May 8 05:44:44.448361 containerd[1459]: time="2025-05-08T05:44:44.448273311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 8 05:44:44.450059 containerd[1459]: time="2025-05-08T05:44:44.450005871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 8 05:44:44.462640 containerd[1459]: time="2025-05-08T05:44:44.462609673Z" level=info msg="CreateContainer within sandbox \"d51d879fae8ad3f57108abc95ec7fa21e3524736deb17c975157d40594cd8848\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 8 05:44:44.489942 containerd[1459]: time="2025-05-08T05:44:44.489895435Z" level=info msg="CreateContainer within sandbox \"d51d879fae8ad3f57108abc95ec7fa21e3524736deb17c975157d40594cd8848\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"da205c99786250634797b160f814d00ed94c7435472d5645446807e6813b54ed\"" May 8 05:44:44.491081 containerd[1459]: time="2025-05-08T05:44:44.490559332Z" level=info msg="StartContainer for \"da205c99786250634797b160f814d00ed94c7435472d5645446807e6813b54ed\"" May 8 05:44:44.519642 systemd[1]: Started cri-containerd-da205c99786250634797b160f814d00ed94c7435472d5645446807e6813b54ed.scope - libcontainer container da205c99786250634797b160f814d00ed94c7435472d5645446807e6813b54ed. May 8 05:44:44.591521 containerd[1459]: time="2025-05-08T05:44:44.591394719Z" level=info msg="StartContainer for \"da205c99786250634797b160f814d00ed94c7435472d5645446807e6813b54ed\" returns successfully" May 8 05:44:44.909922 kubelet[2596]: E0508 05:44:44.909798 2596 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dxgvc" podUID="71d8c7d2-10e7-4c65-9044-49340af78942" May 8 05:44:45.082524 kubelet[2596]: I0508 05:44:45.082364 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6b6f6d75b5-qwwg7" podStartSLOduration=1.67190531 podStartE2EDuration="5.082330538s" podCreationTimestamp="2025-05-08 05:44:40 +0000 UTC" firstStartedPulling="2025-05-08 05:44:41.038784319 +0000 UTC m=+13.315385532" lastFinishedPulling="2025-05-08 05:44:44.449209537 +0000 UTC m=+16.725810760" observedRunningTime="2025-05-08 05:44:45.050663858 +0000 UTC m=+17.327265121" watchObservedRunningTime="2025-05-08 05:44:45.082330538 +0000 UTC m=+17.358931801" May 8 05:44:45.111732 kubelet[2596]: E0508 05:44:45.111646 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:44:45.111842 kubelet[2596]: W0508 05:44:45.111732 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:44:45.111842 kubelet[2596]: E0508 05:44:45.111809 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
May 8 05:44:45.111732 kubelet[2596]: E0508 05:44:45.111646 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 05:44:45.111842 kubelet[2596]: W0508 05:44:45.111732 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 05:44:45.111842 kubelet[2596]: E0508 05:44:45.111809 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 05:44:46.026936 kubelet[2596]: E0508 05:44:46.026886 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 05:44:46.026936 kubelet[2596]: W0508 05:44:46.026925 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 05:44:46.027819 kubelet[2596]: E0508 05:44:46.026957 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 05:44:46.128380 kubelet[2596]: E0508 05:44:46.128081 2596 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 05:44:46.128380 kubelet[2596]: W0508 05:44:46.128121 2596 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 05:44:46.128380 kubelet[2596]: E0508 05:44:46.128157 2596 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
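The three kubelet messages above repeat in bursts because the FlexVolume prober re-scans the plugin directory: for each entry under /opt/libexec/kubernetes/kubelet-plugins/volume/exec it executes the driver binary with the single argument init and JSON-decodes stdout. Calico's nodeagent~uds driver binary is only installed later by the pod2daemon-flexvol container, so at this point the exec fails, stdout is empty, and the decode fails with "unexpected end of JSON input". A minimal Go sketch of that call path follows, with simplified types; the real structs in kubelet's driver-call.go are richer:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // driverStatus is a pared-down version of the FlexVolume status
    // object; a working driver answers `init` with JSON such as
    // {"status":"Success","capabilities":{"attach":false}}.
    type driverStatus struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func probeDriver(path string) (*driverStatus, error) {
        out, execErr := exec.Command(path, "init").CombinedOutput()

        var st driverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            // With the binary missing, out is empty and this is the
            // "unexpected end of JSON input" seen in the log.
            return nil, fmt.Errorf("failed to unmarshal output %q: %v (exec error: %v)", out, err, execErr)
        }
        return &st, nil
    }

    func main() {
        _, err := probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
        if err != nil {
            fmt.Println("probe failed:", err)
        }
    }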
May 8 05:44:46.573003 containerd[1459]: time="2025-05-08T05:44:46.572941635Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 05:44:46.574489 containerd[1459]: time="2025-05-08T05:44:46.574450025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937"
May 8 05:44:46.576064 containerd[1459]: time="2025-05-08T05:44:46.575997287Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 05:44:46.578563 containerd[1459]: time="2025-05-08T05:44:46.578511613Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 05:44:46.579348 containerd[1459]: time="2025-05-08T05:44:46.579214502Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 2.129174577s"
May 8 05:44:46.579348 containerd[1459]: time="2025-05-08T05:44:46.579257252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\""
May 8 05:44:46.582416 containerd[1459]: time="2025-05-08T05:44:46.582057435Z" level=info msg="CreateContainer within sandbox \"5f401385e50db3ba161d26f2050e1360942d1b4e5135418d02d17ac3adadaf5f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
May 8 05:44:46.602012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount307469522.mount: Deactivated successfully.
May 8 05:44:46.606887 containerd[1459]: time="2025-05-08T05:44:46.606799094Z" level=info msg="CreateContainer within sandbox \"5f401385e50db3ba161d26f2050e1360942d1b4e5135418d02d17ac3adadaf5f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f1bf94e2fe5cebef41bb0325994e6f36ceffb7ec5bae9d7118190733782b015c\""
May 8 05:44:46.607742 containerd[1459]: time="2025-05-08T05:44:46.607629161Z" level=info msg="StartContainer for \"f1bf94e2fe5cebef41bb0325994e6f36ceffb7ec5bae9d7118190733782b015c\""
May 8 05:44:46.647573 systemd[1]: Started cri-containerd-f1bf94e2fe5cebef41bb0325994e6f36ceffb7ec5bae9d7118190733782b015c.scope - libcontainer container f1bf94e2fe5cebef41bb0325994e6f36ceffb7ec5bae9d7118190733782b015c.
May 8 05:44:46.679792 containerd[1459]: time="2025-05-08T05:44:46.679649364Z" level=info msg="StartContainer for \"f1bf94e2fe5cebef41bb0325994e6f36ceffb7ec5bae9d7118190733782b015c\" returns successfully"
May 8 05:44:46.692340 systemd[1]: cri-containerd-f1bf94e2fe5cebef41bb0325994e6f36ceffb7ec5bae9d7118190733782b015c.scope: Deactivated successfully.
May 8 05:44:46.716765 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1bf94e2fe5cebef41bb0325994e6f36ceffb7ec5bae9d7118190733782b015c-rootfs.mount: Deactivated successfully.
May 8 05:44:46.908942 kubelet[2596]: E0508 05:44:46.908789 2596 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dxgvc" podUID="71d8c7d2-10e7-4c65-9044-49340af78942"
May 8 05:44:47.356253 containerd[1459]: time="2025-05-08T05:44:47.355600396Z" level=info msg="shim disconnected" id=f1bf94e2fe5cebef41bb0325994e6f36ceffb7ec5bae9d7118190733782b015c namespace=k8s.io
May 8 05:44:47.356253 containerd[1459]: time="2025-05-08T05:44:47.355710693Z" level=warning msg="cleaning up after shim disconnected" id=f1bf94e2fe5cebef41bb0325994e6f36ceffb7ec5bae9d7118190733782b015c namespace=k8s.io
May 8 05:44:47.356253 containerd[1459]: time="2025-05-08T05:44:47.355733736Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 05:44:48.036978 containerd[1459]: time="2025-05-08T05:44:48.036902589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\""
May 8 05:44:48.908619 kubelet[2596]: E0508 05:44:48.908410 2596 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dxgvc" podUID="71d8c7d2-10e7-4c65-9044-49340af78942"
May 8 05:44:50.908019 kubelet[2596]: E0508 05:44:50.907674 2596 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dxgvc" podUID="71d8c7d2-10e7-4c65-9044-49340af78942"
May 8 05:44:52.908561 kubelet[2596]: E0508 05:44:52.908517 2596 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dxgvc" podUID="71d8c7d2-10e7-4c65-9044-49340af78942"
May 8 05:44:54.300510 containerd[1459]: time="2025-05-08T05:44:54.300408765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 05:44:54.302399 containerd[1459]: time="2025-05-08T05:44:54.302300224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683"
May 8 05:44:54.303798 containerd[1459]: time="2025-05-08T05:44:54.303669712Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 05:44:54.307082 containerd[1459]: time="2025-05-08T05:44:54.306982866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 05:44:54.308147 containerd[1459]: time="2025-05-08T05:44:54.307879409Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 6.270911487s"
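The identical "Error syncing pod, skipping ... NetworkPluginNotReady" lines at 05:44:46 through 05:44:52 recur on a fixed cadence because the kubelet requeues the csi-node-driver pod and retries the sync until the runtime reports NetworkReady=true. The following Go sketch is illustrative only, not kubelet code; the two-second period and the readiness flag are stand-ins:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // networkReady is a stand-in for the CRI runtime status check the
    // kubelet performs; flip it to true to stop the retries.
    var networkReady = false

    func syncPod(name string) error {
        if !networkReady {
            return errors.New("network is not ready: cni plugin not initialized")
        }
        return nil
    }

    func main() {
        tick := time.NewTicker(2 * time.Second) // assumed requeue period
        defer tick.Stop()
        for i := 0; i < 3; i++ {
            <-tick.C
            if err := syncPod("calico-system/csi-node-driver-dxgvc"); err != nil {
                fmt.Println("Error syncing pod, skipping:", err)
            }
        }
    }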
\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 6.270911487s" May 8 05:44:54.308147 containerd[1459]: time="2025-05-08T05:44:54.307912511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 8 05:44:54.311167 containerd[1459]: time="2025-05-08T05:44:54.311049444Z" level=info msg="CreateContainer within sandbox \"5f401385e50db3ba161d26f2050e1360942d1b4e5135418d02d17ac3adadaf5f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 8 05:44:54.331727 containerd[1459]: time="2025-05-08T05:44:54.331631809Z" level=info msg="CreateContainer within sandbox \"5f401385e50db3ba161d26f2050e1360942d1b4e5135418d02d17ac3adadaf5f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"96a729365e810a9352bbc6ba6e7cdee9d4c9056ed80a79a2fc14a6cc8361aef7\"" May 8 05:44:54.334041 containerd[1459]: time="2025-05-08T05:44:54.332464201Z" level=info msg="StartContainer for \"96a729365e810a9352bbc6ba6e7cdee9d4c9056ed80a79a2fc14a6cc8361aef7\"" May 8 05:44:54.372584 systemd[1]: Started cri-containerd-96a729365e810a9352bbc6ba6e7cdee9d4c9056ed80a79a2fc14a6cc8361aef7.scope - libcontainer container 96a729365e810a9352bbc6ba6e7cdee9d4c9056ed80a79a2fc14a6cc8361aef7. May 8 05:44:54.403707 containerd[1459]: time="2025-05-08T05:44:54.403662104Z" level=info msg="StartContainer for \"96a729365e810a9352bbc6ba6e7cdee9d4c9056ed80a79a2fc14a6cc8361aef7\" returns successfully" May 8 05:44:54.907814 kubelet[2596]: E0508 05:44:54.907690 2596 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dxgvc" podUID="71d8c7d2-10e7-4c65-9044-49340af78942" May 8 05:44:55.647379 containerd[1459]: time="2025-05-08T05:44:55.647077813Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 05:44:55.654619 systemd[1]: cri-containerd-96a729365e810a9352bbc6ba6e7cdee9d4c9056ed80a79a2fc14a6cc8361aef7.scope: Deactivated successfully. May 8 05:44:55.691933 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96a729365e810a9352bbc6ba6e7cdee9d4c9056ed80a79a2fc14a6cc8361aef7-rootfs.mount: Deactivated successfully. May 8 05:44:55.758487 kubelet[2596]: I0508 05:44:55.756910 2596 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 8 05:44:56.155399 systemd[1]: Created slice kubepods-burstable-pod8e593293_d978_473a_ae19_5154bba363a6.slice - libcontainer container kubepods-burstable-pod8e593293_d978_473a_ae19_5154bba363a6.slice. 
May 8 05:44:56.214739 kubelet[2596]: I0508 05:44:56.213858 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e593293-d978-473a-ae19-5154bba363a6-config-volume\") pod \"coredns-6f6b679f8f-v9cpg\" (UID: \"8e593293-d978-473a-ae19-5154bba363a6\") " pod="kube-system/coredns-6f6b679f8f-v9cpg"
May 8 05:44:56.214739 kubelet[2596]: I0508 05:44:56.214019 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcszp\" (UniqueName: \"kubernetes.io/projected/8e593293-d978-473a-ae19-5154bba363a6-kube-api-access-gcszp\") pod \"coredns-6f6b679f8f-v9cpg\" (UID: \"8e593293-d978-473a-ae19-5154bba363a6\") " pod="kube-system/coredns-6f6b679f8f-v9cpg"
May 8 05:44:56.358150 systemd[1]: Created slice kubepods-burstable-podfa364b74_657d_49c1_9a18_1f21f741d4df.slice - libcontainer container kubepods-burstable-podfa364b74_657d_49c1_9a18_1f21f741d4df.slice.
May 8 05:44:56.397602 systemd[1]: Created slice kubepods-besteffort-podd3c40b71_a013_43d8_b8d8_e3eec48008e2.slice - libcontainer container kubepods-besteffort-podd3c40b71_a013_43d8_b8d8_e3eec48008e2.slice.
May 8 05:44:56.405103 systemd[1]: Created slice kubepods-besteffort-pod0c6acdca_0e5b_443d_8401_07d05363600e.slice - libcontainer container kubepods-besteffort-pod0c6acdca_0e5b_443d_8401_07d05363600e.slice.
May 8 05:44:56.412263 systemd[1]: Created slice kubepods-besteffort-pod2af1a327_7716_4aad_bb55_55682f8973c2.slice - libcontainer container kubepods-besteffort-pod2af1a327_7716_4aad_bb55_55682f8973c2.slice.
May 8 05:44:56.415171 kubelet[2596]: I0508 05:44:56.414523 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2af1a327-7716-4aad-bb55-55682f8973c2-calico-apiserver-certs\") pod \"calico-apiserver-67fd4c9f8d-ncnc7\" (UID: \"2af1a327-7716-4aad-bb55-55682f8973c2\") " pod="calico-apiserver/calico-apiserver-67fd4c9f8d-ncnc7"
May 8 05:44:56.415171 kubelet[2596]: I0508 05:44:56.414558 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h7xq\" (UniqueName: \"kubernetes.io/projected/0c6acdca-0e5b-443d-8401-07d05363600e-kube-api-access-6h7xq\") pod \"calico-apiserver-58bf46f646-r4ddh\" (UID: \"0c6acdca-0e5b-443d-8401-07d05363600e\") " pod="calico-apiserver/calico-apiserver-58bf46f646-r4ddh"
May 8 05:44:56.415171 kubelet[2596]: I0508 05:44:56.414581 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w9ht\" (UniqueName: \"kubernetes.io/projected/2eb94ac1-e498-4e48-a950-45810cc88780-kube-api-access-6w9ht\") pod \"calico-apiserver-67fd4c9f8d-2swkl\" (UID: \"2eb94ac1-e498-4e48-a950-45810cc88780\") " pod="calico-apiserver/calico-apiserver-67fd4c9f8d-2swkl"
May 8 05:44:56.415171 kubelet[2596]: I0508 05:44:56.414602 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0c6acdca-0e5b-443d-8401-07d05363600e-calico-apiserver-certs\") pod \"calico-apiserver-58bf46f646-r4ddh\" (UID: \"0c6acdca-0e5b-443d-8401-07d05363600e\") " pod="calico-apiserver/calico-apiserver-58bf46f646-r4ddh"
May 8 05:44:56.415171 kubelet[2596]: I0508 05:44:56.414623 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qclt\" (UniqueName: \"kubernetes.io/projected/2af1a327-7716-4aad-bb55-55682f8973c2-kube-api-access-5qclt\") pod \"calico-apiserver-67fd4c9f8d-ncnc7\" (UID: \"2af1a327-7716-4aad-bb55-55682f8973c2\") " pod="calico-apiserver/calico-apiserver-67fd4c9f8d-ncnc7"
May 8 05:44:56.415358 kubelet[2596]: I0508 05:44:56.414642 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fa364b74-657d-49c1-9a18-1f21f741d4df-config-volume\") pod \"coredns-6f6b679f8f-kp8kd\" (UID: \"fa364b74-657d-49c1-9a18-1f21f741d4df\") " pod="kube-system/coredns-6f6b679f8f-kp8kd"
May 8 05:44:56.415358 kubelet[2596]: I0508 05:44:56.414662 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h92vl\" (UniqueName: \"kubernetes.io/projected/fa364b74-657d-49c1-9a18-1f21f741d4df-kube-api-access-h92vl\") pod \"coredns-6f6b679f8f-kp8kd\" (UID: \"fa364b74-657d-49c1-9a18-1f21f741d4df\") " pod="kube-system/coredns-6f6b679f8f-kp8kd"
May 8 05:44:56.415358 kubelet[2596]: I0508 05:44:56.414680 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnjhk\" (UniqueName: \"kubernetes.io/projected/d3c40b71-a013-43d8-b8d8-e3eec48008e2-kube-api-access-pnjhk\") pod \"calico-kube-controllers-787966c4fb-2244q\" (UID: \"d3c40b71-a013-43d8-b8d8-e3eec48008e2\") " pod="calico-system/calico-kube-controllers-787966c4fb-2244q"
May 8 05:44:56.415358 kubelet[2596]: I0508 05:44:56.414702 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3c40b71-a013-43d8-b8d8-e3eec48008e2-tigera-ca-bundle\") pod \"calico-kube-controllers-787966c4fb-2244q\" (UID: \"d3c40b71-a013-43d8-b8d8-e3eec48008e2\") " pod="calico-system/calico-kube-controllers-787966c4fb-2244q"
May 8 05:44:56.415358 kubelet[2596]: I0508 05:44:56.414741 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2eb94ac1-e498-4e48-a950-45810cc88780-calico-apiserver-certs\") pod \"calico-apiserver-67fd4c9f8d-2swkl\" (UID: \"2eb94ac1-e498-4e48-a950-45810cc88780\") " pod="calico-apiserver/calico-apiserver-67fd4c9f8d-2swkl"
May 8 05:44:56.419791 systemd[1]: Created slice kubepods-besteffort-pod2eb94ac1_e498_4e48_a950_45810cc88780.slice - libcontainer container kubepods-besteffort-pod2eb94ac1_e498_4e48_a950_45810cc88780.slice.
May 8 05:44:56.462544 containerd[1459]: time="2025-05-08T05:44:56.462025124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v9cpg,Uid:8e593293-d978-473a-ae19-5154bba363a6,Namespace:kube-system,Attempt:0,}" May 8 05:44:56.562636 containerd[1459]: time="2025-05-08T05:44:56.559773054Z" level=info msg="shim disconnected" id=96a729365e810a9352bbc6ba6e7cdee9d4c9056ed80a79a2fc14a6cc8361aef7 namespace=k8s.io May 8 05:44:56.562636 containerd[1459]: time="2025-05-08T05:44:56.559873092Z" level=warning msg="cleaning up after shim disconnected" id=96a729365e810a9352bbc6ba6e7cdee9d4c9056ed80a79a2fc14a6cc8361aef7 namespace=k8s.io May 8 05:44:56.562636 containerd[1459]: time="2025-05-08T05:44:56.559895383Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 05:44:56.679779 containerd[1459]: time="2025-05-08T05:44:56.679661923Z" level=error msg="Failed to destroy network for sandbox \"c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:56.680669 containerd[1459]: time="2025-05-08T05:44:56.680608858Z" level=error msg="encountered an error cleaning up failed sandbox \"c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:56.680803 containerd[1459]: time="2025-05-08T05:44:56.680661457Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v9cpg,Uid:8e593293-d978-473a-ae19-5154bba363a6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:56.680863 kubelet[2596]: E0508 05:44:56.680840 2596 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:56.680909 kubelet[2596]: E0508 05:44:56.680892 2596 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-v9cpg" May 8 05:44:56.680938 kubelet[2596]: E0508 05:44:56.680914 2596 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-6f6b679f8f-v9cpg" May 8 05:44:56.680973 kubelet[2596]: E0508 05:44:56.680954 2596 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-v9cpg_kube-system(8e593293-d978-473a-ae19-5154bba363a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-v9cpg_kube-system(8e593293-d978-473a-ae19-5154bba363a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-v9cpg" podUID="8e593293-d978-473a-ae19-5154bba363a6" May 8 05:44:56.696467 containerd[1459]: time="2025-05-08T05:44:56.695536648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kp8kd,Uid:fa364b74-657d-49c1-9a18-1f21f741d4df,Namespace:kube-system,Attempt:0,}" May 8 05:44:56.701608 containerd[1459]: time="2025-05-08T05:44:56.701569573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-787966c4fb-2244q,Uid:d3c40b71-a013-43d8-b8d8-e3eec48008e2,Namespace:calico-system,Attempt:0,}" May 8 05:44:56.709097 containerd[1459]: time="2025-05-08T05:44:56.709070442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58bf46f646-r4ddh,Uid:0c6acdca-0e5b-443d-8401-07d05363600e,Namespace:calico-apiserver,Attempt:0,}" May 8 05:44:56.719717 containerd[1459]: time="2025-05-08T05:44:56.719687917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fd4c9f8d-ncnc7,Uid:2af1a327-7716-4aad-bb55-55682f8973c2,Namespace:calico-apiserver,Attempt:0,}" May 8 05:44:56.726675 containerd[1459]: time="2025-05-08T05:44:56.726559897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fd4c9f8d-2swkl,Uid:2eb94ac1-e498-4e48-a950-45810cc88780,Namespace:calico-apiserver,Attempt:0,}" May 8 05:44:56.865569 containerd[1459]: time="2025-05-08T05:44:56.865223209Z" level=error msg="Failed to destroy network for sandbox \"f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:56.866251 containerd[1459]: time="2025-05-08T05:44:56.866097148Z" level=error msg="encountered an error cleaning up failed sandbox \"f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:56.866251 containerd[1459]: time="2025-05-08T05:44:56.866152982Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-787966c4fb-2244q,Uid:d3c40b71-a013-43d8-b8d8-e3eec48008e2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:56.866376 kubelet[2596]: E0508 05:44:56.866333 2596 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:56.866448 kubelet[2596]: E0508 05:44:56.866393 2596 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-787966c4fb-2244q" May 8 05:44:56.866448 kubelet[2596]: E0508 05:44:56.866416 2596 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-787966c4fb-2244q" May 8 05:44:56.866828 kubelet[2596]: E0508 05:44:56.866478 2596 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-787966c4fb-2244q_calico-system(d3c40b71-a013-43d8-b8d8-e3eec48008e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-787966c4fb-2244q_calico-system(d3c40b71-a013-43d8-b8d8-e3eec48008e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-787966c4fb-2244q" podUID="d3c40b71-a013-43d8-b8d8-e3eec48008e2" May 8 05:44:56.871239 containerd[1459]: time="2025-05-08T05:44:56.871125419Z" level=error msg="Failed to destroy network for sandbox \"b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:56.871960 containerd[1459]: time="2025-05-08T05:44:56.871846822Z" level=error msg="encountered an error cleaning up failed sandbox \"b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:56.871960 containerd[1459]: time="2025-05-08T05:44:56.871914399Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kp8kd,Uid:fa364b74-657d-49c1-9a18-1f21f741d4df,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:56.873458 kubelet[2596]: E0508 05:44:56.872473 2596 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:56.873458 kubelet[2596]: E0508 05:44:56.872524 2596 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-kp8kd" May 8 05:44:56.873458 kubelet[2596]: E0508 05:44:56.872557 2596 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-kp8kd" May 8 05:44:56.873600 kubelet[2596]: E0508 05:44:56.872600 2596 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-kp8kd_kube-system(fa364b74-657d-49c1-9a18-1f21f741d4df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-kp8kd_kube-system(fa364b74-657d-49c1-9a18-1f21f741d4df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-kp8kd" podUID="fa364b74-657d-49c1-9a18-1f21f741d4df" May 8 05:44:56.894299 containerd[1459]: time="2025-05-08T05:44:56.894253138Z" level=error msg="Failed to destroy network for sandbox \"463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:56.894758 containerd[1459]: time="2025-05-08T05:44:56.894570493Z" level=error msg="encountered an error cleaning up failed sandbox \"463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:56.894758 containerd[1459]: time="2025-05-08T05:44:56.894622220Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58bf46f646-r4ddh,Uid:0c6acdca-0e5b-443d-8401-07d05363600e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:56.895041 kubelet[2596]: E0508 05:44:56.894817 2596 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:56.895041 kubelet[2596]: E0508 05:44:56.894872 2596 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58bf46f646-r4ddh" May 8 05:44:56.895041 kubelet[2596]: E0508 05:44:56.894892 2596 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58bf46f646-r4ddh" May 8 05:44:56.895391 kubelet[2596]: E0508 05:44:56.894939 2596 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58bf46f646-r4ddh_calico-apiserver(0c6acdca-0e5b-443d-8401-07d05363600e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58bf46f646-r4ddh_calico-apiserver(0c6acdca-0e5b-443d-8401-07d05363600e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58bf46f646-r4ddh" podUID="0c6acdca-0e5b-443d-8401-07d05363600e" May 8 05:44:56.904161 containerd[1459]: time="2025-05-08T05:44:56.904100299Z" level=error msg="Failed to destroy network for sandbox \"5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:56.904518 containerd[1459]: time="2025-05-08T05:44:56.904427602Z" level=error msg="encountered an error cleaning up failed sandbox \"5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:56.904592 containerd[1459]: time="2025-05-08T05:44:56.904545103Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fd4c9f8d-ncnc7,Uid:2af1a327-7716-4aad-bb55-55682f8973c2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network 
for sandbox \"5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:56.905528 kubelet[2596]: E0508 05:44:56.904744 2596 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:56.905528 kubelet[2596]: E0508 05:44:56.904815 2596 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67fd4c9f8d-ncnc7" May 8 05:44:56.905528 kubelet[2596]: E0508 05:44:56.904838 2596 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67fd4c9f8d-ncnc7" May 8 05:44:56.905630 kubelet[2596]: E0508 05:44:56.904890 2596 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67fd4c9f8d-ncnc7_calico-apiserver(2af1a327-7716-4aad-bb55-55682f8973c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67fd4c9f8d-ncnc7_calico-apiserver(2af1a327-7716-4aad-bb55-55682f8973c2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67fd4c9f8d-ncnc7" podUID="2af1a327-7716-4aad-bb55-55682f8973c2" May 8 05:44:56.915626 systemd[1]: Created slice kubepods-besteffort-pod71d8c7d2_10e7_4c65_9044_49340af78942.slice - libcontainer container kubepods-besteffort-pod71d8c7d2_10e7_4c65_9044_49340af78942.slice. 
May 8 05:44:56.918495 containerd[1459]: time="2025-05-08T05:44:56.918162614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dxgvc,Uid:71d8c7d2-10e7-4c65-9044-49340af78942,Namespace:calico-system,Attempt:0,}" May 8 05:44:56.926553 containerd[1459]: time="2025-05-08T05:44:56.926506003Z" level=error msg="Failed to destroy network for sandbox \"9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:56.926862 containerd[1459]: time="2025-05-08T05:44:56.926820934Z" level=error msg="encountered an error cleaning up failed sandbox \"9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:56.926910 containerd[1459]: time="2025-05-08T05:44:56.926888671Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fd4c9f8d-2swkl,Uid:2eb94ac1-e498-4e48-a950-45810cc88780,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:56.928119 kubelet[2596]: E0508 05:44:56.927088 2596 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:56.928119 kubelet[2596]: E0508 05:44:56.927154 2596 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67fd4c9f8d-2swkl" May 8 05:44:56.928119 kubelet[2596]: E0508 05:44:56.927176 2596 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67fd4c9f8d-2swkl" May 8 05:44:56.928229 kubelet[2596]: E0508 05:44:56.927236 2596 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67fd4c9f8d-2swkl_calico-apiserver(2eb94ac1-e498-4e48-a950-45810cc88780)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67fd4c9f8d-2swkl_calico-apiserver(2eb94ac1-e498-4e48-a950-45810cc88780)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67fd4c9f8d-2swkl" podUID="2eb94ac1-e498-4e48-a950-45810cc88780" May 8 05:44:56.985275 containerd[1459]: time="2025-05-08T05:44:56.983899763Z" level=error msg="Failed to destroy network for sandbox \"c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:56.985275 containerd[1459]: time="2025-05-08T05:44:56.984880523Z" level=error msg="encountered an error cleaning up failed sandbox \"c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:56.985275 containerd[1459]: time="2025-05-08T05:44:56.984959571Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dxgvc,Uid:71d8c7d2-10e7-4c65-9044-49340af78942,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:56.985532 kubelet[2596]: E0508 05:44:56.985172 2596 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:56.985532 kubelet[2596]: E0508 05:44:56.985227 2596 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dxgvc" May 8 05:44:56.985532 kubelet[2596]: E0508 05:44:56.985249 2596 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dxgvc" May 8 05:44:56.986322 kubelet[2596]: E0508 05:44:56.985289 2596 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dxgvc_calico-system(71d8c7d2-10e7-4c65-9044-49340af78942)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dxgvc_calico-system(71d8c7d2-10e7-4c65-9044-49340af78942)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dxgvc" podUID="71d8c7d2-10e7-4c65-9044-49340af78942" May 8 05:44:57.064381 kubelet[2596]: I0508 05:44:57.063690 2596 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" May 8 05:44:57.066517 containerd[1459]: time="2025-05-08T05:44:57.065627635Z" level=info msg="StopPodSandbox for \"5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f\"" May 8 05:44:57.066517 containerd[1459]: time="2025-05-08T05:44:57.065986969Z" level=info msg="Ensure that sandbox 5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f in task-service has been cleanup successfully" May 8 05:44:57.077900 containerd[1459]: time="2025-05-08T05:44:57.077812520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 8 05:44:57.083016 kubelet[2596]: I0508 05:44:57.082827 2596 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" May 8 05:44:57.089200 containerd[1459]: time="2025-05-08T05:44:57.089110410Z" level=info msg="StopPodSandbox for \"9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be\"" May 8 05:44:57.089697 containerd[1459]: time="2025-05-08T05:44:57.089524867Z" level=info msg="Ensure that sandbox 9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be in task-service has been cleanup successfully" May 8 05:44:57.095529 kubelet[2596]: I0508 05:44:57.095100 2596 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" May 8 05:44:57.101946 containerd[1459]: time="2025-05-08T05:44:57.101582974Z" level=info msg="StopPodSandbox for \"c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca\"" May 8 05:44:57.102212 containerd[1459]: time="2025-05-08T05:44:57.102151591Z" level=info msg="Ensure that sandbox c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca in task-service has been cleanup successfully" May 8 05:44:57.109969 kubelet[2596]: I0508 05:44:57.109155 2596 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" May 8 05:44:57.112385 containerd[1459]: time="2025-05-08T05:44:57.112305996Z" level=info msg="StopPodSandbox for \"c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042\"" May 8 05:44:57.115026 containerd[1459]: time="2025-05-08T05:44:57.114969913Z" level=info msg="Ensure that sandbox c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042 in task-service has been cleanup successfully" May 8 05:44:57.126187 kubelet[2596]: I0508 05:44:57.125628 2596 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" May 8 05:44:57.127000 containerd[1459]: time="2025-05-08T05:44:57.126971012Z" level=info msg="StopPodSandbox for \"463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593\"" May 8 05:44:57.127255 containerd[1459]: time="2025-05-08T05:44:57.127236270Z" level=info msg="Ensure that sandbox 
463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593 in task-service has been cleanup successfully" May 8 05:44:57.133611 kubelet[2596]: I0508 05:44:57.133432 2596 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" May 8 05:44:57.140099 containerd[1459]: time="2025-05-08T05:44:57.139889122Z" level=info msg="StopPodSandbox for \"f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781\"" May 8 05:44:57.143161 containerd[1459]: time="2025-05-08T05:44:57.142986712Z" level=info msg="Ensure that sandbox f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781 in task-service has been cleanup successfully" May 8 05:44:57.143863 kubelet[2596]: I0508 05:44:57.143457 2596 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" May 8 05:44:57.145528 containerd[1459]: time="2025-05-08T05:44:57.145057396Z" level=info msg="StopPodSandbox for \"b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e\"" May 8 05:44:57.148102 containerd[1459]: time="2025-05-08T05:44:57.148036003Z" level=info msg="Ensure that sandbox b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e in task-service has been cleanup successfully" May 8 05:44:57.193340 containerd[1459]: time="2025-05-08T05:44:57.193292918Z" level=error msg="StopPodSandbox for \"9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be\" failed" error="failed to destroy network for sandbox \"9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:57.193859 kubelet[2596]: E0508 05:44:57.193678 2596 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" May 8 05:44:57.193859 kubelet[2596]: E0508 05:44:57.193736 2596 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be"} May 8 05:44:57.193859 kubelet[2596]: E0508 05:44:57.193797 2596 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2eb94ac1-e498-4e48-a950-45810cc88780\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 05:44:57.193859 kubelet[2596]: E0508 05:44:57.193827 2596 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2eb94ac1-e498-4e48-a950-45810cc88780\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67fd4c9f8d-2swkl" podUID="2eb94ac1-e498-4e48-a950-45810cc88780" May 8 05:44:57.200607 containerd[1459]: time="2025-05-08T05:44:57.200328835Z" level=error msg="StopPodSandbox for \"f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781\" failed" error="failed to destroy network for sandbox \"f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:57.201169 kubelet[2596]: E0508 05:44:57.200598 2596 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" May 8 05:44:57.201169 kubelet[2596]: E0508 05:44:57.200710 2596 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781"} May 8 05:44:57.201169 kubelet[2596]: E0508 05:44:57.200866 2596 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d3c40b71-a013-43d8-b8d8-e3eec48008e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 05:44:57.201169 kubelet[2596]: E0508 05:44:57.200894 2596 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d3c40b71-a013-43d8-b8d8-e3eec48008e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-787966c4fb-2244q" podUID="d3c40b71-a013-43d8-b8d8-e3eec48008e2" May 8 05:44:57.226814 containerd[1459]: time="2025-05-08T05:44:57.226753097Z" level=error msg="StopPodSandbox for \"5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f\" failed" error="failed to destroy network for sandbox \"5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:57.227125 kubelet[2596]: E0508 05:44:57.227045 2596 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" May 8 05:44:57.227670 kubelet[2596]: E0508 05:44:57.227464 2596 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f"} May 8 05:44:57.227670 kubelet[2596]: E0508 05:44:57.227542 2596 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2af1a327-7716-4aad-bb55-55682f8973c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 05:44:57.227670 kubelet[2596]: E0508 05:44:57.227569 2596 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2af1a327-7716-4aad-bb55-55682f8973c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67fd4c9f8d-ncnc7" podUID="2af1a327-7716-4aad-bb55-55682f8973c2" May 8 05:44:57.234104 containerd[1459]: time="2025-05-08T05:44:57.234061915Z" level=error msg="StopPodSandbox for \"c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042\" failed" error="failed to destroy network for sandbox \"c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:57.234503 kubelet[2596]: E0508 05:44:57.234259 2596 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" May 8 05:44:57.234503 kubelet[2596]: E0508 05:44:57.234298 2596 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042"} May 8 05:44:57.234503 kubelet[2596]: E0508 05:44:57.234337 2596 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"71d8c7d2-10e7-4c65-9044-49340af78942\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 05:44:57.234503 kubelet[2596]: E0508 05:44:57.234362 2596 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"71d8c7d2-10e7-4c65-9044-49340af78942\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dxgvc" podUID="71d8c7d2-10e7-4c65-9044-49340af78942" May 8 05:44:57.236013 containerd[1459]: time="2025-05-08T05:44:57.235695059Z" level=error msg="StopPodSandbox for \"c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca\" failed" error="failed to destroy network for sandbox \"c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:57.236062 kubelet[2596]: E0508 05:44:57.235874 2596 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" May 8 05:44:57.236062 kubelet[2596]: E0508 05:44:57.235904 2596 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca"} May 8 05:44:57.236062 kubelet[2596]: E0508 05:44:57.235928 2596 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8e593293-d978-473a-ae19-5154bba363a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 05:44:57.236062 kubelet[2596]: E0508 05:44:57.235948 2596 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8e593293-d978-473a-ae19-5154bba363a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-v9cpg" podUID="8e593293-d978-473a-ae19-5154bba363a6" May 8 05:44:57.236980 containerd[1459]: time="2025-05-08T05:44:57.236709292Z" level=error msg="StopPodSandbox for \"463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593\" failed" error="failed to destroy network for sandbox \"463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:57.237156 kubelet[2596]: E0508 05:44:57.237048 2596 log.go:32] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" May 8 05:44:57.237156 kubelet[2596]: E0508 05:44:57.237078 2596 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593"} May 8 05:44:57.237156 kubelet[2596]: E0508 05:44:57.237103 2596 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0c6acdca-0e5b-443d-8401-07d05363600e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 05:44:57.237156 kubelet[2596]: E0508 05:44:57.237135 2596 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0c6acdca-0e5b-443d-8401-07d05363600e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58bf46f646-r4ddh" podUID="0c6acdca-0e5b-443d-8401-07d05363600e" May 8 05:44:57.245410 containerd[1459]: time="2025-05-08T05:44:57.245361540Z" level=error msg="StopPodSandbox for \"b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e\" failed" error="failed to destroy network for sandbox \"b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:44:57.245623 kubelet[2596]: E0508 05:44:57.245588 2596 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" May 8 05:44:57.245672 kubelet[2596]: E0508 05:44:57.245637 2596 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e"} May 8 05:44:57.245712 kubelet[2596]: E0508 05:44:57.245674 2596 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fa364b74-657d-49c1-9a18-1f21f741d4df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 05:44:57.245772 kubelet[2596]: E0508 05:44:57.245725 2596 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fa364b74-657d-49c1-9a18-1f21f741d4df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-kp8kd" podUID="fa364b74-657d-49c1-9a18-1f21f741d4df" May 8 05:44:57.698221 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593-shm.mount: Deactivated successfully. May 8 05:44:57.698433 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e-shm.mount: Deactivated successfully. May 8 05:44:57.698644 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781-shm.mount: Deactivated successfully. May 8 05:45:05.574168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount739293282.mount: Deactivated successfully. May 8 05:45:05.628345 containerd[1459]: time="2025-05-08T05:45:05.628269957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:45:05.629813 containerd[1459]: time="2025-05-08T05:45:05.629534418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 8 05:45:05.630836 containerd[1459]: time="2025-05-08T05:45:05.630755573Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:45:05.633659 containerd[1459]: time="2025-05-08T05:45:05.633606828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:45:05.634268 containerd[1459]: time="2025-05-08T05:45:05.634225702Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 8.556342138s" May 8 05:45:05.634332 containerd[1459]: time="2025-05-08T05:45:05.634267134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 8 05:45:05.642742 containerd[1459]: time="2025-05-08T05:45:05.642694828Z" level=info msg="CreateContainer within sandbox \"5f401385e50db3ba161d26f2050e1360942d1b4e5135418d02d17ac3adadaf5f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 8 05:45:05.672233 containerd[1459]: time="2025-05-08T05:45:05.672098675Z" level=info msg="CreateContainer within sandbox \"5f401385e50db3ba161d26f2050e1360942d1b4e5135418d02d17ac3adadaf5f\" for 
&ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"540da491de1ad98b866ad2deb7dc66c840d533e1f8c0948ed1651b12c197734a\"" May 8 05:45:05.673750 containerd[1459]: time="2025-05-08T05:45:05.673716200Z" level=info msg="StartContainer for \"540da491de1ad98b866ad2deb7dc66c840d533e1f8c0948ed1651b12c197734a\"" May 8 05:45:05.704576 systemd[1]: Started cri-containerd-540da491de1ad98b866ad2deb7dc66c840d533e1f8c0948ed1651b12c197734a.scope - libcontainer container 540da491de1ad98b866ad2deb7dc66c840d533e1f8c0948ed1651b12c197734a. May 8 05:45:05.735788 containerd[1459]: time="2025-05-08T05:45:05.735689447Z" level=info msg="StartContainer for \"540da491de1ad98b866ad2deb7dc66c840d533e1f8c0948ed1651b12c197734a\" returns successfully" May 8 05:45:05.812717 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 8 05:45:05.812833 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. May 8 05:45:07.048851 systemd[1]: run-containerd-runc-k8s.io-540da491de1ad98b866ad2deb7dc66c840d533e1f8c0948ed1651b12c197734a-runc.9TjxXy.mount: Deactivated successfully. May 8 05:45:07.518489 kernel: bpftool[3998]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 8 05:45:07.861177 systemd-networkd[1373]: vxlan.calico: Link UP May 8 05:45:07.861186 systemd-networkd[1373]: vxlan.calico: Gained carrier May 8 05:45:07.912643 containerd[1459]: time="2025-05-08T05:45:07.911612773Z" level=info msg="StopPodSandbox for \"5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f\"" May 8 05:45:08.018609 kubelet[2596]: I0508 05:45:08.018561 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4phqr" podStartSLOduration=3.472926047 podStartE2EDuration="28.018545182s" podCreationTimestamp="2025-05-08 05:44:40 +0000 UTC" firstStartedPulling="2025-05-08 05:44:41.089812982 +0000 UTC m=+13.366414195" lastFinishedPulling="2025-05-08 05:45:05.635432117 +0000 UTC m=+37.912033330" observedRunningTime="2025-05-08 05:45:06.225469972 +0000 UTC m=+38.502071235" watchObservedRunningTime="2025-05-08 05:45:08.018545182 +0000 UTC m=+40.295146395" May 8 05:45:08.064546 containerd[1459]: 2025-05-08 05:45:08.018 [INFO][4047] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" May 8 05:45:08.064546 containerd[1459]: 2025-05-08 05:45:08.019 [INFO][4047] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" iface="eth0" netns="/var/run/netns/cni-c39b06bc-c8a5-68f3-9c8b-316afbcc9548" May 8 05:45:08.064546 containerd[1459]: 2025-05-08 05:45:08.019 [INFO][4047] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" iface="eth0" netns="/var/run/netns/cni-c39b06bc-c8a5-68f3-9c8b-316afbcc9548" May 8 05:45:08.064546 containerd[1459]: 2025-05-08 05:45:08.020 [INFO][4047] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" iface="eth0" netns="/var/run/netns/cni-c39b06bc-c8a5-68f3-9c8b-316afbcc9548" May 8 05:45:08.064546 containerd[1459]: 2025-05-08 05:45:08.020 [INFO][4047] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" May 8 05:45:08.064546 containerd[1459]: 2025-05-08 05:45:08.020 [INFO][4047] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" May 8 05:45:08.064546 containerd[1459]: 2025-05-08 05:45:08.049 [INFO][4055] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" HandleID="k8s-pod-network.5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0" May 8 05:45:08.064546 containerd[1459]: 2025-05-08 05:45:08.049 [INFO][4055] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:08.064546 containerd[1459]: 2025-05-08 05:45:08.049 [INFO][4055] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:45:08.064546 containerd[1459]: 2025-05-08 05:45:08.059 [WARNING][4055] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" HandleID="k8s-pod-network.5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0" May 8 05:45:08.064546 containerd[1459]: 2025-05-08 05:45:08.059 [INFO][4055] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" HandleID="k8s-pod-network.5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0" May 8 05:45:08.064546 containerd[1459]: 2025-05-08 05:45:08.060 [INFO][4055] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:45:08.064546 containerd[1459]: 2025-05-08 05:45:08.063 [INFO][4047] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" May 8 05:45:08.065031 containerd[1459]: time="2025-05-08T05:45:08.064683559Z" level=info msg="TearDown network for sandbox \"5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f\" successfully" May 8 05:45:08.065031 containerd[1459]: time="2025-05-08T05:45:08.064725483Z" level=info msg="StopPodSandbox for \"5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f\" returns successfully" May 8 05:45:08.067563 containerd[1459]: time="2025-05-08T05:45:08.066636044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fd4c9f8d-ncnc7,Uid:2af1a327-7716-4aad-bb55-55682f8973c2,Namespace:calico-apiserver,Attempt:1,}" May 8 05:45:08.068423 systemd[1]: run-netns-cni\x2dc39b06bc\x2dc8a5\x2d68f3\x2d9c8b\x2d316afbcc9548.mount: Deactivated successfully. 
May 8 05:45:08.632343 systemd-networkd[1373]: cali4e42d561552: Link UP May 8 05:45:08.633269 systemd-networkd[1373]: cali4e42d561552: Gained carrier May 8 05:45:08.799388 containerd[1459]: 2025-05-08 05:45:08.255 [INFO][4062] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0 calico-apiserver-67fd4c9f8d- calico-apiserver 2af1a327-7716-4aad-bb55-55682f8973c2 799 0 2025-05-08 05:44:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67fd4c9f8d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-n-fbb7d486d2.novalocal calico-apiserver-67fd4c9f8d-ncnc7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4e42d561552 [] []}} ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" Namespace="calico-apiserver" Pod="calico-apiserver-67fd4c9f8d-ncnc7" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-" May 8 05:45:08.799388 containerd[1459]: 2025-05-08 05:45:08.255 [INFO][4062] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" Namespace="calico-apiserver" Pod="calico-apiserver-67fd4c9f8d-ncnc7" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0" May 8 05:45:08.799388 containerd[1459]: 2025-05-08 05:45:08.488 [INFO][4109] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" HandleID="k8s-pod-network.e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0" May 8 05:45:08.799388 containerd[1459]: 2025-05-08 05:45:08.505 [INFO][4109] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" HandleID="k8s-pod-network.e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004b97d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-n-fbb7d486d2.novalocal", "pod":"calico-apiserver-67fd4c9f8d-ncnc7", "timestamp":"2025-05-08 05:45:08.488690294 +0000 UTC"}, Hostname:"ci-4081-3-3-n-fbb7d486d2.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 05:45:08.799388 containerd[1459]: 2025-05-08 05:45:08.505 [INFO][4109] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:08.799388 containerd[1459]: 2025-05-08 05:45:08.505 [INFO][4109] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
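The pod_startup_latency_tracker entry above for calico-node-4phqr encodes a simple relation: podStartSLOduration is podStartE2EDuration minus the image-pull window (lastFinishedPulling − firstStartedPulling), so the tracker charges the pod only 3.47 s of its 28.02 s wall-clock start. A sketch re-deriving the logged figures from the logged timestamps:

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the pod_startup_latency_tracker entry above.
	created := mustParse("2025-05-08 05:44:40 +0000 UTC")
	firstPull := mustParse("2025-05-08 05:44:41.089812982 +0000 UTC")
	lastPull := mustParse("2025-05-08 05:45:05.635432117 +0000 UTC")
	observed := mustParse("2025-05-08 05:45:08.018545182 +0000 UTC")

	e2e := observed.Sub(created)    // 28.018545182s = podStartE2EDuration
	pull := lastPull.Sub(firstPull) // time spent pulling the calico/node image
	slo := e2e - pull               // 3.472926047s = podStartSLOduration

	fmt.Println(e2e, pull, slo)
}
```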
May 8 05:45:08.799388 containerd[1459]: 2025-05-08 05:45:08.505 [INFO][4109] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-fbb7d486d2.novalocal' May 8 05:45:08.799388 containerd[1459]: 2025-05-08 05:45:08.508 [INFO][4109] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:08.799388 containerd[1459]: 2025-05-08 05:45:08.514 [INFO][4109] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:08.799388 containerd[1459]: 2025-05-08 05:45:08.521 [INFO][4109] ipam/ipam.go 489: Trying affinity for 192.168.47.128/26 host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:08.799388 containerd[1459]: 2025-05-08 05:45:08.524 [INFO][4109] ipam/ipam.go 155: Attempting to load block cidr=192.168.47.128/26 host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:08.799388 containerd[1459]: 2025-05-08 05:45:08.527 [INFO][4109] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:08.799388 containerd[1459]: 2025-05-08 05:45:08.527 [INFO][4109] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:08.799388 containerd[1459]: 2025-05-08 05:45:08.529 [INFO][4109] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94 May 8 05:45:08.799388 containerd[1459]: 2025-05-08 05:45:08.585 [INFO][4109] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:08.799388 containerd[1459]: 2025-05-08 05:45:08.598 [INFO][4109] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.47.129/26] block=192.168.47.128/26 handle="k8s-pod-network.e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:08.799388 containerd[1459]: 2025-05-08 05:45:08.598 [INFO][4109] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.129/26] handle="k8s-pod-network.e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:08.799388 containerd[1459]: 2025-05-08 05:45:08.598 [INFO][4109] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
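The IPAM walk above claims 192.168.47.129 because the node's affine block is 192.168.47.128/26 (64 addresses) and .128 is the block's base; the endpoint then records the claim as a /32 while the block stays a /26. A sketch of that block arithmetic with net/netip, assuming a simple first-free scan (in this log the subsequent claims do come out sequentially: .130 for coredns, .131 for csi-node-driver):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The node's affine block from the "Trying affinity for" entries above.
	block := netip.MustParsePrefix("192.168.47.128/26")
	used := map[netip.Addr]bool{block.Addr(): true} // .128 is the block's base

	// Claim the first free address, roughly what the "Attempting to assign
	// 1 addresses from block" step does in this boot.
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			used[a] = true
			fmt.Printf("claimed %s/32 from block %s\n", a, block) // 192.168.47.129/32
			break
		}
	}
}
```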
May 8 05:45:08.799388 containerd[1459]: 2025-05-08 05:45:08.598 [INFO][4109] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.129/26] IPv6=[] ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" HandleID="k8s-pod-network.e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0" May 8 05:45:08.810370 containerd[1459]: 2025-05-08 05:45:08.601 [INFO][4062] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" Namespace="calico-apiserver" Pod="calico-apiserver-67fd4c9f8d-ncnc7" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0", GenerateName:"calico-apiserver-67fd4c9f8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2af1a327-7716-4aad-bb55-55682f8973c2", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 44, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67fd4c9f8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"", Pod:"calico-apiserver-67fd4c9f8d-ncnc7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e42d561552", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:08.810370 containerd[1459]: 2025-05-08 05:45:08.601 [INFO][4062] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.47.129/32] ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" Namespace="calico-apiserver" Pod="calico-apiserver-67fd4c9f8d-ncnc7" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0" May 8 05:45:08.810370 containerd[1459]: 2025-05-08 05:45:08.601 [INFO][4062] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e42d561552 ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" Namespace="calico-apiserver" Pod="calico-apiserver-67fd4c9f8d-ncnc7" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0" May 8 05:45:08.810370 containerd[1459]: 2025-05-08 05:45:08.661 [INFO][4062] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" Namespace="calico-apiserver" Pod="calico-apiserver-67fd4c9f8d-ncnc7" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0" May 8 05:45:08.810370 
containerd[1459]: 2025-05-08 05:45:08.661 [INFO][4062] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" Namespace="calico-apiserver" Pod="calico-apiserver-67fd4c9f8d-ncnc7" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0", GenerateName:"calico-apiserver-67fd4c9f8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2af1a327-7716-4aad-bb55-55682f8973c2", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 44, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67fd4c9f8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94", Pod:"calico-apiserver-67fd4c9f8d-ncnc7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e42d561552", MAC:"62:b3:25:61:69:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:08.810370 containerd[1459]: 2025-05-08 05:45:08.792 [INFO][4062] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" Namespace="calico-apiserver" Pod="calico-apiserver-67fd4c9f8d-ncnc7" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0" May 8 05:45:08.910158 containerd[1459]: time="2025-05-08T05:45:08.910028013Z" level=info msg="StopPodSandbox for \"b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e\"" May 8 05:45:08.921075 containerd[1459]: time="2025-05-08T05:45:08.921044724Z" level=info msg="StopPodSandbox for \"c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042\"" May 8 05:45:09.262729 containerd[1459]: time="2025-05-08T05:45:09.261304703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 05:45:09.262729 containerd[1459]: time="2025-05-08T05:45:09.262649056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 05:45:09.262729 containerd[1459]: time="2025-05-08T05:45:09.262712191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:45:09.263249 containerd[1459]: time="2025-05-08T05:45:09.262806538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:45:09.296729 systemd[1]: Started cri-containerd-e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94.scope - libcontainer container e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94. May 8 05:45:09.298080 containerd[1459]: 2025-05-08 05:45:09.204 [INFO][4147] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" May 8 05:45:09.298080 containerd[1459]: 2025-05-08 05:45:09.206 [INFO][4147] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" iface="eth0" netns="/var/run/netns/cni-b76de6e1-27e6-9bb4-e98a-530082f3075c" May 8 05:45:09.298080 containerd[1459]: 2025-05-08 05:45:09.207 [INFO][4147] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" iface="eth0" netns="/var/run/netns/cni-b76de6e1-27e6-9bb4-e98a-530082f3075c" May 8 05:45:09.298080 containerd[1459]: 2025-05-08 05:45:09.207 [INFO][4147] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" iface="eth0" netns="/var/run/netns/cni-b76de6e1-27e6-9bb4-e98a-530082f3075c" May 8 05:45:09.298080 containerd[1459]: 2025-05-08 05:45:09.208 [INFO][4147] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" May 8 05:45:09.298080 containerd[1459]: 2025-05-08 05:45:09.208 [INFO][4147] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" May 8 05:45:09.298080 containerd[1459]: 2025-05-08 05:45:09.273 [INFO][4169] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" HandleID="k8s-pod-network.b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--kp8kd-eth0" May 8 05:45:09.298080 containerd[1459]: 2025-05-08 05:45:09.274 [INFO][4169] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:09.298080 containerd[1459]: 2025-05-08 05:45:09.274 [INFO][4169] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:45:09.298080 containerd[1459]: 2025-05-08 05:45:09.287 [WARNING][4169] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" HandleID="k8s-pod-network.b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--kp8kd-eth0" May 8 05:45:09.298080 containerd[1459]: 2025-05-08 05:45:09.287 [INFO][4169] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" HandleID="k8s-pod-network.b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--kp8kd-eth0" May 8 05:45:09.298080 containerd[1459]: 2025-05-08 05:45:09.292 [INFO][4169] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:45:09.298080 containerd[1459]: 2025-05-08 05:45:09.294 [INFO][4147] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" May 8 05:45:09.302819 containerd[1459]: time="2025-05-08T05:45:09.302597582Z" level=info msg="TearDown network for sandbox \"b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e\" successfully" May 8 05:45:09.302931 containerd[1459]: time="2025-05-08T05:45:09.302913529Z" level=info msg="StopPodSandbox for \"b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e\" returns successfully" May 8 05:45:09.303881 systemd[1]: run-netns-cni\x2db76de6e1\x2d27e6\x2d9bb4\x2de98a\x2d530082f3075c.mount: Deactivated successfully. May 8 05:45:09.305656 containerd[1459]: time="2025-05-08T05:45:09.305591435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kp8kd,Uid:fa364b74-657d-49c1-9a18-1f21f741d4df,Namespace:kube-system,Attempt:1,}" May 8 05:45:09.316516 containerd[1459]: 2025-05-08 05:45:09.202 [INFO][4158] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" May 8 05:45:09.316516 containerd[1459]: 2025-05-08 05:45:09.203 [INFO][4158] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" iface="eth0" netns="/var/run/netns/cni-faf2b083-2b2b-a857-d235-51a2ce1ccb05" May 8 05:45:09.316516 containerd[1459]: 2025-05-08 05:45:09.205 [INFO][4158] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" iface="eth0" netns="/var/run/netns/cni-faf2b083-2b2b-a857-d235-51a2ce1ccb05" May 8 05:45:09.316516 containerd[1459]: 2025-05-08 05:45:09.205 [INFO][4158] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" iface="eth0" netns="/var/run/netns/cni-faf2b083-2b2b-a857-d235-51a2ce1ccb05" May 8 05:45:09.316516 containerd[1459]: 2025-05-08 05:45:09.205 [INFO][4158] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" May 8 05:45:09.316516 containerd[1459]: 2025-05-08 05:45:09.205 [INFO][4158] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" May 8 05:45:09.316516 containerd[1459]: 2025-05-08 05:45:09.280 [INFO][4167] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" HandleID="k8s-pod-network.c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-csi--node--driver--dxgvc-eth0" May 8 05:45:09.316516 containerd[1459]: 2025-05-08 05:45:09.280 [INFO][4167] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:09.316516 containerd[1459]: 2025-05-08 05:45:09.292 [INFO][4167] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:45:09.316516 containerd[1459]: 2025-05-08 05:45:09.309 [WARNING][4167] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" HandleID="k8s-pod-network.c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-csi--node--driver--dxgvc-eth0" May 8 05:45:09.316516 containerd[1459]: 2025-05-08 05:45:09.309 [INFO][4167] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" HandleID="k8s-pod-network.c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-csi--node--driver--dxgvc-eth0" May 8 05:45:09.316516 containerd[1459]: 2025-05-08 05:45:09.312 [INFO][4167] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:45:09.316516 containerd[1459]: 2025-05-08 05:45:09.313 [INFO][4158] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" May 8 05:45:09.317460 containerd[1459]: time="2025-05-08T05:45:09.317018006Z" level=info msg="TearDown network for sandbox \"c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042\" successfully" May 8 05:45:09.317460 containerd[1459]: time="2025-05-08T05:45:09.317047194Z" level=info msg="StopPodSandbox for \"c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042\" returns successfully" May 8 05:45:09.317969 containerd[1459]: time="2025-05-08T05:45:09.317751048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dxgvc,Uid:71d8c7d2-10e7-4c65-9044-49340af78942,Namespace:calico-system,Attempt:1,}" May 8 05:45:09.318212 systemd-networkd[1373]: vxlan.calico: Gained IPv6LL May 8 05:45:09.324308 systemd[1]: run-netns-cni\x2dfaf2b083\x2d2b2b\x2da857\x2dd235\x2d51a2ce1ccb05.mount: Deactivated successfully. 
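Mount units like run-netns-cni\x2db76de6e1….mount are systemd's escaping of /run/netns/cni-b76de6e1-…: '/' separators become '-' and bytes outside [a-zA-Z0-9_.] become \xNN hex escapes, which is why every dash in the netns name surfaces as \x2d. A rough re-implementation covering these ASCII paths, not systemd's complete rules:

```go
package main

import (
	"fmt"
	"strings"
)

// systemdEscapePath approximates systemd's unit-name escaping for the
// ASCII paths seen in this log: strip surrounding slashes, turn '/' into
// '-', and hex-escape anything outside the safe character set.
func systemdEscapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i, r := range p {
		switch {
		case r == '/':
			b.WriteByte('-')
		case r == '.' && i == 0,
			!(r >= 'a' && r <= 'z' || r >= 'A' && r <= 'Z' || r >= '0' && r <= '9' || r == '_' || r == '.'):
			fmt.Fprintf(&b, `\x%02x`, r)
		default:
			b.WriteRune(r)
		}
	}
	return b.String()
}

func main() {
	// Matches the deactivated mount unit in the log (plus a ".mount" suffix).
	fmt.Println(systemdEscapePath("/run/netns/cni-b76de6e1-27e6-9bb4-e98a-530082f3075c"))
}
```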
May 8 05:45:09.408027 containerd[1459]: time="2025-05-08T05:45:09.407980203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fd4c9f8d-ncnc7,Uid:2af1a327-7716-4aad-bb55-55682f8973c2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94\"" May 8 05:45:09.410254 containerd[1459]: time="2025-05-08T05:45:09.410091577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 05:45:09.491770 systemd-networkd[1373]: caliecfd0431b13: Link UP May 8 05:45:09.492279 systemd-networkd[1373]: caliecfd0431b13: Gained carrier May 8 05:45:09.507973 containerd[1459]: 2025-05-08 05:45:09.396 [INFO][4216] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--kp8kd-eth0 coredns-6f6b679f8f- kube-system fa364b74-657d-49c1-9a18-1f21f741d4df 808 0 2025-05-08 05:44:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-n-fbb7d486d2.novalocal coredns-6f6b679f8f-kp8kd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliecfd0431b13 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3274004ae0d5c1918b9a067612d927d5759bf9e87943869aa6ab78d2d5f87e50" Namespace="kube-system" Pod="coredns-6f6b679f8f-kp8kd" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--kp8kd-" May 8 05:45:09.507973 containerd[1459]: 2025-05-08 05:45:09.397 [INFO][4216] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3274004ae0d5c1918b9a067612d927d5759bf9e87943869aa6ab78d2d5f87e50" Namespace="kube-system" Pod="coredns-6f6b679f8f-kp8kd" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--kp8kd-eth0" May 8 05:45:09.507973 containerd[1459]: 2025-05-08 05:45:09.438 [INFO][4244] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3274004ae0d5c1918b9a067612d927d5759bf9e87943869aa6ab78d2d5f87e50" HandleID="k8s-pod-network.3274004ae0d5c1918b9a067612d927d5759bf9e87943869aa6ab78d2d5f87e50" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--kp8kd-eth0" May 8 05:45:09.507973 containerd[1459]: 2025-05-08 05:45:09.449 [INFO][4244] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3274004ae0d5c1918b9a067612d927d5759bf9e87943869aa6ab78d2d5f87e50" HandleID="k8s-pod-network.3274004ae0d5c1918b9a067612d927d5759bf9e87943869aa6ab78d2d5f87e50" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--kp8kd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031ab60), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-n-fbb7d486d2.novalocal", "pod":"coredns-6f6b679f8f-kp8kd", "timestamp":"2025-05-08 05:45:09.438886282 +0000 UTC"}, Hostname:"ci-4081-3-3-n-fbb7d486d2.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 05:45:09.507973 containerd[1459]: 2025-05-08 05:45:09.449 [INFO][4244] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:09.507973 containerd[1459]: 2025-05-08 05:45:09.449 [INFO][4244] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
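The "Gained IPv6LL" events through this stretch (vxlan.calico above, cali4e42d561552 a little later) mean duplicate address detection finished on the interface's fe80::/64 link-local address, classically derived from the MAC by EUI-64: flip the universal/local bit and insert ff:fe in the middle. A sketch using the cali4e42d561552 endpoint MAC logged earlier; the actual link-local address is not logged, and kernels configured for stable-privacy address generation derive it differently:

```go
package main

import (
	"fmt"
	"net"
	"net/netip"
)

// eui64LinkLocal builds the classic fe80::/64 link-local address from a
// 48-bit MAC: flip bit 1 of the first octet, splice ff:fe into the middle.
func eui64LinkLocal(mac net.HardwareAddr) netip.Addr {
	var a [16]byte
	a[0], a[1] = 0xfe, 0x80 // fe80::/64
	a[8] = mac[0] ^ 0x02    // flip the universal/local bit
	a[9], a[10], a[11] = mac[1], mac[2], 0xff
	a[12], a[13], a[14], a[15] = 0xfe, mac[3], mac[4], mac[5]
	return netip.AddrFrom16(a)
}

func main() {
	// MAC from the cali4e42d561552 endpoint written to the datastore above.
	mac, _ := net.ParseMAC("62:b3:25:61:69:3a")
	fmt.Println(eui64LinkLocal(mac)) // fe80::60b3:25ff:fe61:693a
}
```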
May 8 05:45:09.507973 containerd[1459]: 2025-05-08 05:45:09.449 [INFO][4244] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-fbb7d486d2.novalocal' May 8 05:45:09.507973 containerd[1459]: 2025-05-08 05:45:09.452 [INFO][4244] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3274004ae0d5c1918b9a067612d927d5759bf9e87943869aa6ab78d2d5f87e50" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:09.507973 containerd[1459]: 2025-05-08 05:45:09.456 [INFO][4244] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:09.507973 containerd[1459]: 2025-05-08 05:45:09.465 [INFO][4244] ipam/ipam.go 489: Trying affinity for 192.168.47.128/26 host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:09.507973 containerd[1459]: 2025-05-08 05:45:09.467 [INFO][4244] ipam/ipam.go 155: Attempting to load block cidr=192.168.47.128/26 host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:09.507973 containerd[1459]: 2025-05-08 05:45:09.469 [INFO][4244] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:09.507973 containerd[1459]: 2025-05-08 05:45:09.471 [INFO][4244] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.3274004ae0d5c1918b9a067612d927d5759bf9e87943869aa6ab78d2d5f87e50" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:09.507973 containerd[1459]: 2025-05-08 05:45:09.473 [INFO][4244] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3274004ae0d5c1918b9a067612d927d5759bf9e87943869aa6ab78d2d5f87e50 May 8 05:45:09.507973 containerd[1459]: 2025-05-08 05:45:09.478 [INFO][4244] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.3274004ae0d5c1918b9a067612d927d5759bf9e87943869aa6ab78d2d5f87e50" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:09.507973 containerd[1459]: 2025-05-08 05:45:09.485 [INFO][4244] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.47.130/26] block=192.168.47.128/26 handle="k8s-pod-network.3274004ae0d5c1918b9a067612d927d5759bf9e87943869aa6ab78d2d5f87e50" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:09.507973 containerd[1459]: 2025-05-08 05:45:09.485 [INFO][4244] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.130/26] handle="k8s-pod-network.3274004ae0d5c1918b9a067612d927d5759bf9e87943869aa6ab78d2d5f87e50" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:09.507973 containerd[1459]: 2025-05-08 05:45:09.485 [INFO][4244] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
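Workload IDs such as ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--kp8kd-eth0 look mangled but follow a reversible scheme: node, orchestrator, pod, and interface joined with '-', with literal dashes inside each component doubled. This is inferred from the logged names rather than taken from Calico's source:

```go
package main

import (
	"fmt"
	"strings"
)

// flattenEndpointName reproduces the pattern visible in the Workload and
// HandleID fields above: components joined with '-', with any '-' inside a
// component doubled so the flattened name stays unambiguous.
func flattenEndpointName(node, orch, pod, iface string) string {
	esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
	return strings.Join([]string{esc(node), esc(orch), esc(pod), esc(iface)}, "-")
}

func main() {
	fmt.Println(flattenEndpointName(
		"ci-4081-3-3-n-fbb7d486d2.novalocal", "k8s", "coredns-6f6b679f8f-kp8kd", "eth0"))
	// ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--kp8kd-eth0
}
```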
May 8 05:45:09.507973 containerd[1459]: 2025-05-08 05:45:09.485 [INFO][4244] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.130/26] IPv6=[] ContainerID="3274004ae0d5c1918b9a067612d927d5759bf9e87943869aa6ab78d2d5f87e50" HandleID="k8s-pod-network.3274004ae0d5c1918b9a067612d927d5759bf9e87943869aa6ab78d2d5f87e50" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--kp8kd-eth0" May 8 05:45:09.509041 containerd[1459]: 2025-05-08 05:45:09.487 [INFO][4216] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3274004ae0d5c1918b9a067612d927d5759bf9e87943869aa6ab78d2d5f87e50" Namespace="kube-system" Pod="coredns-6f6b679f8f-kp8kd" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--kp8kd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--kp8kd-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"fa364b74-657d-49c1-9a18-1f21f741d4df", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 44, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"", Pod:"coredns-6f6b679f8f-kp8kd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliecfd0431b13", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:09.509041 containerd[1459]: 2025-05-08 05:45:09.487 [INFO][4216] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.47.130/32] ContainerID="3274004ae0d5c1918b9a067612d927d5759bf9e87943869aa6ab78d2d5f87e50" Namespace="kube-system" Pod="coredns-6f6b679f8f-kp8kd" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--kp8kd-eth0" May 8 05:45:09.509041 containerd[1459]: 2025-05-08 05:45:09.487 [INFO][4216] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliecfd0431b13 ContainerID="3274004ae0d5c1918b9a067612d927d5759bf9e87943869aa6ab78d2d5f87e50" Namespace="kube-system" Pod="coredns-6f6b679f8f-kp8kd" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--kp8kd-eth0" May 8 05:45:09.509041 containerd[1459]: 2025-05-08 05:45:09.489 [INFO][4216] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3274004ae0d5c1918b9a067612d927d5759bf9e87943869aa6ab78d2d5f87e50" 
Namespace="kube-system" Pod="coredns-6f6b679f8f-kp8kd" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--kp8kd-eth0" May 8 05:45:09.509041 containerd[1459]: 2025-05-08 05:45:09.490 [INFO][4216] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3274004ae0d5c1918b9a067612d927d5759bf9e87943869aa6ab78d2d5f87e50" Namespace="kube-system" Pod="coredns-6f6b679f8f-kp8kd" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--kp8kd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--kp8kd-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"fa364b74-657d-49c1-9a18-1f21f741d4df", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 44, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"3274004ae0d5c1918b9a067612d927d5759bf9e87943869aa6ab78d2d5f87e50", Pod:"coredns-6f6b679f8f-kp8kd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliecfd0431b13", MAC:"4e:3c:18:fa:ee:c5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:09.509041 containerd[1459]: 2025-05-08 05:45:09.505 [INFO][4216] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3274004ae0d5c1918b9a067612d927d5759bf9e87943869aa6ab78d2d5f87e50" Namespace="kube-system" Pod="coredns-6f6b679f8f-kp8kd" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--kp8kd-eth0" May 8 05:45:09.536102 containerd[1459]: time="2025-05-08T05:45:09.535277678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 05:45:09.536229 containerd[1459]: time="2025-05-08T05:45:09.535966303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 05:45:09.536229 containerd[1459]: time="2025-05-08T05:45:09.535988677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:45:09.536229 containerd[1459]: time="2025-05-08T05:45:09.536067734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:45:09.553627 systemd[1]: Started cri-containerd-3274004ae0d5c1918b9a067612d927d5759bf9e87943869aa6ab78d2d5f87e50.scope - libcontainer container 3274004ae0d5c1918b9a067612d927d5759bf9e87943869aa6ab78d2d5f87e50. May 8 05:45:09.605926 systemd-networkd[1373]: calif143a07a0c2: Link UP May 8 05:45:09.607263 systemd-networkd[1373]: calif143a07a0c2: Gained carrier May 8 05:45:09.609218 containerd[1459]: time="2025-05-08T05:45:09.609173209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kp8kd,Uid:fa364b74-657d-49c1-9a18-1f21f741d4df,Namespace:kube-system,Attempt:1,} returns sandbox id \"3274004ae0d5c1918b9a067612d927d5759bf9e87943869aa6ab78d2d5f87e50\"" May 8 05:45:09.617163 containerd[1459]: time="2025-05-08T05:45:09.617013234Z" level=info msg="CreateContainer within sandbox \"3274004ae0d5c1918b9a067612d927d5759bf9e87943869aa6ab78d2d5f87e50\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 05:45:09.628231 containerd[1459]: 2025-05-08 05:45:09.417 [INFO][4227] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-csi--node--driver--dxgvc-eth0 csi-node-driver- calico-system 71d8c7d2-10e7-4c65-9044-49340af78942 807 0 2025-05-08 05:44:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-3-n-fbb7d486d2.novalocal csi-node-driver-dxgvc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif143a07a0c2 [] []}} ContainerID="9961621c6875786de0dc4cfc79c25dcdaff3abdb614d49cb10eb26aa8aa87121" Namespace="calico-system" Pod="csi-node-driver-dxgvc" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-csi--node--driver--dxgvc-" May 8 05:45:09.628231 containerd[1459]: 2025-05-08 05:45:09.418 [INFO][4227] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9961621c6875786de0dc4cfc79c25dcdaff3abdb614d49cb10eb26aa8aa87121" Namespace="calico-system" Pod="csi-node-driver-dxgvc" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-csi--node--driver--dxgvc-eth0" May 8 05:45:09.628231 containerd[1459]: 2025-05-08 05:45:09.458 [INFO][4253] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9961621c6875786de0dc4cfc79c25dcdaff3abdb614d49cb10eb26aa8aa87121" HandleID="k8s-pod-network.9961621c6875786de0dc4cfc79c25dcdaff3abdb614d49cb10eb26aa8aa87121" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-csi--node--driver--dxgvc-eth0" May 8 05:45:09.628231 containerd[1459]: 2025-05-08 05:45:09.468 [INFO][4253] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9961621c6875786de0dc4cfc79c25dcdaff3abdb614d49cb10eb26aa8aa87121" HandleID="k8s-pod-network.9961621c6875786de0dc4cfc79c25dcdaff3abdb614d49cb10eb26aa8aa87121" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-csi--node--driver--dxgvc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050e00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-fbb7d486d2.novalocal", "pod":"csi-node-driver-dxgvc", "timestamp":"2025-05-08 05:45:09.458581959 +0000 UTC"}, Hostname:"ci-4081-3-3-n-fbb7d486d2.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 05:45:09.628231 containerd[1459]: 2025-05-08 05:45:09.468 [INFO][4253] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:09.628231 containerd[1459]: 2025-05-08 05:45:09.485 [INFO][4253] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:45:09.628231 containerd[1459]: 2025-05-08 05:45:09.485 [INFO][4253] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-fbb7d486d2.novalocal' May 8 05:45:09.628231 containerd[1459]: 2025-05-08 05:45:09.554 [INFO][4253] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9961621c6875786de0dc4cfc79c25dcdaff3abdb614d49cb10eb26aa8aa87121" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:09.628231 containerd[1459]: 2025-05-08 05:45:09.560 [INFO][4253] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:09.628231 containerd[1459]: 2025-05-08 05:45:09.566 [INFO][4253] ipam/ipam.go 489: Trying affinity for 192.168.47.128/26 host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:09.628231 containerd[1459]: 2025-05-08 05:45:09.568 [INFO][4253] ipam/ipam.go 155: Attempting to load block cidr=192.168.47.128/26 host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:09.628231 containerd[1459]: 2025-05-08 05:45:09.572 [INFO][4253] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:09.628231 containerd[1459]: 2025-05-08 05:45:09.572 [INFO][4253] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.9961621c6875786de0dc4cfc79c25dcdaff3abdb614d49cb10eb26aa8aa87121" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:09.628231 containerd[1459]: 2025-05-08 05:45:09.577 [INFO][4253] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9961621c6875786de0dc4cfc79c25dcdaff3abdb614d49cb10eb26aa8aa87121 May 8 05:45:09.628231 containerd[1459]: 2025-05-08 05:45:09.584 [INFO][4253] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.9961621c6875786de0dc4cfc79c25dcdaff3abdb614d49cb10eb26aa8aa87121" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:09.628231 containerd[1459]: 2025-05-08 05:45:09.597 [INFO][4253] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.47.131/26] block=192.168.47.128/26 handle="k8s-pod-network.9961621c6875786de0dc4cfc79c25dcdaff3abdb614d49cb10eb26aa8aa87121" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:09.628231 containerd[1459]: 2025-05-08 05:45:09.597 [INFO][4253] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.131/26] handle="k8s-pod-network.9961621c6875786de0dc4cfc79c25dcdaff3abdb614d49cb10eb26aa8aa87121" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:09.628231 containerd[1459]: 2025-05-08 05:45:09.597 [INFO][4253] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
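The coredns endpoint dump a few entries back prints container ports in hex: Port:0x35 twice (53/udp and 53/tcp for DNS) and Port:0x23c1 (9153/tcp, the conventional coredns Prometheus metrics port). Trivially checked:

```go
package main

import "fmt"

func main() {
	// Hex port values copied from the WorkloadEndpointPort entries above.
	ports := map[string]uint16{"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}
	for name, p := range ports {
		fmt.Printf("%s -> %d\n", name, p) // dns -> 53, dns-tcp -> 53, metrics -> 9153
	}
}
```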
May 8 05:45:09.628231 containerd[1459]: 2025-05-08 05:45:09.598 [INFO][4253] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.131/26] IPv6=[] ContainerID="9961621c6875786de0dc4cfc79c25dcdaff3abdb614d49cb10eb26aa8aa87121" HandleID="k8s-pod-network.9961621c6875786de0dc4cfc79c25dcdaff3abdb614d49cb10eb26aa8aa87121" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-csi--node--driver--dxgvc-eth0" May 8 05:45:09.629629 containerd[1459]: 2025-05-08 05:45:09.601 [INFO][4227] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9961621c6875786de0dc4cfc79c25dcdaff3abdb614d49cb10eb26aa8aa87121" Namespace="calico-system" Pod="csi-node-driver-dxgvc" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-csi--node--driver--dxgvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-csi--node--driver--dxgvc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71d8c7d2-10e7-4c65-9044-49340af78942", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 44, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"", Pod:"csi-node-driver-dxgvc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif143a07a0c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:09.629629 containerd[1459]: 2025-05-08 05:45:09.601 [INFO][4227] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.47.131/32] ContainerID="9961621c6875786de0dc4cfc79c25dcdaff3abdb614d49cb10eb26aa8aa87121" Namespace="calico-system" Pod="csi-node-driver-dxgvc" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-csi--node--driver--dxgvc-eth0" May 8 05:45:09.629629 containerd[1459]: 2025-05-08 05:45:09.601 [INFO][4227] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif143a07a0c2 ContainerID="9961621c6875786de0dc4cfc79c25dcdaff3abdb614d49cb10eb26aa8aa87121" Namespace="calico-system" Pod="csi-node-driver-dxgvc" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-csi--node--driver--dxgvc-eth0" May 8 05:45:09.629629 containerd[1459]: 2025-05-08 05:45:09.606 [INFO][4227] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9961621c6875786de0dc4cfc79c25dcdaff3abdb614d49cb10eb26aa8aa87121" Namespace="calico-system" Pod="csi-node-driver-dxgvc" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-csi--node--driver--dxgvc-eth0" May 8 05:45:09.629629 containerd[1459]: 2025-05-08 05:45:09.606 [INFO][4227] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9961621c6875786de0dc4cfc79c25dcdaff3abdb614d49cb10eb26aa8aa87121" Namespace="calico-system" Pod="csi-node-driver-dxgvc" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-csi--node--driver--dxgvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-csi--node--driver--dxgvc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71d8c7d2-10e7-4c65-9044-49340af78942", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 44, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"9961621c6875786de0dc4cfc79c25dcdaff3abdb614d49cb10eb26aa8aa87121", Pod:"csi-node-driver-dxgvc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif143a07a0c2", MAC:"92:41:b8:9a:8a:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:09.629629 containerd[1459]: 2025-05-08 05:45:09.624 [INFO][4227] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9961621c6875786de0dc4cfc79c25dcdaff3abdb614d49cb10eb26aa8aa87121" Namespace="calico-system" Pod="csi-node-driver-dxgvc" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-csi--node--driver--dxgvc-eth0" May 8 05:45:09.650982 containerd[1459]: time="2025-05-08T05:45:09.650824219Z" level=info msg="CreateContainer within sandbox \"3274004ae0d5c1918b9a067612d927d5759bf9e87943869aa6ab78d2d5f87e50\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"14bbf9573c1b8f8d1e3e9269c46535ebb0d2f397516396a3874219fa782c2e2f\"" May 8 05:45:09.652805 containerd[1459]: time="2025-05-08T05:45:09.651925752Z" level=info msg="StartContainer for \"14bbf9573c1b8f8d1e3e9269c46535ebb0d2f397516396a3874219fa782c2e2f\"" May 8 05:45:09.665932 containerd[1459]: time="2025-05-08T05:45:09.665545278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 05:45:09.665932 containerd[1459]: time="2025-05-08T05:45:09.665598283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 05:45:09.665932 containerd[1459]: time="2025-05-08T05:45:09.665628001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:45:09.667694 containerd[1459]: time="2025-05-08T05:45:09.667633175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:45:09.695593 systemd[1]: Started cri-containerd-9961621c6875786de0dc4cfc79c25dcdaff3abdb614d49cb10eb26aa8aa87121.scope - libcontainer container 9961621c6875786de0dc4cfc79c25dcdaff3abdb614d49cb10eb26aa8aa87121. May 8 05:45:09.699921 systemd[1]: Started cri-containerd-14bbf9573c1b8f8d1e3e9269c46535ebb0d2f397516396a3874219fa782c2e2f.scope - libcontainer container 14bbf9573c1b8f8d1e3e9269c46535ebb0d2f397516396a3874219fa782c2e2f. May 8 05:45:09.730860 containerd[1459]: time="2025-05-08T05:45:09.730688597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dxgvc,Uid:71d8c7d2-10e7-4c65-9044-49340af78942,Namespace:calico-system,Attempt:1,} returns sandbox id \"9961621c6875786de0dc4cfc79c25dcdaff3abdb614d49cb10eb26aa8aa87121\"" May 8 05:45:09.745654 containerd[1459]: time="2025-05-08T05:45:09.745563150Z" level=info msg="StartContainer for \"14bbf9573c1b8f8d1e3e9269c46535ebb0d2f397516396a3874219fa782c2e2f\" returns successfully" May 8 05:45:09.910866 containerd[1459]: time="2025-05-08T05:45:09.909773957Z" level=info msg="StopPodSandbox for \"463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593\"" May 8 05:45:09.913935 containerd[1459]: time="2025-05-08T05:45:09.911672729Z" level=info msg="StopPodSandbox for \"9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be\"" May 8 05:45:10.055970 containerd[1459]: 2025-05-08 05:45:10.005 [INFO][4435] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" May 8 05:45:10.055970 containerd[1459]: 2025-05-08 05:45:10.006 [INFO][4435] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" iface="eth0" netns="/var/run/netns/cni-c1e5e69f-c979-c03d-f981-f269287fd0f7" May 8 05:45:10.055970 containerd[1459]: 2025-05-08 05:45:10.006 [INFO][4435] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" iface="eth0" netns="/var/run/netns/cni-c1e5e69f-c979-c03d-f981-f269287fd0f7" May 8 05:45:10.055970 containerd[1459]: 2025-05-08 05:45:10.006 [INFO][4435] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" iface="eth0" netns="/var/run/netns/cni-c1e5e69f-c979-c03d-f981-f269287fd0f7" May 8 05:45:10.055970 containerd[1459]: 2025-05-08 05:45:10.006 [INFO][4435] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" May 8 05:45:10.055970 containerd[1459]: 2025-05-08 05:45:10.006 [INFO][4435] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" May 8 05:45:10.055970 containerd[1459]: 2025-05-08 05:45:10.042 [INFO][4449] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" HandleID="k8s-pod-network.9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:10.055970 containerd[1459]: 2025-05-08 05:45:10.043 [INFO][4449] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
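The teardowns interleaved through here follow the same release protocol seen at 05:45:08-09: acquire the host-wide IPAM lock, try to release by handle ID, fall back to the workload ID, and log "Asked to release address but it doesn't exist. Ignoring" as a warning rather than an error, so repeated CNI DELs stay idempotent. A minimal in-memory sketch of that pattern; the names are illustrative, not Calico's API:

```go
package main

import (
	"fmt"
	"sync"
)

type ipamStore struct {
	mu       sync.Mutex        // stands in for the host-wide IPAM lock
	byHandle map[string]string // allocation key -> assigned IP
}

// release tries the handle ID first, then the workload ID, and treats a
// missing allocation as a warning so a repeated DEL is a no-op.
func (s *ipamStore) release(handleID, workloadID string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	for _, key := range []string{handleID, workloadID} {
		if ip, ok := s.byHandle[key]; ok {
			delete(s.byHandle, key)
			fmt.Printf("released %s (key %s)\n", ip, key)
			return
		}
	}
	fmt.Println("WARNING: asked to release address but it doesn't exist; ignoring")
}

func main() {
	s := &ipamStore{byHandle: map[string]string{}}
	// The sandbox was already torn down once at 05:44:57, so this second DEL
	// finds nothing, mirroring the WARNING entries in the log.
	s.release("k8s-pod-network.9efa1582a7db", "calico-apiserver-67fd4c9f8d-2swkl")
}
```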
May 8 05:45:10.055970 containerd[1459]: 2025-05-08 05:45:10.043 [INFO][4449] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:45:10.055970 containerd[1459]: 2025-05-08 05:45:10.049 [WARNING][4449] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" HandleID="k8s-pod-network.9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:10.055970 containerd[1459]: 2025-05-08 05:45:10.049 [INFO][4449] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" HandleID="k8s-pod-network.9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:10.055970 containerd[1459]: 2025-05-08 05:45:10.051 [INFO][4449] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:45:10.055970 containerd[1459]: 2025-05-08 05:45:10.052 [INFO][4435] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" May 8 05:45:10.056843 containerd[1459]: time="2025-05-08T05:45:10.056791698Z" level=info msg="TearDown network for sandbox \"9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be\" successfully" May 8 05:45:10.056843 containerd[1459]: time="2025-05-08T05:45:10.056820125Z" level=info msg="StopPodSandbox for \"9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be\" returns successfully" May 8 05:45:10.058092 containerd[1459]: time="2025-05-08T05:45:10.058043845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fd4c9f8d-2swkl,Uid:2eb94ac1-e498-4e48-a950-45810cc88780,Namespace:calico-apiserver,Attempt:1,}" May 8 05:45:10.073908 containerd[1459]: 2025-05-08 05:45:10.015 [INFO][4436] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" May 8 05:45:10.073908 containerd[1459]: 2025-05-08 05:45:10.017 [INFO][4436] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" iface="eth0" netns="/var/run/netns/cni-d25a9075-08bb-90e8-e119-ebeba5c445fa" May 8 05:45:10.073908 containerd[1459]: 2025-05-08 05:45:10.017 [INFO][4436] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" iface="eth0" netns="/var/run/netns/cni-d25a9075-08bb-90e8-e119-ebeba5c445fa" May 8 05:45:10.073908 containerd[1459]: 2025-05-08 05:45:10.019 [INFO][4436] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" iface="eth0" netns="/var/run/netns/cni-d25a9075-08bb-90e8-e119-ebeba5c445fa" May 8 05:45:10.073908 containerd[1459]: 2025-05-08 05:45:10.019 [INFO][4436] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" May 8 05:45:10.073908 containerd[1459]: 2025-05-08 05:45:10.019 [INFO][4436] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" May 8 05:45:10.073908 containerd[1459]: 2025-05-08 05:45:10.058 [INFO][4454] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" HandleID="k8s-pod-network.463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--r4ddh-eth0" May 8 05:45:10.073908 containerd[1459]: 2025-05-08 05:45:10.058 [INFO][4454] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:10.073908 containerd[1459]: 2025-05-08 05:45:10.059 [INFO][4454] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:45:10.073908 containerd[1459]: 2025-05-08 05:45:10.068 [WARNING][4454] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" HandleID="k8s-pod-network.463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--r4ddh-eth0" May 8 05:45:10.073908 containerd[1459]: 2025-05-08 05:45:10.068 [INFO][4454] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" HandleID="k8s-pod-network.463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--r4ddh-eth0" May 8 05:45:10.073908 containerd[1459]: 2025-05-08 05:45:10.070 [INFO][4454] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:45:10.073908 containerd[1459]: 2025-05-08 05:45:10.072 [INFO][4436] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" May 8 05:45:10.074474 containerd[1459]: time="2025-05-08T05:45:10.074045807Z" level=info msg="TearDown network for sandbox \"463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593\" successfully" May 8 05:45:10.074474 containerd[1459]: time="2025-05-08T05:45:10.074070125Z" level=info msg="StopPodSandbox for \"463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593\" returns successfully" May 8 05:45:10.074811 containerd[1459]: time="2025-05-08T05:45:10.074781603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58bf46f646-r4ddh,Uid:0c6acdca-0e5b-443d-8401-07d05363600e,Namespace:calico-apiserver,Attempt:1,}" May 8 05:45:10.213647 systemd-networkd[1373]: cali4e42d561552: Gained IPv6LL May 8 05:45:10.241548 systemd-networkd[1373]: cali128c03178f2: Link UP May 8 05:45:10.242852 systemd-networkd[1373]: cali128c03178f2: Gained carrier May 8 05:45:10.257935 kubelet[2596]: I0508 05:45:10.257774 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-kp8kd" podStartSLOduration=36.257649463999996 podStartE2EDuration="36.257649464s" podCreationTimestamp="2025-05-08 05:44:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 05:45:10.227278232 +0000 UTC m=+42.503879445" watchObservedRunningTime="2025-05-08 05:45:10.257649464 +0000 UTC m=+42.534250677" May 8 05:45:10.264535 containerd[1459]: 2025-05-08 05:45:10.123 [INFO][4463] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0 calico-apiserver-67fd4c9f8d- calico-apiserver 2eb94ac1-e498-4e48-a950-45810cc88780 824 0 2025-05-08 05:44:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67fd4c9f8d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-n-fbb7d486d2.novalocal calico-apiserver-67fd4c9f8d-2swkl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali128c03178f2 [] []}} ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" Namespace="calico-apiserver" Pod="calico-apiserver-67fd4c9f8d-2swkl" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-" May 8 05:45:10.264535 containerd[1459]: 2025-05-08 05:45:10.123 [INFO][4463] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" Namespace="calico-apiserver" Pod="calico-apiserver-67fd4c9f8d-2swkl" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:10.264535 containerd[1459]: 2025-05-08 05:45:10.167 [INFO][4487] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" HandleID="k8s-pod-network.da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:10.264535 containerd[1459]: 2025-05-08 05:45:10.183 [INFO][4487] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" HandleID="k8s-pod-network.da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291310), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-n-fbb7d486d2.novalocal", "pod":"calico-apiserver-67fd4c9f8d-2swkl", "timestamp":"2025-05-08 05:45:10.167242315 +0000 UTC"}, Hostname:"ci-4081-3-3-n-fbb7d486d2.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 05:45:10.264535 containerd[1459]: 2025-05-08 05:45:10.183 [INFO][4487] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:10.264535 containerd[1459]: 2025-05-08 05:45:10.183 [INFO][4487] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:45:10.264535 containerd[1459]: 2025-05-08 05:45:10.183 [INFO][4487] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-fbb7d486d2.novalocal' May 8 05:45:10.264535 containerd[1459]: 2025-05-08 05:45:10.186 [INFO][4487] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:10.264535 containerd[1459]: 2025-05-08 05:45:10.190 [INFO][4487] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:10.264535 containerd[1459]: 2025-05-08 05:45:10.203 [INFO][4487] ipam/ipam.go 489: Trying affinity for 192.168.47.128/26 host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:10.264535 containerd[1459]: 2025-05-08 05:45:10.207 [INFO][4487] ipam/ipam.go 155: Attempting to load block cidr=192.168.47.128/26 host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:10.264535 containerd[1459]: 2025-05-08 05:45:10.212 [INFO][4487] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:10.264535 containerd[1459]: 2025-05-08 05:45:10.212 [INFO][4487] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:10.264535 containerd[1459]: 2025-05-08 05:45:10.214 [INFO][4487] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2 May 8 05:45:10.264535 containerd[1459]: 2025-05-08 05:45:10.221 [INFO][4487] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:10.264535 containerd[1459]: 2025-05-08 05:45:10.234 [INFO][4487] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.47.132/26] block=192.168.47.128/26 handle="k8s-pod-network.da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:10.264535 containerd[1459]: 2025-05-08 05:45:10.234 [INFO][4487] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.132/26] 
handle="k8s-pod-network.da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:10.264535 containerd[1459]: 2025-05-08 05:45:10.234 [INFO][4487] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:45:10.264535 containerd[1459]: 2025-05-08 05:45:10.234 [INFO][4487] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.132/26] IPv6=[] ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" HandleID="k8s-pod-network.da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:10.265287 containerd[1459]: 2025-05-08 05:45:10.237 [INFO][4463] cni-plugin/k8s.go 386: Populated endpoint ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" Namespace="calico-apiserver" Pod="calico-apiserver-67fd4c9f8d-2swkl" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0", GenerateName:"calico-apiserver-67fd4c9f8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2eb94ac1-e498-4e48-a950-45810cc88780", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 44, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67fd4c9f8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"", Pod:"calico-apiserver-67fd4c9f8d-2swkl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali128c03178f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:10.265287 containerd[1459]: 2025-05-08 05:45:10.237 [INFO][4463] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.47.132/32] ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" Namespace="calico-apiserver" Pod="calico-apiserver-67fd4c9f8d-2swkl" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:10.265287 containerd[1459]: 2025-05-08 05:45:10.237 [INFO][4463] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali128c03178f2 ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" Namespace="calico-apiserver" Pod="calico-apiserver-67fd4c9f8d-2swkl" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:10.265287 containerd[1459]: 2025-05-08 05:45:10.242 [INFO][4463] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" Namespace="calico-apiserver" Pod="calico-apiserver-67fd4c9f8d-2swkl" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:10.265287 containerd[1459]: 2025-05-08 05:45:10.245 [INFO][4463] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" Namespace="calico-apiserver" Pod="calico-apiserver-67fd4c9f8d-2swkl" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0", GenerateName:"calico-apiserver-67fd4c9f8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2eb94ac1-e498-4e48-a950-45810cc88780", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 44, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67fd4c9f8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2", Pod:"calico-apiserver-67fd4c9f8d-2swkl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali128c03178f2", MAC:"72:88:05:11:3e:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:10.265287 containerd[1459]: 2025-05-08 05:45:10.262 [INFO][4463] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" Namespace="calico-apiserver" Pod="calico-apiserver-67fd4c9f8d-2swkl" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:10.281123 systemd[1]: run-netns-cni\x2dc1e5e69f\x2dc979\x2dc03d\x2df981\x2df269287fd0f7.mount: Deactivated successfully. May 8 05:45:10.281595 systemd[1]: run-netns-cni\x2dd25a9075\x2d08bb\x2d90e8\x2de119\x2debeba5c445fa.mount: Deactivated successfully. May 8 05:45:10.320744 containerd[1459]: time="2025-05-08T05:45:10.320351516Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 05:45:10.320744 containerd[1459]: time="2025-05-08T05:45:10.320398248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 05:45:10.320744 containerd[1459]: time="2025-05-08T05:45:10.320411173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:45:10.320744 containerd[1459]: time="2025-05-08T05:45:10.320505971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:45:10.358292 systemd-networkd[1373]: cali38de17fc75c: Link UP May 8 05:45:10.358935 systemd[1]: Started cri-containerd-da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2.scope - libcontainer container da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2. May 8 05:45:10.359121 systemd-networkd[1373]: cali38de17fc75c: Gained carrier May 8 05:45:10.381679 containerd[1459]: 2025-05-08 05:45:10.143 [INFO][4472] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--r4ddh-eth0 calico-apiserver-58bf46f646- calico-apiserver 0c6acdca-0e5b-443d-8401-07d05363600e 825 0 2025-05-08 05:44:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58bf46f646 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-n-fbb7d486d2.novalocal calico-apiserver-58bf46f646-r4ddh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali38de17fc75c [] []}} ContainerID="cd0dabc74ea840ec81b161b9c048ae0d40efd1475aad1b5493a248397109e37c" Namespace="calico-apiserver" Pod="calico-apiserver-58bf46f646-r4ddh" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--r4ddh-" May 8 05:45:10.381679 containerd[1459]: 2025-05-08 05:45:10.143 [INFO][4472] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cd0dabc74ea840ec81b161b9c048ae0d40efd1475aad1b5493a248397109e37c" Namespace="calico-apiserver" Pod="calico-apiserver-58bf46f646-r4ddh" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--r4ddh-eth0" May 8 05:45:10.381679 containerd[1459]: 2025-05-08 05:45:10.189 [INFO][4492] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd0dabc74ea840ec81b161b9c048ae0d40efd1475aad1b5493a248397109e37c" HandleID="k8s-pod-network.cd0dabc74ea840ec81b161b9c048ae0d40efd1475aad1b5493a248397109e37c" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--r4ddh-eth0" May 8 05:45:10.381679 containerd[1459]: 2025-05-08 05:45:10.206 [INFO][4492] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cd0dabc74ea840ec81b161b9c048ae0d40efd1475aad1b5493a248397109e37c" HandleID="k8s-pod-network.cd0dabc74ea840ec81b161b9c048ae0d40efd1475aad1b5493a248397109e37c" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--r4ddh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002927e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-n-fbb7d486d2.novalocal", "pod":"calico-apiserver-58bf46f646-r4ddh", "timestamp":"2025-05-08 05:45:10.189414054 +0000 UTC"}, Hostname:"ci-4081-3-3-n-fbb7d486d2.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 05:45:10.381679 containerd[1459]: 2025-05-08 05:45:10.206 [INFO][4492] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. May 8 05:45:10.381679 containerd[1459]: 2025-05-08 05:45:10.235 [INFO][4492] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:45:10.381679 containerd[1459]: 2025-05-08 05:45:10.235 [INFO][4492] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-fbb7d486d2.novalocal' May 8 05:45:10.381679 containerd[1459]: 2025-05-08 05:45:10.287 [INFO][4492] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cd0dabc74ea840ec81b161b9c048ae0d40efd1475aad1b5493a248397109e37c" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:10.381679 containerd[1459]: 2025-05-08 05:45:10.307 [INFO][4492] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:10.381679 containerd[1459]: 2025-05-08 05:45:10.314 [INFO][4492] ipam/ipam.go 489: Trying affinity for 192.168.47.128/26 host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:10.381679 containerd[1459]: 2025-05-08 05:45:10.316 [INFO][4492] ipam/ipam.go 155: Attempting to load block cidr=192.168.47.128/26 host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:10.381679 containerd[1459]: 2025-05-08 05:45:10.318 [INFO][4492] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:10.381679 containerd[1459]: 2025-05-08 05:45:10.318 [INFO][4492] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.cd0dabc74ea840ec81b161b9c048ae0d40efd1475aad1b5493a248397109e37c" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:10.381679 containerd[1459]: 2025-05-08 05:45:10.321 [INFO][4492] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cd0dabc74ea840ec81b161b9c048ae0d40efd1475aad1b5493a248397109e37c May 8 05:45:10.381679 containerd[1459]: 2025-05-08 05:45:10.333 [INFO][4492] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.cd0dabc74ea840ec81b161b9c048ae0d40efd1475aad1b5493a248397109e37c" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:10.381679 containerd[1459]: 2025-05-08 05:45:10.346 [INFO][4492] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.47.133/26] block=192.168.47.128/26 handle="k8s-pod-network.cd0dabc74ea840ec81b161b9c048ae0d40efd1475aad1b5493a248397109e37c" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:10.381679 containerd[1459]: 2025-05-08 05:45:10.346 [INFO][4492] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.133/26] handle="k8s-pod-network.cd0dabc74ea840ec81b161b9c048ae0d40efd1475aad1b5493a248397109e37c" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:10.381679 containerd[1459]: 2025-05-08 05:45:10.346 [INFO][4492] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
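The IPAM steps above repeat for every pod on this node: take the host-wide lock, confirm this host's affinity for the block 192.168.47.128/26, then claim the next free address (.132 for one apiserver pod, .133 for the other). A toy, stdlib-only version of that next-free-address step; the set of already-used addresses is an assumption reconstructed from the allocations visible in this log:

    package main

    import (
        "fmt"
        "net"
    )

    // nextFree returns the first address in block not present in allocated,
    // a simplified stand-in for Calico's walk through an affine block.
    func nextFree(block *net.IPNet, allocated map[string]bool) (net.IP, bool) {
        ip := block.IP.Mask(block.Mask) // start at the network address
        for block.Contains(ip) {
            if !allocated[ip.String()] {
                out := make(net.IP, len(ip))
                copy(out, ip)
                return out, true
            }
            for i := len(ip) - 1; i >= 0; i-- { // increment the address
                ip[i]++
                if ip[i] != 0 {
                    break
                }
            }
        }
        return nil, false
    }

    func main() {
        _, block, _ := net.ParseCIDR("192.168.47.128/26")
        used := map[string]bool{
            "192.168.47.128": true, // network address, never handed out
            "192.168.47.129": true, // earlier pods on this node (assumed)
            "192.168.47.130": true,
            "192.168.47.131": true,
            "192.168.47.132": true, // just claimed for calico-apiserver-67fd4c9f8d-2swkl
        }
        ip, _ := nextFree(block, used)
        fmt.Println(ip) // 192.168.47.133, matching the claim for 58bf46f646-r4ddh
    }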
May 8 05:45:10.381679 containerd[1459]: 2025-05-08 05:45:10.346 [INFO][4492] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.133/26] IPv6=[] ContainerID="cd0dabc74ea840ec81b161b9c048ae0d40efd1475aad1b5493a248397109e37c" HandleID="k8s-pod-network.cd0dabc74ea840ec81b161b9c048ae0d40efd1475aad1b5493a248397109e37c" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--r4ddh-eth0" May 8 05:45:10.382292 containerd[1459]: 2025-05-08 05:45:10.351 [INFO][4472] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cd0dabc74ea840ec81b161b9c048ae0d40efd1475aad1b5493a248397109e37c" Namespace="calico-apiserver" Pod="calico-apiserver-58bf46f646-r4ddh" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--r4ddh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--r4ddh-eth0", GenerateName:"calico-apiserver-58bf46f646-", Namespace:"calico-apiserver", SelfLink:"", UID:"0c6acdca-0e5b-443d-8401-07d05363600e", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 44, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58bf46f646", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"", Pod:"calico-apiserver-58bf46f646-r4ddh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali38de17fc75c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:10.382292 containerd[1459]: 2025-05-08 05:45:10.351 [INFO][4472] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.47.133/32] ContainerID="cd0dabc74ea840ec81b161b9c048ae0d40efd1475aad1b5493a248397109e37c" Namespace="calico-apiserver" Pod="calico-apiserver-58bf46f646-r4ddh" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--r4ddh-eth0" May 8 05:45:10.382292 containerd[1459]: 2025-05-08 05:45:10.351 [INFO][4472] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38de17fc75c ContainerID="cd0dabc74ea840ec81b161b9c048ae0d40efd1475aad1b5493a248397109e37c" Namespace="calico-apiserver" Pod="calico-apiserver-58bf46f646-r4ddh" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--r4ddh-eth0" May 8 05:45:10.382292 containerd[1459]: 2025-05-08 05:45:10.359 [INFO][4472] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd0dabc74ea840ec81b161b9c048ae0d40efd1475aad1b5493a248397109e37c" Namespace="calico-apiserver" Pod="calico-apiserver-58bf46f646-r4ddh" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--r4ddh-eth0" May 8 05:45:10.382292 
containerd[1459]: 2025-05-08 05:45:10.362 [INFO][4472] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cd0dabc74ea840ec81b161b9c048ae0d40efd1475aad1b5493a248397109e37c" Namespace="calico-apiserver" Pod="calico-apiserver-58bf46f646-r4ddh" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--r4ddh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--r4ddh-eth0", GenerateName:"calico-apiserver-58bf46f646-", Namespace:"calico-apiserver", SelfLink:"", UID:"0c6acdca-0e5b-443d-8401-07d05363600e", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 44, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58bf46f646", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"cd0dabc74ea840ec81b161b9c048ae0d40efd1475aad1b5493a248397109e37c", Pod:"calico-apiserver-58bf46f646-r4ddh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali38de17fc75c", MAC:"de:b1:f7:68:f1:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:10.382292 containerd[1459]: 2025-05-08 05:45:10.378 [INFO][4472] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cd0dabc74ea840ec81b161b9c048ae0d40efd1475aad1b5493a248397109e37c" Namespace="calico-apiserver" Pod="calico-apiserver-58bf46f646-r4ddh" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--r4ddh-eth0" May 8 05:45:10.410543 containerd[1459]: time="2025-05-08T05:45:10.410480033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 05:45:10.410699 containerd[1459]: time="2025-05-08T05:45:10.410675581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 05:45:10.411146 containerd[1459]: time="2025-05-08T05:45:10.411119248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:45:10.411501 containerd[1459]: time="2025-05-08T05:45:10.411475292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:45:10.443202 containerd[1459]: time="2025-05-08T05:45:10.443172018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fd4c9f8d-2swkl,Uid:2eb94ac1-e498-4e48-a950-45810cc88780,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2\"" May 8 05:45:10.445611 systemd[1]: Started cri-containerd-cd0dabc74ea840ec81b161b9c048ae0d40efd1475aad1b5493a248397109e37c.scope - libcontainer container cd0dabc74ea840ec81b161b9c048ae0d40efd1475aad1b5493a248397109e37c. May 8 05:45:10.486213 containerd[1459]: time="2025-05-08T05:45:10.486064294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58bf46f646-r4ddh,Uid:0c6acdca-0e5b-443d-8401-07d05363600e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"cd0dabc74ea840ec81b161b9c048ae0d40efd1475aad1b5493a248397109e37c\"" May 8 05:45:10.726390 systemd-networkd[1373]: caliecfd0431b13: Gained IPv6LL May 8 05:45:10.909913 containerd[1459]: time="2025-05-08T05:45:10.909709285Z" level=info msg="StopPodSandbox for \"c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca\"" May 8 05:45:10.920303 containerd[1459]: time="2025-05-08T05:45:10.919913936Z" level=info msg="StopPodSandbox for \"f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781\"" May 8 05:45:11.029130 containerd[1459]: 2025-05-08 05:45:10.992 [INFO][4644] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" May 8 05:45:11.029130 containerd[1459]: 2025-05-08 05:45:10.992 [INFO][4644] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" iface="eth0" netns="/var/run/netns/cni-0abcd637-bfc7-586c-85f2-45d3d82a92f7" May 8 05:45:11.029130 containerd[1459]: 2025-05-08 05:45:10.993 [INFO][4644] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" iface="eth0" netns="/var/run/netns/cni-0abcd637-bfc7-586c-85f2-45d3d82a92f7" May 8 05:45:11.029130 containerd[1459]: 2025-05-08 05:45:10.993 [INFO][4644] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" iface="eth0" netns="/var/run/netns/cni-0abcd637-bfc7-586c-85f2-45d3d82a92f7" May 8 05:45:11.029130 containerd[1459]: 2025-05-08 05:45:10.993 [INFO][4644] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" May 8 05:45:11.029130 containerd[1459]: 2025-05-08 05:45:10.993 [INFO][4644] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" May 8 05:45:11.029130 containerd[1459]: 2025-05-08 05:45:11.015 [INFO][4656] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" HandleID="k8s-pod-network.c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--v9cpg-eth0" May 8 05:45:11.029130 containerd[1459]: 2025-05-08 05:45:11.015 [INFO][4656] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
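The two StopPodSandbox calls above will walk the same teardown path already seen twice: enter the netns at /var/run/netns/cni-<uuid>, try to delete the workload's eth0, and treat an absent veth as success ("Nothing to do"). A rough sketch of that step, assuming the containernetworking ns helper and the vishvananda/netlink package, and simplifying by treating any link lookup failure as "already gone":

    package main

    import (
        "fmt"
        "log"

        "github.com/containernetworking/plugins/pkg/ns"
        "github.com/vishvananda/netlink"
    )

    func main() {
        // Netns path for the coredns sandbox being stopped above.
        netns, err := ns.GetNS("/var/run/netns/cni-0abcd637-bfc7-586c-85f2-45d3d82a92f7")
        if err != nil {
            log.Fatal(err)
        }
        defer netns.Close()

        err = netns.Do(func(_ ns.NetNS) error {
            link, err := netlink.LinkByName("eth0")
            if err != nil {
                // Mirrors "Workload's veth was already gone. Nothing to do."
                fmt.Println("eth0 already gone, nothing to do")
                return nil
            }
            return netlink.LinkDel(link)
        })
        if err != nil {
            log.Fatal(err)
        }
    }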
May 8 05:45:11.029130 containerd[1459]: 2025-05-08 05:45:11.015 [INFO][4656] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:45:11.029130 containerd[1459]: 2025-05-08 05:45:11.024 [WARNING][4656] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" HandleID="k8s-pod-network.c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--v9cpg-eth0" May 8 05:45:11.029130 containerd[1459]: 2025-05-08 05:45:11.024 [INFO][4656] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" HandleID="k8s-pod-network.c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--v9cpg-eth0" May 8 05:45:11.029130 containerd[1459]: 2025-05-08 05:45:11.026 [INFO][4656] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:45:11.029130 containerd[1459]: 2025-05-08 05:45:11.027 [INFO][4644] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" May 8 05:45:11.030793 containerd[1459]: time="2025-05-08T05:45:11.029291181Z" level=info msg="TearDown network for sandbox \"c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca\" successfully" May 8 05:45:11.030793 containerd[1459]: time="2025-05-08T05:45:11.029332753Z" level=info msg="StopPodSandbox for \"c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca\" returns successfully" May 8 05:45:11.030793 containerd[1459]: time="2025-05-08T05:45:11.030144177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v9cpg,Uid:8e593293-d978-473a-ae19-5154bba363a6,Namespace:kube-system,Attempt:1,}" May 8 05:45:11.040658 containerd[1459]: 2025-05-08 05:45:10.987 [INFO][4636] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" May 8 05:45:11.040658 containerd[1459]: 2025-05-08 05:45:10.987 [INFO][4636] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" iface="eth0" netns="/var/run/netns/cni-84e36a05-cdf2-da66-3cd7-0a7e6acda99a" May 8 05:45:11.040658 containerd[1459]: 2025-05-08 05:45:10.988 [INFO][4636] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" iface="eth0" netns="/var/run/netns/cni-84e36a05-cdf2-da66-3cd7-0a7e6acda99a" May 8 05:45:11.040658 containerd[1459]: 2025-05-08 05:45:10.988 [INFO][4636] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" iface="eth0" netns="/var/run/netns/cni-84e36a05-cdf2-da66-3cd7-0a7e6acda99a" May 8 05:45:11.040658 containerd[1459]: 2025-05-08 05:45:10.988 [INFO][4636] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" May 8 05:45:11.040658 containerd[1459]: 2025-05-08 05:45:10.988 [INFO][4636] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" May 8 05:45:11.040658 containerd[1459]: 2025-05-08 05:45:11.020 [INFO][4654] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" HandleID="k8s-pod-network.f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0" May 8 05:45:11.040658 containerd[1459]: 2025-05-08 05:45:11.020 [INFO][4654] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:11.040658 containerd[1459]: 2025-05-08 05:45:11.026 [INFO][4654] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:45:11.040658 containerd[1459]: 2025-05-08 05:45:11.035 [WARNING][4654] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" HandleID="k8s-pod-network.f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0" May 8 05:45:11.040658 containerd[1459]: 2025-05-08 05:45:11.036 [INFO][4654] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" HandleID="k8s-pod-network.f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0" May 8 05:45:11.040658 containerd[1459]: 2025-05-08 05:45:11.038 [INFO][4654] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:45:11.040658 containerd[1459]: 2025-05-08 05:45:11.039 [INFO][4636] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" May 8 05:45:11.041323 containerd[1459]: time="2025-05-08T05:45:11.041187006Z" level=info msg="TearDown network for sandbox \"f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781\" successfully" May 8 05:45:11.041323 containerd[1459]: time="2025-05-08T05:45:11.041215352Z" level=info msg="StopPodSandbox for \"f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781\" returns successfully" May 8 05:45:11.042151 containerd[1459]: time="2025-05-08T05:45:11.041968700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-787966c4fb-2244q,Uid:d3c40b71-a013-43d8-b8d8-e3eec48008e2,Namespace:calico-system,Attempt:1,}" May 8 05:45:11.209060 systemd-networkd[1373]: cali6b919ecd690: Link UP May 8 05:45:11.211097 systemd-networkd[1373]: cali6b919ecd690: Gained carrier May 8 05:45:11.238130 containerd[1459]: 2025-05-08 05:45:11.107 [INFO][4667] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--v9cpg-eth0 coredns-6f6b679f8f- kube-system 8e593293-d978-473a-ae19-5154bba363a6 846 0 2025-05-08 05:44:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-n-fbb7d486d2.novalocal coredns-6f6b679f8f-v9cpg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6b919ecd690 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="613c3f770ebe26b7f490cbdebb2968eb2ebf3fa32fb57c641b150102d616b7af" Namespace="kube-system" Pod="coredns-6f6b679f8f-v9cpg" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--v9cpg-" May 8 05:45:11.238130 containerd[1459]: 2025-05-08 05:45:11.107 [INFO][4667] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="613c3f770ebe26b7f490cbdebb2968eb2ebf3fa32fb57c641b150102d616b7af" Namespace="kube-system" Pod="coredns-6f6b679f8f-v9cpg" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--v9cpg-eth0" May 8 05:45:11.238130 containerd[1459]: 2025-05-08 05:45:11.144 [INFO][4692] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="613c3f770ebe26b7f490cbdebb2968eb2ebf3fa32fb57c641b150102d616b7af" HandleID="k8s-pod-network.613c3f770ebe26b7f490cbdebb2968eb2ebf3fa32fb57c641b150102d616b7af" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--v9cpg-eth0" May 8 05:45:11.238130 containerd[1459]: 2025-05-08 05:45:11.159 [INFO][4692] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="613c3f770ebe26b7f490cbdebb2968eb2ebf3fa32fb57c641b150102d616b7af" HandleID="k8s-pod-network.613c3f770ebe26b7f490cbdebb2968eb2ebf3fa32fb57c641b150102d616b7af" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--v9cpg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002932e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-n-fbb7d486d2.novalocal", "pod":"coredns-6f6b679f8f-v9cpg", "timestamp":"2025-05-08 05:45:11.143988102 +0000 UTC"}, Hostname:"ci-4081-3-3-n-fbb7d486d2.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 05:45:11.238130 
containerd[1459]: 2025-05-08 05:45:11.159 [INFO][4692] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:11.238130 containerd[1459]: 2025-05-08 05:45:11.159 [INFO][4692] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:45:11.238130 containerd[1459]: 2025-05-08 05:45:11.159 [INFO][4692] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-fbb7d486d2.novalocal' May 8 05:45:11.238130 containerd[1459]: 2025-05-08 05:45:11.163 [INFO][4692] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.613c3f770ebe26b7f490cbdebb2968eb2ebf3fa32fb57c641b150102d616b7af" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:11.238130 containerd[1459]: 2025-05-08 05:45:11.167 [INFO][4692] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:11.238130 containerd[1459]: 2025-05-08 05:45:11.175 [INFO][4692] ipam/ipam.go 489: Trying affinity for 192.168.47.128/26 host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:11.238130 containerd[1459]: 2025-05-08 05:45:11.178 [INFO][4692] ipam/ipam.go 155: Attempting to load block cidr=192.168.47.128/26 host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:11.238130 containerd[1459]: 2025-05-08 05:45:11.181 [INFO][4692] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:11.238130 containerd[1459]: 2025-05-08 05:45:11.181 [INFO][4692] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.613c3f770ebe26b7f490cbdebb2968eb2ebf3fa32fb57c641b150102d616b7af" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:11.238130 containerd[1459]: 2025-05-08 05:45:11.183 [INFO][4692] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.613c3f770ebe26b7f490cbdebb2968eb2ebf3fa32fb57c641b150102d616b7af May 8 05:45:11.238130 containerd[1459]: 2025-05-08 05:45:11.188 [INFO][4692] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.613c3f770ebe26b7f490cbdebb2968eb2ebf3fa32fb57c641b150102d616b7af" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:11.238130 containerd[1459]: 2025-05-08 05:45:11.199 [INFO][4692] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.47.134/26] block=192.168.47.128/26 handle="k8s-pod-network.613c3f770ebe26b7f490cbdebb2968eb2ebf3fa32fb57c641b150102d616b7af" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:11.238130 containerd[1459]: 2025-05-08 05:45:11.199 [INFO][4692] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.134/26] handle="k8s-pod-network.613c3f770ebe26b7f490cbdebb2968eb2ebf3fa32fb57c641b150102d616b7af" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:11.238130 containerd[1459]: 2025-05-08 05:45:11.199 [INFO][4692] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
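The WorkloadEndpoint written below for coredns-6f6b679f8f-v9cpg prints its ports in hex (Port:0x35, Port:0x23c1) and carries a freshly generated interface MAC. A quick check confirms the ports are the usual coredns ones, and that the MACs in this log (72:88:05:11:3e:b7, de:b1:f7:68:f1:38, 06:1e:6a:75:e8:0e) all have the locally-administered bit set and the multicast bit clear, the shape of a randomly generated unicast address:

    package main

    import (
        "crypto/rand"
        "fmt"
    )

    func main() {
        // Ports from the endpoint below, stored in hex.
        fmt.Println(0x35, 0x23c1) // 53 9153: dns/dns-tcp and coredns metrics

        // Random locally-administered unicast MAC, the same shape as the
        // cali* endpoint MACs in this log.
        mac := make([]byte, 6)
        if _, err := rand.Read(mac); err != nil {
            panic(err)
        }
        mac[0] = (mac[0] | 0x02) & 0xfe // set local bit, clear multicast bit
        fmt.Printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
            mac[0], mac[1], mac[2], mac[3], mac[4], mac[5])
    }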
May 8 05:45:11.238130 containerd[1459]: 2025-05-08 05:45:11.199 [INFO][4692] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.134/26] IPv6=[] ContainerID="613c3f770ebe26b7f490cbdebb2968eb2ebf3fa32fb57c641b150102d616b7af" HandleID="k8s-pod-network.613c3f770ebe26b7f490cbdebb2968eb2ebf3fa32fb57c641b150102d616b7af" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--v9cpg-eth0" May 8 05:45:11.239184 containerd[1459]: 2025-05-08 05:45:11.202 [INFO][4667] cni-plugin/k8s.go 386: Populated endpoint ContainerID="613c3f770ebe26b7f490cbdebb2968eb2ebf3fa32fb57c641b150102d616b7af" Namespace="kube-system" Pod="coredns-6f6b679f8f-v9cpg" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--v9cpg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--v9cpg-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"8e593293-d978-473a-ae19-5154bba363a6", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 44, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"", Pod:"coredns-6f6b679f8f-v9cpg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b919ecd690", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:11.239184 containerd[1459]: 2025-05-08 05:45:11.202 [INFO][4667] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.47.134/32] ContainerID="613c3f770ebe26b7f490cbdebb2968eb2ebf3fa32fb57c641b150102d616b7af" Namespace="kube-system" Pod="coredns-6f6b679f8f-v9cpg" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--v9cpg-eth0" May 8 05:45:11.239184 containerd[1459]: 2025-05-08 05:45:11.202 [INFO][4667] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b919ecd690 ContainerID="613c3f770ebe26b7f490cbdebb2968eb2ebf3fa32fb57c641b150102d616b7af" Namespace="kube-system" Pod="coredns-6f6b679f8f-v9cpg" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--v9cpg-eth0" May 8 05:45:11.239184 containerd[1459]: 2025-05-08 05:45:11.211 [INFO][4667] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="613c3f770ebe26b7f490cbdebb2968eb2ebf3fa32fb57c641b150102d616b7af" 
Namespace="kube-system" Pod="coredns-6f6b679f8f-v9cpg" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--v9cpg-eth0" May 8 05:45:11.239184 containerd[1459]: 2025-05-08 05:45:11.213 [INFO][4667] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="613c3f770ebe26b7f490cbdebb2968eb2ebf3fa32fb57c641b150102d616b7af" Namespace="kube-system" Pod="coredns-6f6b679f8f-v9cpg" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--v9cpg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--v9cpg-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"8e593293-d978-473a-ae19-5154bba363a6", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 44, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"613c3f770ebe26b7f490cbdebb2968eb2ebf3fa32fb57c641b150102d616b7af", Pod:"coredns-6f6b679f8f-v9cpg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b919ecd690", MAC:"06:1e:6a:75:e8:0e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:11.239184 containerd[1459]: 2025-05-08 05:45:11.233 [INFO][4667] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="613c3f770ebe26b7f490cbdebb2968eb2ebf3fa32fb57c641b150102d616b7af" Namespace="kube-system" Pod="coredns-6f6b679f8f-v9cpg" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--v9cpg-eth0" May 8 05:45:11.278726 systemd[1]: run-containerd-runc-k8s.io-cd0dabc74ea840ec81b161b9c048ae0d40efd1475aad1b5493a248397109e37c-runc.hpAs1A.mount: Deactivated successfully. May 8 05:45:11.278836 systemd[1]: run-netns-cni\x2d84e36a05\x2dcdf2\x2dda66\x2d3cd7\x2d0a7e6acda99a.mount: Deactivated successfully. May 8 05:45:11.278902 systemd[1]: run-netns-cni\x2d0abcd637\x2dbfc7\x2d586c\x2d85f2\x2d45d3d82a92f7.mount: Deactivated successfully. May 8 05:45:11.306769 containerd[1459]: time="2025-05-08T05:45:11.296100539Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 05:45:11.306769 containerd[1459]: time="2025-05-08T05:45:11.296200296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 05:45:11.306769 containerd[1459]: time="2025-05-08T05:45:11.296213632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:45:11.306769 containerd[1459]: time="2025-05-08T05:45:11.296313109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:45:11.347071 systemd[1]: Started cri-containerd-613c3f770ebe26b7f490cbdebb2968eb2ebf3fa32fb57c641b150102d616b7af.scope - libcontainer container 613c3f770ebe26b7f490cbdebb2968eb2ebf3fa32fb57c641b150102d616b7af. May 8 05:45:11.429386 systemd-networkd[1373]: cali4ae2dbafac9: Link UP May 8 05:45:11.431467 systemd-networkd[1373]: cali4ae2dbafac9: Gained carrier May 8 05:45:11.470789 containerd[1459]: time="2025-05-08T05:45:11.470676960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v9cpg,Uid:8e593293-d978-473a-ae19-5154bba363a6,Namespace:kube-system,Attempt:1,} returns sandbox id \"613c3f770ebe26b7f490cbdebb2968eb2ebf3fa32fb57c641b150102d616b7af\"" May 8 05:45:11.473866 containerd[1459]: 2025-05-08 05:45:11.120 [INFO][4677] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0 calico-kube-controllers-787966c4fb- calico-system d3c40b71-a013-43d8-b8d8-e3eec48008e2 845 0 2025-05-08 05:44:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:787966c4fb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-3-n-fbb7d486d2.novalocal calico-kube-controllers-787966c4fb-2244q eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4ae2dbafac9 [] []}} ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" Namespace="calico-system" Pod="calico-kube-controllers-787966c4fb-2244q" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-" May 8 05:45:11.473866 containerd[1459]: 2025-05-08 05:45:11.121 [INFO][4677] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" Namespace="calico-system" Pod="calico-kube-controllers-787966c4fb-2244q" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0" May 8 05:45:11.473866 containerd[1459]: 2025-05-08 05:45:11.164 [INFO][4698] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" HandleID="k8s-pod-network.8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0" May 8 05:45:11.473866 containerd[1459]: 2025-05-08 05:45:11.175 [INFO][4698] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" 
HandleID="k8s-pod-network.8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ba5b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-fbb7d486d2.novalocal", "pod":"calico-kube-controllers-787966c4fb-2244q", "timestamp":"2025-05-08 05:45:11.164499179 +0000 UTC"}, Hostname:"ci-4081-3-3-n-fbb7d486d2.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 05:45:11.473866 containerd[1459]: 2025-05-08 05:45:11.176 [INFO][4698] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:11.473866 containerd[1459]: 2025-05-08 05:45:11.199 [INFO][4698] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:45:11.473866 containerd[1459]: 2025-05-08 05:45:11.201 [INFO][4698] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-fbb7d486d2.novalocal' May 8 05:45:11.473866 containerd[1459]: 2025-05-08 05:45:11.265 [INFO][4698] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:11.473866 containerd[1459]: 2025-05-08 05:45:11.287 [INFO][4698] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:11.473866 containerd[1459]: 2025-05-08 05:45:11.310 [INFO][4698] ipam/ipam.go 489: Trying affinity for 192.168.47.128/26 host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:11.473866 containerd[1459]: 2025-05-08 05:45:11.320 [INFO][4698] ipam/ipam.go 155: Attempting to load block cidr=192.168.47.128/26 host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:11.473866 containerd[1459]: 2025-05-08 05:45:11.327 [INFO][4698] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:11.473866 containerd[1459]: 2025-05-08 05:45:11.327 [INFO][4698] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:11.473866 containerd[1459]: 2025-05-08 05:45:11.339 [INFO][4698] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718 May 8 05:45:11.473866 containerd[1459]: 2025-05-08 05:45:11.353 [INFO][4698] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:11.473866 containerd[1459]: 2025-05-08 05:45:11.409 [INFO][4698] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.47.135/26] block=192.168.47.128/26 handle="k8s-pod-network.8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:11.473866 containerd[1459]: 2025-05-08 05:45:11.409 [INFO][4698] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.135/26] handle="k8s-pod-network.8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 
05:45:11.473866 containerd[1459]: 2025-05-08 05:45:11.409 [INFO][4698] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:45:11.473866 containerd[1459]: 2025-05-08 05:45:11.409 [INFO][4698] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.135/26] IPv6=[] ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" HandleID="k8s-pod-network.8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0" May 8 05:45:11.474433 containerd[1459]: 2025-05-08 05:45:11.412 [INFO][4677] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" Namespace="calico-system" Pod="calico-kube-controllers-787966c4fb-2244q" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0", GenerateName:"calico-kube-controllers-787966c4fb-", Namespace:"calico-system", SelfLink:"", UID:"d3c40b71-a013-43d8-b8d8-e3eec48008e2", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 44, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"787966c4fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"", Pod:"calico-kube-controllers-787966c4fb-2244q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4ae2dbafac9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:11.474433 containerd[1459]: 2025-05-08 05:45:11.412 [INFO][4677] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.47.135/32] ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" Namespace="calico-system" Pod="calico-kube-controllers-787966c4fb-2244q" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0" May 8 05:45:11.474433 containerd[1459]: 2025-05-08 05:45:11.413 [INFO][4677] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4ae2dbafac9 ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" Namespace="calico-system" Pod="calico-kube-controllers-787966c4fb-2244q" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0" May 8 05:45:11.474433 containerd[1459]: 2025-05-08 05:45:11.435 [INFO][4677] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" 
Namespace="calico-system" Pod="calico-kube-controllers-787966c4fb-2244q" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0" May 8 05:45:11.474433 containerd[1459]: 2025-05-08 05:45:11.438 [INFO][4677] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" Namespace="calico-system" Pod="calico-kube-controllers-787966c4fb-2244q" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0", GenerateName:"calico-kube-controllers-787966c4fb-", Namespace:"calico-system", SelfLink:"", UID:"d3c40b71-a013-43d8-b8d8-e3eec48008e2", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 44, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"787966c4fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718", Pod:"calico-kube-controllers-787966c4fb-2244q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4ae2dbafac9", MAC:"8e:bd:88:a8:97:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:11.474433 containerd[1459]: 2025-05-08 05:45:11.467 [INFO][4677] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" Namespace="calico-system" Pod="calico-kube-controllers-787966c4fb-2244q" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0" May 8 05:45:11.489513 containerd[1459]: time="2025-05-08T05:45:11.489412348Z" level=info msg="CreateContainer within sandbox \"613c3f770ebe26b7f490cbdebb2968eb2ebf3fa32fb57c641b150102d616b7af\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 05:45:11.493574 systemd-networkd[1373]: calif143a07a0c2: Gained IPv6LL May 8 05:45:11.552814 containerd[1459]: time="2025-05-08T05:45:11.552743730Z" level=info msg="CreateContainer within sandbox \"613c3f770ebe26b7f490cbdebb2968eb2ebf3fa32fb57c641b150102d616b7af\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8589f603742024be193a0b8fe3974d731f5f758a778171347c96a9ec6d007c7e\"" May 8 05:45:11.555831 containerd[1459]: time="2025-05-08T05:45:11.555715701Z" level=info msg="StartContainer for \"8589f603742024be193a0b8fe3974d731f5f758a778171347c96a9ec6d007c7e\"" May 8 05:45:11.567216 containerd[1459]: time="2025-05-08T05:45:11.558304476Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 05:45:11.567216 containerd[1459]: time="2025-05-08T05:45:11.558365918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 05:45:11.567216 containerd[1459]: time="2025-05-08T05:45:11.558385778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:45:11.567216 containerd[1459]: time="2025-05-08T05:45:11.559183453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:45:11.591613 systemd[1]: Started cri-containerd-8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718.scope - libcontainer container 8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718. May 8 05:45:11.641577 systemd[1]: Started cri-containerd-8589f603742024be193a0b8fe3974d731f5f758a778171347c96a9ec6d007c7e.scope - libcontainer container 8589f603742024be193a0b8fe3974d731f5f758a778171347c96a9ec6d007c7e. May 8 05:45:11.690948 containerd[1459]: time="2025-05-08T05:45:11.690879561Z" level=info msg="StartContainer for \"8589f603742024be193a0b8fe3974d731f5f758a778171347c96a9ec6d007c7e\" returns successfully" May 8 05:45:11.706310 containerd[1459]: time="2025-05-08T05:45:11.705585046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-787966c4fb-2244q,Uid:d3c40b71-a013-43d8-b8d8-e3eec48008e2,Namespace:calico-system,Attempt:1,} returns sandbox id \"8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718\"" May 8 05:45:11.813600 systemd-networkd[1373]: cali128c03178f2: Gained IPv6LL May 8 05:45:12.133639 systemd-networkd[1373]: cali38de17fc75c: Gained IPv6LL May 8 05:45:12.238315 kubelet[2596]: I0508 05:45:12.238246 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-v9cpg" podStartSLOduration=38.238212716 podStartE2EDuration="38.238212716s" podCreationTimestamp="2025-05-08 05:44:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 05:45:12.238053703 +0000 UTC m=+44.514654916" watchObservedRunningTime="2025-05-08 05:45:12.238212716 +0000 UTC m=+44.514813929" May 8 05:45:12.277849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount986601418.mount: Deactivated successfully. 
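The entries at 05:45:11 above trace Calico's block-affinity IPAM walk: acquire the host-wide IPAM lock, confirm the node's affinity to the /26 block 192.168.47.128/26, load the block, claim 192.168.47.135, and release the lock. A minimal Go sketch of that control flow, using invented types rather than Calico's real IPAM client API:

    package main

    import (
    	"fmt"
    	"sync"
    )

    // Invented, simplified model of the steps logged by ipam/ipam.go;
    // it illustrates the control flow, not Calico's actual data model.
    type block struct {
    	cidr     string
    	affined  string          // host this /26 block is affined to
    	assigned map[string]bool // addresses already handed out
    }

    var hostLock sync.Mutex // stands in for the "host-wide IPAM lock"

    func autoAssign(host string, blk *block, candidates []string) (string, error) {
    	hostLock.Lock()         // "Acquired host-wide IPAM lock."
    	defer hostLock.Unlock() // "Released host-wide IPAM lock."

    	if blk.affined != host { // "Trying affinity for 192.168.47.128/26"
    		return "", fmt.Errorf("block %s not affined to %s", blk.cidr, host)
    	}
    	for _, ip := range candidates { // "Attempting to assign 1 addresses from block"
    		if !blk.assigned[ip] {
    			blk.assigned[ip] = true // "Writing block in order to claim IPs"
    			return ip, nil          // "Successfully claimed IPs: [...]"
    		}
    	}
    	return "", fmt.Errorf("block %s exhausted", blk.cidr)
    }

    func main() {
    	blk := &block{
    		cidr:     "192.168.47.128/26",
    		affined:  "ci-4081-3-3-n-fbb7d486d2.novalocal",
    		assigned: map[string]bool{"192.168.47.134": true}, // earlier pods on this node
    	}
    	ip, err := autoAssign("ci-4081-3-3-n-fbb7d486d2.novalocal", blk,
    		[]string{"192.168.47.134", "192.168.47.135"})
    	fmt.Println(ip, err) // 192.168.47.135 <nil>, matching the claim logged above
    }

The same walk repeats at 05:45:20 for the new apiserver pod, which receives the next free address in the block, 192.168.47.136.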
May 8 05:45:12.901728 systemd-networkd[1373]: cali6b919ecd690: Gained IPv6LL May 8 05:45:13.157615 systemd-networkd[1373]: cali4ae2dbafac9: Gained IPv6LL May 8 05:45:14.088763 containerd[1459]: time="2025-05-08T05:45:14.088568631Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:45:14.091521 containerd[1459]: time="2025-05-08T05:45:14.091355396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 8 05:45:14.093099 containerd[1459]: time="2025-05-08T05:45:14.092977749Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:45:14.100871 containerd[1459]: time="2025-05-08T05:45:14.100698303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:45:14.105534 containerd[1459]: time="2025-05-08T05:45:14.104425788Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 4.694284013s" May 8 05:45:14.105534 containerd[1459]: time="2025-05-08T05:45:14.104574612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 8 05:45:14.122413 containerd[1459]: time="2025-05-08T05:45:14.122348771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 8 05:45:14.125037 containerd[1459]: time="2025-05-08T05:45:14.124750177Z" level=info msg="CreateContainer within sandbox \"e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 05:45:14.161747 containerd[1459]: time="2025-05-08T05:45:14.161656268Z" level=info msg="CreateContainer within sandbox \"e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"81a60041ef4399c7356efde0ae96be417dfe941aebf0d14af32dc68360d8565f\"" May 8 05:45:14.165634 containerd[1459]: time="2025-05-08T05:45:14.165026370Z" level=info msg="StartContainer for \"81a60041ef4399c7356efde0ae96be417dfe941aebf0d14af32dc68360d8565f\"" May 8 05:45:14.168780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1278445300.mount: Deactivated successfully. May 8 05:45:14.205571 systemd[1]: Started cri-containerd-81a60041ef4399c7356efde0ae96be417dfe941aebf0d14af32dc68360d8565f.scope - libcontainer container 81a60041ef4399c7356efde0ae96be417dfe941aebf0d14af32dc68360d8565f. 
May 8 05:45:14.251789 containerd[1459]: time="2025-05-08T05:45:14.251735024Z" level=info msg="StartContainer for \"81a60041ef4399c7356efde0ae96be417dfe941aebf0d14af32dc68360d8565f\" returns successfully" May 8 05:45:16.240322 kubelet[2596]: I0508 05:45:16.240252 2596 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 05:45:16.642405 containerd[1459]: time="2025-05-08T05:45:16.642358283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:45:16.644350 containerd[1459]: time="2025-05-08T05:45:16.644310827Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 8 05:45:16.645589 containerd[1459]: time="2025-05-08T05:45:16.645566592Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:45:16.648842 containerd[1459]: time="2025-05-08T05:45:16.648514720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:45:16.649406 containerd[1459]: time="2025-05-08T05:45:16.649371442Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 2.526956801s" May 8 05:45:16.649476 containerd[1459]: time="2025-05-08T05:45:16.649407692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 8 05:45:16.651166 containerd[1459]: time="2025-05-08T05:45:16.651143380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 05:45:16.652991 containerd[1459]: time="2025-05-08T05:45:16.652837567Z" level=info msg="CreateContainer within sandbox \"9961621c6875786de0dc4cfc79c25dcdaff3abdb614d49cb10eb26aa8aa87121\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 8 05:45:16.674573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2848329690.mount: Deactivated successfully. May 8 05:45:16.683076 containerd[1459]: time="2025-05-08T05:45:16.683044950Z" level=info msg="CreateContainer within sandbox \"9961621c6875786de0dc4cfc79c25dcdaff3abdb614d49cb10eb26aa8aa87121\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0d88ab899419ee270e557226146f2e87fd53d1ecb986aa5be65857a00b47bd92\"" May 8 05:45:16.688310 containerd[1459]: time="2025-05-08T05:45:16.688215510Z" level=info msg="StartContainer for \"0d88ab899419ee270e557226146f2e87fd53d1ecb986aa5be65857a00b47bd92\"" May 8 05:45:16.722632 systemd[1]: Started cri-containerd-0d88ab899419ee270e557226146f2e87fd53d1ecb986aa5be65857a00b47bd92.scope - libcontainer container 0d88ab899419ee270e557226146f2e87fd53d1ecb986aa5be65857a00b47bd92. 
May 8 05:45:16.787903 containerd[1459]: time="2025-05-08T05:45:16.787314281Z" level=info msg="StartContainer for \"0d88ab899419ee270e557226146f2e87fd53d1ecb986aa5be65857a00b47bd92\" returns successfully" May 8 05:45:17.185866 containerd[1459]: time="2025-05-08T05:45:17.185737448Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:45:17.188214 containerd[1459]: time="2025-05-08T05:45:17.187729344Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 8 05:45:17.193261 containerd[1459]: time="2025-05-08T05:45:17.193045709Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 541.773125ms" May 8 05:45:17.193261 containerd[1459]: time="2025-05-08T05:45:17.193139713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 8 05:45:17.196288 containerd[1459]: time="2025-05-08T05:45:17.194999830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 05:45:17.199832 containerd[1459]: time="2025-05-08T05:45:17.199770385Z" level=info msg="CreateContainer within sandbox \"da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 05:45:17.233729 containerd[1459]: time="2025-05-08T05:45:17.233654638Z" level=info msg="CreateContainer within sandbox \"da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"05da272105c9a96eec99252405a87138ba376b41b261a53c7c4c1d530648642f\"" May 8 05:45:17.237188 containerd[1459]: time="2025-05-08T05:45:17.234907155Z" level=info msg="StartContainer for \"05da272105c9a96eec99252405a87138ba376b41b261a53c7c4c1d530648642f\"" May 8 05:45:17.294614 systemd[1]: Started cri-containerd-05da272105c9a96eec99252405a87138ba376b41b261a53c7c4c1d530648642f.scope - libcontainer container 05da272105c9a96eec99252405a87138ba376b41b261a53c7c4c1d530648642f. 
May 8 05:45:17.349959 containerd[1459]: time="2025-05-08T05:45:17.349910906Z" level=info msg="StartContainer for \"05da272105c9a96eec99252405a87138ba376b41b261a53c7c4c1d530648642f\" returns successfully" May 8 05:45:17.724496 containerd[1459]: time="2025-05-08T05:45:17.722203545Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:45:17.724496 containerd[1459]: time="2025-05-08T05:45:17.723139991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 8 05:45:17.729261 containerd[1459]: time="2025-05-08T05:45:17.729197329Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 534.100889ms" May 8 05:45:17.729514 containerd[1459]: time="2025-05-08T05:45:17.729415788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 8 05:45:17.733308 containerd[1459]: time="2025-05-08T05:45:17.733027480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 8 05:45:17.736709 containerd[1459]: time="2025-05-08T05:45:17.735824935Z" level=info msg="CreateContainer within sandbox \"cd0dabc74ea840ec81b161b9c048ae0d40efd1475aad1b5493a248397109e37c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 05:45:17.775499 containerd[1459]: time="2025-05-08T05:45:17.775379658Z" level=info msg="CreateContainer within sandbox \"cd0dabc74ea840ec81b161b9c048ae0d40efd1475aad1b5493a248397109e37c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cc9ec08235d773733d534a238144fb1844f8a5d0f4b4dfc42842636b5aebf89a\"" May 8 05:45:17.777630 containerd[1459]: time="2025-05-08T05:45:17.777500206Z" level=info msg="StartContainer for \"cc9ec08235d773733d534a238144fb1844f8a5d0f4b4dfc42842636b5aebf89a\"" May 8 05:45:17.818999 systemd[1]: Started cri-containerd-cc9ec08235d773733d534a238144fb1844f8a5d0f4b4dfc42842636b5aebf89a.scope - libcontainer container cc9ec08235d773733d534a238144fb1844f8a5d0f4b4dfc42842636b5aebf89a. 
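Note the contrast in pull times: the first apiserver pull read ~43 MB and took 4.69s, while the two later PullImage calls for the same tag finish in 541ms and 534ms with only 77 bytes read, because every blob is already in the content store and the pull reduces to re-resolving the tag to its digest. A sketch of the same operation through containerd's Go client (v1 import paths; the k8s.io namespace and socket path are the conventional CRI defaults, assumed here):

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	// When all blobs are already present, this mostly just resolves the
    	// tag to a digest, which is why the repeat pulls above take ~540ms.
    	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.29.3",
    		containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(img.Name(), img.Target().Digest)
    }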
May 8 05:45:17.866874 containerd[1459]: time="2025-05-08T05:45:17.866700981Z" level=info msg="StartContainer for \"cc9ec08235d773733d534a238144fb1844f8a5d0f4b4dfc42842636b5aebf89a\" returns successfully" May 8 05:45:18.346263 kubelet[2596]: I0508 05:45:18.346202 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67fd4c9f8d-ncnc7" podStartSLOduration=33.633547551 podStartE2EDuration="38.346185284s" podCreationTimestamp="2025-05-08 05:44:40 +0000 UTC" firstStartedPulling="2025-05-08 05:45:09.409381991 +0000 UTC m=+41.685983204" lastFinishedPulling="2025-05-08 05:45:14.122019674 +0000 UTC m=+46.398620937" observedRunningTime="2025-05-08 05:45:15.262140598 +0000 UTC m=+47.538741811" watchObservedRunningTime="2025-05-08 05:45:18.346185284 +0000 UTC m=+50.622786497" May 8 05:45:18.373664 kubelet[2596]: I0508 05:45:18.373601 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-58bf46f646-r4ddh" podStartSLOduration=30.129835738 podStartE2EDuration="37.373583438s" podCreationTimestamp="2025-05-08 05:44:41 +0000 UTC" firstStartedPulling="2025-05-08 05:45:10.487792664 +0000 UTC m=+42.764393887" lastFinishedPulling="2025-05-08 05:45:17.731540364 +0000 UTC m=+50.008141587" observedRunningTime="2025-05-08 05:45:18.347785609 +0000 UTC m=+50.624386832" watchObservedRunningTime="2025-05-08 05:45:18.373583438 +0000 UTC m=+50.650184661" May 8 05:45:18.373818 kubelet[2596]: I0508 05:45:18.373709 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67fd4c9f8d-2swkl" podStartSLOduration=31.6245827 podStartE2EDuration="38.373702612s" podCreationTimestamp="2025-05-08 05:44:40 +0000 UTC" firstStartedPulling="2025-05-08 05:45:10.445416449 +0000 UTC m=+42.722017662" lastFinishedPulling="2025-05-08 05:45:17.194536311 +0000 UTC m=+49.471137574" observedRunningTime="2025-05-08 05:45:18.370209219 +0000 UTC m=+50.646810442" watchObservedRunningTime="2025-05-08 05:45:18.373702612 +0000 UTC m=+50.650303845" May 8 05:45:19.329955 kubelet[2596]: I0508 05:45:19.329631 2596 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 05:45:19.697631 systemd[1]: Created slice kubepods-besteffort-pod07ee0541_9560_4186_b90f_7816cdb767aa.slice - libcontainer container kubepods-besteffort-pod07ee0541_9560_4186_b90f_7816cdb767aa.slice. 
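The pod_startup_latency_tracker entries above are internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (kubelet uses the monotonic readings, the m=+ offsets, so plain wall-clock arithmetic lands within ~50ns of the logged 33.633547551s). A small Go check using the values logged for calico-apiserver-67fd4c9f8d-ncnc7:

    package main

    import (
    	"fmt"
    	"time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
    	t, err := time.Parse(layout, s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	// Timestamps copied from the kubelet entry above.
    	created := mustParse("2025-05-08 05:44:40 +0000 UTC")
    	running := mustParse("2025-05-08 05:45:18.346185284 +0000 UTC")
    	pullStart := mustParse("2025-05-08 05:45:09.409381991 +0000 UTC")
    	pullEnd := mustParse("2025-05-08 05:45:14.122019674 +0000 UTC")

    	e2e := running.Sub(created)         // podStartE2EDuration: 38.346185284s
    	slo := e2e - pullEnd.Sub(pullStart) // podStartSLOduration: ~33.6335476s
    	fmt.Println(e2e, slo)
    }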
May 8 05:45:19.789416 kubelet[2596]: I0508 05:45:19.789305 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/07ee0541-9560-4186-b90f-7816cdb767aa-calico-apiserver-certs\") pod \"calico-apiserver-58bf46f646-942qf\" (UID: \"07ee0541-9560-4186-b90f-7816cdb767aa\") " pod="calico-apiserver/calico-apiserver-58bf46f646-942qf" May 8 05:45:19.789416 kubelet[2596]: I0508 05:45:19.789346 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwwln\" (UniqueName: \"kubernetes.io/projected/07ee0541-9560-4186-b90f-7816cdb767aa-kube-api-access-fwwln\") pod \"calico-apiserver-58bf46f646-942qf\" (UID: \"07ee0541-9560-4186-b90f-7816cdb767aa\") " pod="calico-apiserver/calico-apiserver-58bf46f646-942qf" May 8 05:45:20.004679 containerd[1459]: time="2025-05-08T05:45:20.004511328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58bf46f646-942qf,Uid:07ee0541-9560-4186-b90f-7816cdb767aa,Namespace:calico-apiserver,Attempt:0,}" May 8 05:45:20.170424 systemd-networkd[1373]: calie3d074a512c: Link UP May 8 05:45:20.171676 systemd-networkd[1373]: calie3d074a512c: Gained carrier May 8 05:45:20.192393 containerd[1459]: 2025-05-08 05:45:20.079 [INFO][5041] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--942qf-eth0 calico-apiserver-58bf46f646- calico-apiserver 07ee0541-9560-4186-b90f-7816cdb767aa 928 0 2025-05-08 05:45:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58bf46f646 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-n-fbb7d486d2.novalocal calico-apiserver-58bf46f646-942qf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie3d074a512c [] []}} ContainerID="950674465b874be0a82bf38dd85ba3d55f7e5ca5544cb08033ce9e0110bde980" Namespace="calico-apiserver" Pod="calico-apiserver-58bf46f646-942qf" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--942qf-" May 8 05:45:20.192393 containerd[1459]: 2025-05-08 05:45:20.079 [INFO][5041] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="950674465b874be0a82bf38dd85ba3d55f7e5ca5544cb08033ce9e0110bde980" Namespace="calico-apiserver" Pod="calico-apiserver-58bf46f646-942qf" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--942qf-eth0" May 8 05:45:20.192393 containerd[1459]: 2025-05-08 05:45:20.111 [INFO][5053] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="950674465b874be0a82bf38dd85ba3d55f7e5ca5544cb08033ce9e0110bde980" HandleID="k8s-pod-network.950674465b874be0a82bf38dd85ba3d55f7e5ca5544cb08033ce9e0110bde980" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--942qf-eth0" May 8 05:45:20.192393 containerd[1459]: 2025-05-08 05:45:20.125 [INFO][5053] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="950674465b874be0a82bf38dd85ba3d55f7e5ca5544cb08033ce9e0110bde980" HandleID="k8s-pod-network.950674465b874be0a82bf38dd85ba3d55f7e5ca5544cb08033ce9e0110bde980" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--942qf-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003159b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-n-fbb7d486d2.novalocal", "pod":"calico-apiserver-58bf46f646-942qf", "timestamp":"2025-05-08 05:45:20.111634273 +0000 UTC"}, Hostname:"ci-4081-3-3-n-fbb7d486d2.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 05:45:20.192393 containerd[1459]: 2025-05-08 05:45:20.125 [INFO][5053] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:20.192393 containerd[1459]: 2025-05-08 05:45:20.125 [INFO][5053] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:45:20.192393 containerd[1459]: 2025-05-08 05:45:20.125 [INFO][5053] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-fbb7d486d2.novalocal' May 8 05:45:20.192393 containerd[1459]: 2025-05-08 05:45:20.127 [INFO][5053] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.950674465b874be0a82bf38dd85ba3d55f7e5ca5544cb08033ce9e0110bde980" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:20.192393 containerd[1459]: 2025-05-08 05:45:20.133 [INFO][5053] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:20.192393 containerd[1459]: 2025-05-08 05:45:20.138 [INFO][5053] ipam/ipam.go 489: Trying affinity for 192.168.47.128/26 host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:20.192393 containerd[1459]: 2025-05-08 05:45:20.140 [INFO][5053] ipam/ipam.go 155: Attempting to load block cidr=192.168.47.128/26 host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:20.192393 containerd[1459]: 2025-05-08 05:45:20.143 [INFO][5053] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:20.192393 containerd[1459]: 2025-05-08 05:45:20.143 [INFO][5053] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.950674465b874be0a82bf38dd85ba3d55f7e5ca5544cb08033ce9e0110bde980" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:20.192393 containerd[1459]: 2025-05-08 05:45:20.145 [INFO][5053] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.950674465b874be0a82bf38dd85ba3d55f7e5ca5544cb08033ce9e0110bde980 May 8 05:45:20.192393 containerd[1459]: 2025-05-08 05:45:20.151 [INFO][5053] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.950674465b874be0a82bf38dd85ba3d55f7e5ca5544cb08033ce9e0110bde980" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:20.192393 containerd[1459]: 2025-05-08 05:45:20.163 [INFO][5053] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.47.136/26] block=192.168.47.128/26 handle="k8s-pod-network.950674465b874be0a82bf38dd85ba3d55f7e5ca5544cb08033ce9e0110bde980" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:20.192393 containerd[1459]: 2025-05-08 05:45:20.163 [INFO][5053] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.136/26] handle="k8s-pod-network.950674465b874be0a82bf38dd85ba3d55f7e5ca5544cb08033ce9e0110bde980" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:20.192393 containerd[1459]: 2025-05-08 05:45:20.163 [INFO][5053] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 8 05:45:20.192393 containerd[1459]: 2025-05-08 05:45:20.163 [INFO][5053] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.136/26] IPv6=[] ContainerID="950674465b874be0a82bf38dd85ba3d55f7e5ca5544cb08033ce9e0110bde980" HandleID="k8s-pod-network.950674465b874be0a82bf38dd85ba3d55f7e5ca5544cb08033ce9e0110bde980" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--942qf-eth0" May 8 05:45:20.193290 containerd[1459]: 2025-05-08 05:45:20.166 [INFO][5041] cni-plugin/k8s.go 386: Populated endpoint ContainerID="950674465b874be0a82bf38dd85ba3d55f7e5ca5544cb08033ce9e0110bde980" Namespace="calico-apiserver" Pod="calico-apiserver-58bf46f646-942qf" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--942qf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--942qf-eth0", GenerateName:"calico-apiserver-58bf46f646-", Namespace:"calico-apiserver", SelfLink:"", UID:"07ee0541-9560-4186-b90f-7816cdb767aa", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 45, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58bf46f646", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"", Pod:"calico-apiserver-58bf46f646-942qf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie3d074a512c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:20.193290 containerd[1459]: 2025-05-08 05:45:20.166 [INFO][5041] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.47.136/32] ContainerID="950674465b874be0a82bf38dd85ba3d55f7e5ca5544cb08033ce9e0110bde980" Namespace="calico-apiserver" Pod="calico-apiserver-58bf46f646-942qf" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--942qf-eth0" May 8 05:45:20.193290 containerd[1459]: 2025-05-08 05:45:20.166 [INFO][5041] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie3d074a512c ContainerID="950674465b874be0a82bf38dd85ba3d55f7e5ca5544cb08033ce9e0110bde980" Namespace="calico-apiserver" Pod="calico-apiserver-58bf46f646-942qf" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--942qf-eth0" May 8 05:45:20.193290 containerd[1459]: 2025-05-08 05:45:20.171 [INFO][5041] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="950674465b874be0a82bf38dd85ba3d55f7e5ca5544cb08033ce9e0110bde980" Namespace="calico-apiserver" Pod="calico-apiserver-58bf46f646-942qf" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--942qf-eth0" May 8 05:45:20.193290 
containerd[1459]: 2025-05-08 05:45:20.172 [INFO][5041] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="950674465b874be0a82bf38dd85ba3d55f7e5ca5544cb08033ce9e0110bde980" Namespace="calico-apiserver" Pod="calico-apiserver-58bf46f646-942qf" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--942qf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--942qf-eth0", GenerateName:"calico-apiserver-58bf46f646-", Namespace:"calico-apiserver", SelfLink:"", UID:"07ee0541-9560-4186-b90f-7816cdb767aa", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 45, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58bf46f646", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"950674465b874be0a82bf38dd85ba3d55f7e5ca5544cb08033ce9e0110bde980", Pod:"calico-apiserver-58bf46f646-942qf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie3d074a512c", MAC:"6a:75:e7:97:f3:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:20.193290 containerd[1459]: 2025-05-08 05:45:20.188 [INFO][5041] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="950674465b874be0a82bf38dd85ba3d55f7e5ca5544cb08033ce9e0110bde980" Namespace="calico-apiserver" Pod="calico-apiserver-58bf46f646-942qf" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--942qf-eth0" May 8 05:45:20.228608 containerd[1459]: time="2025-05-08T05:45:20.228404494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 05:45:20.228735 containerd[1459]: time="2025-05-08T05:45:20.228688349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 05:45:20.228979 containerd[1459]: time="2025-05-08T05:45:20.228741413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:45:20.229399 containerd[1459]: time="2025-05-08T05:45:20.229351555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:45:20.254582 systemd[1]: Started cri-containerd-950674465b874be0a82bf38dd85ba3d55f7e5ca5544cb08033ce9e0110bde980.scope - libcontainer container 950674465b874be0a82bf38dd85ba3d55f7e5ca5544cb08033ce9e0110bde980. 
May 8 05:45:20.302271 containerd[1459]: time="2025-05-08T05:45:20.302153102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58bf46f646-942qf,Uid:07ee0541-9560-4186-b90f-7816cdb767aa,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"950674465b874be0a82bf38dd85ba3d55f7e5ca5544cb08033ce9e0110bde980\"" May 8 05:45:20.304673 containerd[1459]: time="2025-05-08T05:45:20.304548593Z" level=info msg="CreateContainer within sandbox \"950674465b874be0a82bf38dd85ba3d55f7e5ca5544cb08033ce9e0110bde980\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 05:45:20.337058 kubelet[2596]: I0508 05:45:20.336772 2596 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 05:45:20.338799 containerd[1459]: time="2025-05-08T05:45:20.337646271Z" level=info msg="StopContainer for \"05da272105c9a96eec99252405a87138ba376b41b261a53c7c4c1d530648642f\" with timeout 30 (s)" May 8 05:45:20.338799 containerd[1459]: time="2025-05-08T05:45:20.338103735Z" level=info msg="Stop container \"05da272105c9a96eec99252405a87138ba376b41b261a53c7c4c1d530648642f\" with signal terminated" May 8 05:45:20.343187 containerd[1459]: time="2025-05-08T05:45:20.343122364Z" level=info msg="CreateContainer within sandbox \"950674465b874be0a82bf38dd85ba3d55f7e5ca5544cb08033ce9e0110bde980\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ba5c6417a30760837943490d4c2dc9047627cc6cf0d687a1adcf11cd2ab884cf\"" May 8 05:45:20.344898 containerd[1459]: time="2025-05-08T05:45:20.344785845Z" level=info msg="StartContainer for \"ba5c6417a30760837943490d4c2dc9047627cc6cf0d687a1adcf11cd2ab884cf\"" May 8 05:45:20.365797 systemd[1]: cri-containerd-05da272105c9a96eec99252405a87138ba376b41b261a53c7c4c1d530648642f.scope: Deactivated successfully. May 8 05:45:20.379971 systemd[1]: Started cri-containerd-ba5c6417a30760837943490d4c2dc9047627cc6cf0d687a1adcf11cd2ab884cf.scope - libcontainer container ba5c6417a30760837943490d4c2dc9047627cc6cf0d687a1adcf11cd2ab884cf. 
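The StopContainer call above gives the old apiserver container a 30-second grace period and delivers SIGTERM ("with signal terminated"); the shim-disconnect entries that follow show the task exiting well before the deadline. A minimal sketch of that stop sequence via containerd's Go client (v1 import paths assumed; this is an illustration, not the kubelet's actual CRI code path):

    package main

    import (
    	"context"
    	"log"
    	"syscall"
    	"time"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	id := "05da272105c9a96eec99252405a87138ba376b41b261a53c7c4c1d530648642f"
    	container, err := client.LoadContainer(ctx, id)
    	if err != nil {
    		log.Fatal(err)
    	}
    	task, err := container.Task(ctx, nil)
    	if err != nil {
    		log.Fatal(err)
    	}
    	exitCh, err := task.Wait(ctx)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// "Stop container ... with signal terminated"
    	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
    		log.Fatal(err)
    	}
    	select {
    	case status := <-exitCh:
    		log.Printf("exited with code %d", status.ExitCode())
    	case <-time.After(30 * time.Second): // the "timeout 30 (s)" grace period
    		_ = task.Kill(ctx, syscall.SIGKILL) // escalate only on expiry
    		<-exitCh
    	}
    }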
May 8 05:45:20.564400 containerd[1459]: time="2025-05-08T05:45:20.564266117Z" level=info msg="StartContainer for \"ba5c6417a30760837943490d4c2dc9047627cc6cf0d687a1adcf11cd2ab884cf\" returns successfully" May 8 05:45:20.573047 containerd[1459]: time="2025-05-08T05:45:20.572383302Z" level=info msg="shim disconnected" id=05da272105c9a96eec99252405a87138ba376b41b261a53c7c4c1d530648642f namespace=k8s.io May 8 05:45:20.573047 containerd[1459]: time="2025-05-08T05:45:20.572451765Z" level=warning msg="cleaning up after shim disconnected" id=05da272105c9a96eec99252405a87138ba376b41b261a53c7c4c1d530648642f namespace=k8s.io May 8 05:45:20.573047 containerd[1459]: time="2025-05-08T05:45:20.572462897Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 05:45:20.647366 containerd[1459]: time="2025-05-08T05:45:20.647050936Z" level=warning msg="cleanup warnings time=\"2025-05-08T05:45:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 8 05:45:20.660704 containerd[1459]: time="2025-05-08T05:45:20.659793319Z" level=info msg="StopContainer for \"05da272105c9a96eec99252405a87138ba376b41b261a53c7c4c1d530648642f\" returns successfully" May 8 05:45:20.660989 containerd[1459]: time="2025-05-08T05:45:20.660798714Z" level=info msg="StopPodSandbox for \"da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2\"" May 8 05:45:20.660989 containerd[1459]: time="2025-05-08T05:45:20.660831780Z" level=info msg="Container to stop \"05da272105c9a96eec99252405a87138ba376b41b261a53c7c4c1d530648642f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 05:45:20.683210 systemd[1]: cri-containerd-da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2.scope: Deactivated successfully. May 8 05:45:20.723506 containerd[1459]: time="2025-05-08T05:45:20.721887900Z" level=info msg="shim disconnected" id=da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2 namespace=k8s.io May 8 05:45:20.723506 containerd[1459]: time="2025-05-08T05:45:20.723502285Z" level=warning msg="cleaning up after shim disconnected" id=da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2 namespace=k8s.io May 8 05:45:20.724526 containerd[1459]: time="2025-05-08T05:45:20.723514799Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 05:45:20.887836 systemd-networkd[1373]: cali128c03178f2: Link DOWN May 8 05:45:20.887844 systemd-networkd[1373]: cali128c03178f2: Lost carrier May 8 05:45:20.907727 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05da272105c9a96eec99252405a87138ba376b41b261a53c7c4c1d530648642f-rootfs.mount: Deactivated successfully. May 8 05:45:20.907824 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2-rootfs.mount: Deactivated successfully. May 8 05:45:20.907892 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2-shm.mount: Deactivated successfully. May 8 05:45:21.011483 containerd[1459]: 2025-05-08 05:45:20.885 [INFO][5236] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" May 8 05:45:21.011483 containerd[1459]: 2025-05-08 05:45:20.885 [INFO][5236] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" iface="eth0" netns="/var/run/netns/cni-2eb53404-d0af-35d8-96fd-017474dd5edd" May 8 05:45:21.011483 containerd[1459]: 2025-05-08 05:45:20.886 [INFO][5236] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" iface="eth0" netns="/var/run/netns/cni-2eb53404-d0af-35d8-96fd-017474dd5edd" May 8 05:45:21.011483 containerd[1459]: 2025-05-08 05:45:20.901 [INFO][5236] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" after=15.102396ms iface="eth0" netns="/var/run/netns/cni-2eb53404-d0af-35d8-96fd-017474dd5edd" May 8 05:45:21.011483 containerd[1459]: 2025-05-08 05:45:20.901 [INFO][5236] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" May 8 05:45:21.011483 containerd[1459]: 2025-05-08 05:45:20.901 [INFO][5236] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" May 8 05:45:21.011483 containerd[1459]: 2025-05-08 05:45:20.947 [INFO][5243] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" HandleID="k8s-pod-network.da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:21.011483 containerd[1459]: 2025-05-08 05:45:20.947 [INFO][5243] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:21.011483 containerd[1459]: 2025-05-08 05:45:20.947 [INFO][5243] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:45:21.011483 containerd[1459]: 2025-05-08 05:45:21.003 [INFO][5243] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" HandleID="k8s-pod-network.da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:21.011483 containerd[1459]: 2025-05-08 05:45:21.003 [INFO][5243] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" HandleID="k8s-pod-network.da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:21.011483 containerd[1459]: 2025-05-08 05:45:21.005 [INFO][5243] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:45:21.011483 containerd[1459]: 2025-05-08 05:45:21.007 [INFO][5236] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" May 8 05:45:21.019461 containerd[1459]: time="2025-05-08T05:45:21.016602633Z" level=info msg="TearDown network for sandbox \"da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2\" successfully" May 8 05:45:21.019461 containerd[1459]: time="2025-05-08T05:45:21.016631068Z" level=info msg="StopPodSandbox for \"da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2\" returns successfully" May 8 05:45:21.018981 systemd[1]: run-netns-cni\x2d2eb53404\x2dd0af\x2d35d8\x2d96fd\x2d017474dd5edd.mount: Deactivated successfully. 
May 8 05:45:21.020409 containerd[1459]: time="2025-05-08T05:45:21.019776469Z" level=info msg="StopPodSandbox for \"9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be\"" May 8 05:45:21.208524 containerd[1459]: 2025-05-08 05:45:21.117 [WARNING][5266] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0", GenerateName:"calico-apiserver-67fd4c9f8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2eb94ac1-e498-4e48-a950-45810cc88780", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 44, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67fd4c9f8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2", Pod:"calico-apiserver-67fd4c9f8d-2swkl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali128c03178f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:21.208524 containerd[1459]: 2025-05-08 05:45:21.117 [INFO][5266] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" May 8 05:45:21.208524 containerd[1459]: 2025-05-08 05:45:21.117 [INFO][5266] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" iface="eth0" netns="" May 8 05:45:21.208524 containerd[1459]: 2025-05-08 05:45:21.117 [INFO][5266] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" May 8 05:45:21.208524 containerd[1459]: 2025-05-08 05:45:21.117 [INFO][5266] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" May 8 05:45:21.208524 containerd[1459]: 2025-05-08 05:45:21.189 [INFO][5273] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" HandleID="k8s-pod-network.9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:21.208524 containerd[1459]: 2025-05-08 05:45:21.189 [INFO][5273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 8 05:45:21.208524 containerd[1459]: 2025-05-08 05:45:21.190 [INFO][5273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:45:21.208524 containerd[1459]: 2025-05-08 05:45:21.201 [WARNING][5273] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" HandleID="k8s-pod-network.9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:21.208524 containerd[1459]: 2025-05-08 05:45:21.201 [INFO][5273] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" HandleID="k8s-pod-network.9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:21.208524 containerd[1459]: 2025-05-08 05:45:21.204 [INFO][5273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:45:21.208524 containerd[1459]: 2025-05-08 05:45:21.206 [INFO][5266] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" May 8 05:45:21.210462 containerd[1459]: time="2025-05-08T05:45:21.209255072Z" level=info msg="TearDown network for sandbox \"9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be\" successfully" May 8 05:45:21.210462 containerd[1459]: time="2025-05-08T05:45:21.209281924Z" level=info msg="StopPodSandbox for \"9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be\" returns successfully" May 8 05:45:21.341722 kubelet[2596]: I0508 05:45:21.341696 2596 scope.go:117] "RemoveContainer" containerID="05da272105c9a96eec99252405a87138ba376b41b261a53c7c4c1d530648642f" May 8 05:45:21.349484 containerd[1459]: time="2025-05-08T05:45:21.348364272Z" level=info msg="RemoveContainer for \"05da272105c9a96eec99252405a87138ba376b41b261a53c7c4c1d530648642f\"" May 8 05:45:21.355244 containerd[1459]: time="2025-05-08T05:45:21.355207542Z" level=info msg="RemoveContainer for \"05da272105c9a96eec99252405a87138ba376b41b261a53c7c4c1d530648642f\" returns successfully" May 8 05:45:21.355763 kubelet[2596]: I0508 05:45:21.355744 2596 scope.go:117] "RemoveContainer" containerID="05da272105c9a96eec99252405a87138ba376b41b261a53c7c4c1d530648642f" May 8 05:45:21.356563 containerd[1459]: time="2025-05-08T05:45:21.356477101Z" level=error msg="ContainerStatus for \"05da272105c9a96eec99252405a87138ba376b41b261a53c7c4c1d530648642f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"05da272105c9a96eec99252405a87138ba376b41b261a53c7c4c1d530648642f\": not found" May 8 05:45:21.356825 kubelet[2596]: E0508 05:45:21.356705 2596 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"05da272105c9a96eec99252405a87138ba376b41b261a53c7c4c1d530648642f\": not found" containerID="05da272105c9a96eec99252405a87138ba376b41b261a53c7c4c1d530648642f" May 8 05:45:21.359002 kubelet[2596]: I0508 05:45:21.358971 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"05da272105c9a96eec99252405a87138ba376b41b261a53c7c4c1d530648642f"} err="failed to get container status \"05da272105c9a96eec99252405a87138ba376b41b261a53c7c4c1d530648642f\": rpc error: code = NotFound desc = an 
error occurred when try to find container \"05da272105c9a96eec99252405a87138ba376b41b261a53c7c4c1d530648642f\": not found" May 8 05:45:21.400879 kubelet[2596]: I0508 05:45:21.400315 2596 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2eb94ac1-e498-4e48-a950-45810cc88780-calico-apiserver-certs\") pod \"2eb94ac1-e498-4e48-a950-45810cc88780\" (UID: \"2eb94ac1-e498-4e48-a950-45810cc88780\") " May 8 05:45:21.400879 kubelet[2596]: I0508 05:45:21.400362 2596 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6w9ht\" (UniqueName: \"kubernetes.io/projected/2eb94ac1-e498-4e48-a950-45810cc88780-kube-api-access-6w9ht\") pod \"2eb94ac1-e498-4e48-a950-45810cc88780\" (UID: \"2eb94ac1-e498-4e48-a950-45810cc88780\") " May 8 05:45:21.409126 systemd[1]: var-lib-kubelet-pods-2eb94ac1\x2de498\x2d4e48\x2da950\x2d45810cc88780-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6w9ht.mount: Deactivated successfully. May 8 05:45:21.409831 kubelet[2596]: I0508 05:45:21.409533 2596 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eb94ac1-e498-4e48-a950-45810cc88780-kube-api-access-6w9ht" (OuterVolumeSpecName: "kube-api-access-6w9ht") pod "2eb94ac1-e498-4e48-a950-45810cc88780" (UID: "2eb94ac1-e498-4e48-a950-45810cc88780"). InnerVolumeSpecName "kube-api-access-6w9ht". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 05:45:21.417135 kubelet[2596]: I0508 05:45:21.417095 2596 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eb94ac1-e498-4e48-a950-45810cc88780-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "2eb94ac1-e498-4e48-a950-45810cc88780" (UID: "2eb94ac1-e498-4e48-a950-45810cc88780"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 05:45:21.419021 systemd[1]: var-lib-kubelet-pods-2eb94ac1\x2de498\x2d4e48\x2da950\x2d45810cc88780-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. May 8 05:45:21.502054 kubelet[2596]: I0508 05:45:21.501156 2596 reconciler_common.go:288] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2eb94ac1-e498-4e48-a950-45810cc88780-calico-apiserver-certs\") on node \"ci-4081-3-3-n-fbb7d486d2.novalocal\" DevicePath \"\"" May 8 05:45:21.502054 kubelet[2596]: I0508 05:45:21.501188 2596 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6w9ht\" (UniqueName: \"kubernetes.io/projected/2eb94ac1-e498-4e48-a950-45810cc88780-kube-api-access-6w9ht\") on node \"ci-4081-3-3-n-fbb7d486d2.novalocal\" DevicePath \"\"" May 8 05:45:21.541632 systemd-networkd[1373]: calie3d074a512c: Gained IPv6LL May 8 05:45:21.651968 systemd[1]: Removed slice kubepods-besteffort-pod2eb94ac1_e498_4e48_a950_45810cc88780.slice - libcontainer container kubepods-besteffort-pod2eb94ac1_e498_4e48_a950_45810cc88780.slice. 
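The second StopPodSandbox (9efa1582…) exercises the idempotent cleanup paths: the WorkloadEndpoint now records the newer sandbox's ContainerID (da10b021…), so the stale DEL refuses to delete the WEP; the IPAM handle is already gone, so the release is ignored; and kubelet treats the NotFound from ContainerStatus as already-removed. A hypothetical guard showing the shape of the first check (types invented for the sketch; the real comparison lives in Calico's cni-plugin/k8s.go):

    package main

    import "fmt"

    // Invented type; only the shape of the comparison matters here.
    type workloadEndpoint struct {
    	ContainerID string // sandbox that currently owns the endpoint
    }

    // shouldDeleteWEP mirrors the logged rule: a DEL from an older sandbox
    // must not remove an endpoint now owned by a newer container.
    func shouldDeleteWEP(wep workloadEndpoint, cniContainerID string) bool {
    	return wep.ContainerID == cniContainerID
    }

    func main() {
    	wep := workloadEndpoint{ContainerID: "da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2"}
    	// false: keep the WEP, exactly as the WARNING above decides
    	fmt.Println(shouldDeleteWEP(wep, "9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be"))
    }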
May 8 05:45:21.757262 kubelet[2596]: I0508 05:45:21.756533 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-58bf46f646-942qf" podStartSLOduration=2.756514605 podStartE2EDuration="2.756514605s" podCreationTimestamp="2025-05-08 05:45:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 05:45:21.362514449 +0000 UTC m=+53.639115673" watchObservedRunningTime="2025-05-08 05:45:21.756514605 +0000 UTC m=+54.033115818"
May 8 05:45:21.855820 containerd[1459]: time="2025-05-08T05:45:21.855702261Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 05:45:21.860795 containerd[1459]: time="2025-05-08T05:45:21.860448999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138"
May 8 05:45:21.869481 containerd[1459]: time="2025-05-08T05:45:21.869226546Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 05:45:21.877019 containerd[1459]: time="2025-05-08T05:45:21.876803108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 05:45:21.877483 containerd[1459]: time="2025-05-08T05:45:21.877389954Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 4.140765792s"
May 8 05:45:21.877483 containerd[1459]: time="2025-05-08T05:45:21.877433159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\""
May 8 05:45:21.881726 containerd[1459]: time="2025-05-08T05:45:21.881430513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\""
May 8 05:45:21.897183 containerd[1459]: time="2025-05-08T05:45:21.895707257Z" level=info msg="CreateContainer within sandbox \"8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
May 8 05:45:21.921951 kubelet[2596]: I0508 05:45:21.921909 2596 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eb94ac1-e498-4e48-a950-45810cc88780" path="/var/lib/kubelet/pods/2eb94ac1-e498-4e48-a950-45810cc88780/volumes"
May 8 05:45:21.929538 containerd[1459]: time="2025-05-08T05:45:21.929489776Z" level=info msg="CreateContainer within sandbox \"8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f4ef1f86002e7307039f3c3cb301949d00080299eb00ea7062ecf09b21da2bdb\""
May 8 05:45:21.930010 containerd[1459]: time="2025-05-08T05:45:21.929982648Z" level=info msg="StartContainer for \"f4ef1f86002e7307039f3c3cb301949d00080299eb00ea7062ecf09b21da2bdb\""
May 8 05:45:21.967587 systemd[1]: Started cri-containerd-f4ef1f86002e7307039f3c3cb301949d00080299eb00ea7062ecf09b21da2bdb.scope - libcontainer container f4ef1f86002e7307039f3c3cb301949d00080299eb00ea7062ecf09b21da2bdb.
May 8 05:45:22.029134 containerd[1459]: time="2025-05-08T05:45:22.028992332Z" level=info msg="StartContainer for \"f4ef1f86002e7307039f3c3cb301949d00080299eb00ea7062ecf09b21da2bdb\" returns successfully"
May 8 05:45:22.354085 kubelet[2596]: I0508 05:45:22.353910 2596 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 8 05:45:22.377756 kubelet[2596]: I0508 05:45:22.374965 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-787966c4fb-2244q" podStartSLOduration=32.202914465 podStartE2EDuration="42.374947927s" podCreationTimestamp="2025-05-08 05:44:40 +0000 UTC" firstStartedPulling="2025-05-08 05:45:11.708251504 +0000 UTC m=+43.984852717" lastFinishedPulling="2025-05-08 05:45:21.880284956 +0000 UTC m=+54.156886179" observedRunningTime="2025-05-08 05:45:22.373734731 +0000 UTC m=+54.650335954" watchObservedRunningTime="2025-05-08 05:45:22.374947927 +0000 UTC m=+54.651549150"
May 8 05:45:24.317370 containerd[1459]: time="2025-05-08T05:45:24.317167089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 05:45:24.319719 containerd[1459]: time="2025-05-08T05:45:24.319620556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773"
May 8 05:45:24.320491 containerd[1459]: time="2025-05-08T05:45:24.320221495Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 05:45:24.323712 containerd[1459]: time="2025-05-08T05:45:24.323658817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 05:45:24.325680 containerd[1459]: time="2025-05-08T05:45:24.325641607Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.44416316s"
May 8 05:45:24.325825 containerd[1459]: time="2025-05-08T05:45:24.325759437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\""
May 8 05:45:24.329555 containerd[1459]: time="2025-05-08T05:45:24.329418189Z" level=info msg="CreateContainer within sandbox \"9961621c6875786de0dc4cfc79c25dcdaff3abdb614d49cb10eb26aa8aa87121\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
May 8 05:45:24.361174 containerd[1459]: time="2025-05-08T05:45:24.361065187Z" level=info msg="CreateContainer within sandbox \"9961621c6875786de0dc4cfc79c25dcdaff3abdb614d49cb10eb26aa8aa87121\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"cbbc560599e3eee6592fbcbd1fdcb8b8e0f1e0f31ce6025885470e37d1c3e121\""
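The pod_startup_latency_tracker entries above expose the arithmetic behind podStartSLOduration: it is the end-to-end duration (watchObservedRunningTime minus podCreationTimestamp) with the image-pull window (firstStartedPulling through lastFinishedPulling) subtracted; when no pull was needed, as for calico-apiserver-58bf46f646-942qf with its zero pull timestamps, the two durations coincide. A minimal sketch reproducing the calico-kube-controllers numbers, with timestamps copied from the entry above (kubelet subtracts monotonic-clock readings, so the last couple of digits drift from this wall-clock version):

// latency.go — rederive podStartE2EDuration and podStartSLOduration
// from the logged calico-kube-controllers-787966c4fb-2244q entry.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-05-08 05:44:40 +0000 UTC")
	firstPull := mustParse("2025-05-08 05:45:11.708251504 +0000 UTC")
	lastPull := mustParse("2025-05-08 05:45:21.880284956 +0000 UTC")
	running := mustParse("2025-05-08 05:45:22.374947927 +0000 UTC") // watchObservedRunningTime

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // image-pull window excluded

	fmt.Println(e2e) // 42.374947927s, exactly as logged
	fmt.Println(slo) // 32.202914475s vs logged 32.202914465 (monotonic skew)
}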
May 8 05:45:24.361658 containerd[1459]: time="2025-05-08T05:45:24.361565782Z" level=info msg="StartContainer for \"cbbc560599e3eee6592fbcbd1fdcb8b8e0f1e0f31ce6025885470e37d1c3e121\""
May 8 05:45:24.407603 systemd[1]: Started cri-containerd-cbbc560599e3eee6592fbcbd1fdcb8b8e0f1e0f31ce6025885470e37d1c3e121.scope - libcontainer container cbbc560599e3eee6592fbcbd1fdcb8b8e0f1e0f31ce6025885470e37d1c3e121.
May 8 05:45:24.452484 containerd[1459]: time="2025-05-08T05:45:24.451981457Z" level=info msg="StartContainer for \"cbbc560599e3eee6592fbcbd1fdcb8b8e0f1e0f31ce6025885470e37d1c3e121\" returns successfully"
May 8 05:45:25.022839 kubelet[2596]: I0508 05:45:25.022766 2596 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
May 8 05:45:25.022839 kubelet[2596]: I0508 05:45:25.022836 2596 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
May 8 05:45:25.403524 kubelet[2596]: I0508 05:45:25.403351 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dxgvc" podStartSLOduration=30.808922917 podStartE2EDuration="45.403317722s" podCreationTimestamp="2025-05-08 05:44:40 +0000 UTC" firstStartedPulling="2025-05-08 05:45:09.732369738 +0000 UTC m=+42.008970951" lastFinishedPulling="2025-05-08 05:45:24.326764493 +0000 UTC m=+56.603365756" observedRunningTime="2025-05-08 05:45:25.39825721 +0000 UTC m=+57.674858483" watchObservedRunningTime="2025-05-08 05:45:25.403317722 +0000 UTC m=+57.679918985"
May 8 05:45:27.921985 containerd[1459]: time="2025-05-08T05:45:27.921353555Z" level=info msg="StopPodSandbox for \"f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781\""
May 8 05:45:28.041872 containerd[1459]: 2025-05-08 05:45:27.992 [WARNING][5403] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0", GenerateName:"calico-kube-controllers-787966c4fb-", Namespace:"calico-system", SelfLink:"", UID:"d3c40b71-a013-43d8-b8d8-e3eec48008e2", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 44, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"787966c4fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718", Pod:"calico-kube-controllers-787966c4fb-2244q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4ae2dbafac9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:28.041872 containerd[1459]: 2025-05-08 05:45:27.992 [INFO][5403] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" May 8 05:45:28.041872 containerd[1459]: 2025-05-08 05:45:27.992 [INFO][5403] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" iface="eth0" netns="" May 8 05:45:28.041872 containerd[1459]: 2025-05-08 05:45:27.992 [INFO][5403] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" May 8 05:45:28.041872 containerd[1459]: 2025-05-08 05:45:27.992 [INFO][5403] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" May 8 05:45:28.041872 containerd[1459]: 2025-05-08 05:45:28.029 [INFO][5411] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" HandleID="k8s-pod-network.f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0" May 8 05:45:28.041872 containerd[1459]: 2025-05-08 05:45:28.029 [INFO][5411] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:28.041872 containerd[1459]: 2025-05-08 05:45:28.030 [INFO][5411] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:45:28.041872 containerd[1459]: 2025-05-08 05:45:28.037 [WARNING][5411] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" HandleID="k8s-pod-network.f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0" May 8 05:45:28.041872 containerd[1459]: 2025-05-08 05:45:28.037 [INFO][5411] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" HandleID="k8s-pod-network.f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0" May 8 05:45:28.041872 containerd[1459]: 2025-05-08 05:45:28.039 [INFO][5411] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:45:28.041872 containerd[1459]: 2025-05-08 05:45:28.040 [INFO][5403] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" May 8 05:45:28.041872 containerd[1459]: time="2025-05-08T05:45:28.041842425Z" level=info msg="TearDown network for sandbox \"f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781\" successfully" May 8 05:45:28.041872 containerd[1459]: time="2025-05-08T05:45:28.041866773Z" level=info msg="StopPodSandbox for \"f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781\" returns successfully" May 8 05:45:28.043102 containerd[1459]: time="2025-05-08T05:45:28.042457058Z" level=info msg="RemovePodSandbox for \"f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781\"" May 8 05:45:28.043102 containerd[1459]: time="2025-05-08T05:45:28.042489861Z" level=info msg="Forcibly stopping sandbox \"f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781\"" May 8 05:45:28.120267 containerd[1459]: 2025-05-08 05:45:28.083 [WARNING][5429] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0", GenerateName:"calico-kube-controllers-787966c4fb-", Namespace:"calico-system", SelfLink:"", UID:"d3c40b71-a013-43d8-b8d8-e3eec48008e2", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 44, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"787966c4fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718", Pod:"calico-kube-controllers-787966c4fb-2244q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4ae2dbafac9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:28.120267 containerd[1459]: 2025-05-08 05:45:28.083 [INFO][5429] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" May 8 05:45:28.120267 containerd[1459]: 2025-05-08 05:45:28.083 [INFO][5429] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" iface="eth0" netns="" May 8 05:45:28.120267 containerd[1459]: 2025-05-08 05:45:28.083 [INFO][5429] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" May 8 05:45:28.120267 containerd[1459]: 2025-05-08 05:45:28.083 [INFO][5429] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" May 8 05:45:28.120267 containerd[1459]: 2025-05-08 05:45:28.107 [INFO][5436] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" HandleID="k8s-pod-network.f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0" May 8 05:45:28.120267 containerd[1459]: 2025-05-08 05:45:28.107 [INFO][5436] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:28.120267 containerd[1459]: 2025-05-08 05:45:28.107 [INFO][5436] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:45:28.120267 containerd[1459]: 2025-05-08 05:45:28.115 [WARNING][5436] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" HandleID="k8s-pod-network.f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0" May 8 05:45:28.120267 containerd[1459]: 2025-05-08 05:45:28.115 [INFO][5436] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" HandleID="k8s-pod-network.f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0" May 8 05:45:28.120267 containerd[1459]: 2025-05-08 05:45:28.117 [INFO][5436] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:45:28.120267 containerd[1459]: 2025-05-08 05:45:28.118 [INFO][5429] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781" May 8 05:45:28.120267 containerd[1459]: time="2025-05-08T05:45:28.119643275Z" level=info msg="TearDown network for sandbox \"f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781\" successfully" May 8 05:45:28.125832 containerd[1459]: time="2025-05-08T05:45:28.125745553Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 05:45:28.125832 containerd[1459]: time="2025-05-08T05:45:28.125817312Z" level=info msg="RemovePodSandbox \"f21e17135dfe0f7e753052287b948a7720f43488cd04786f407ec2d13bb3e781\" returns successfully" May 8 05:45:28.126500 containerd[1459]: time="2025-05-08T05:45:28.126403149Z" level=info msg="StopPodSandbox for \"9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be\"" May 8 05:45:28.211832 containerd[1459]: 2025-05-08 05:45:28.173 [WARNING][5454] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:28.211832 containerd[1459]: 2025-05-08 05:45:28.174 [INFO][5454] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" May 8 05:45:28.211832 containerd[1459]: 2025-05-08 05:45:28.174 [INFO][5454] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" iface="eth0" netns="" May 8 05:45:28.211832 containerd[1459]: 2025-05-08 05:45:28.174 [INFO][5454] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" May 8 05:45:28.211832 containerd[1459]: 2025-05-08 05:45:28.174 [INFO][5454] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" May 8 05:45:28.211832 containerd[1459]: 2025-05-08 05:45:28.194 [INFO][5461] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" HandleID="k8s-pod-network.9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:28.211832 containerd[1459]: 2025-05-08 05:45:28.194 [INFO][5461] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:28.211832 containerd[1459]: 2025-05-08 05:45:28.194 [INFO][5461] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:45:28.211832 containerd[1459]: 2025-05-08 05:45:28.203 [WARNING][5461] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" HandleID="k8s-pod-network.9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:28.211832 containerd[1459]: 2025-05-08 05:45:28.204 [INFO][5461] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" HandleID="k8s-pod-network.9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:28.211832 containerd[1459]: 2025-05-08 05:45:28.208 [INFO][5461] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:45:28.211832 containerd[1459]: 2025-05-08 05:45:28.210 [INFO][5454] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" May 8 05:45:28.211832 containerd[1459]: time="2025-05-08T05:45:28.211690612Z" level=info msg="TearDown network for sandbox \"9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be\" successfully" May 8 05:45:28.211832 containerd[1459]: time="2025-05-08T05:45:28.211728665Z" level=info msg="StopPodSandbox for \"9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be\" returns successfully" May 8 05:45:28.213417 containerd[1459]: time="2025-05-08T05:45:28.212365881Z" level=info msg="RemovePodSandbox for \"9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be\"" May 8 05:45:28.213417 containerd[1459]: time="2025-05-08T05:45:28.212414597Z" level=info msg="Forcibly stopping sandbox \"9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be\"" May 8 05:45:28.282826 containerd[1459]: 2025-05-08 05:45:28.251 [WARNING][5480] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:28.282826 containerd[1459]: 2025-05-08 05:45:28.251 [INFO][5480] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" May 8 05:45:28.282826 containerd[1459]: 2025-05-08 05:45:28.251 [INFO][5480] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" iface="eth0" netns="" May 8 05:45:28.282826 containerd[1459]: 2025-05-08 05:45:28.251 [INFO][5480] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" May 8 05:45:28.282826 containerd[1459]: 2025-05-08 05:45:28.251 [INFO][5480] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" May 8 05:45:28.282826 containerd[1459]: 2025-05-08 05:45:28.272 [INFO][5487] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" HandleID="k8s-pod-network.9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:28.282826 containerd[1459]: 2025-05-08 05:45:28.272 [INFO][5487] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:28.282826 containerd[1459]: 2025-05-08 05:45:28.272 [INFO][5487] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:45:28.282826 containerd[1459]: 2025-05-08 05:45:28.278 [WARNING][5487] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" HandleID="k8s-pod-network.9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:28.282826 containerd[1459]: 2025-05-08 05:45:28.279 [INFO][5487] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" HandleID="k8s-pod-network.9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:28.282826 containerd[1459]: 2025-05-08 05:45:28.280 [INFO][5487] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:45:28.282826 containerd[1459]: 2025-05-08 05:45:28.281 [INFO][5480] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be" May 8 05:45:28.283511 containerd[1459]: time="2025-05-08T05:45:28.282829317Z" level=info msg="TearDown network for sandbox \"9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be\" successfully" May 8 05:45:28.293452 containerd[1459]: time="2025-05-08T05:45:28.292091034Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 05:45:28.293452 containerd[1459]: time="2025-05-08T05:45:28.292394704Z" level=info msg="RemovePodSandbox \"9efa1582a7db7b4906e04797c5a435e8efaff196765ad93e4d77dc93c15526be\" returns successfully" May 8 05:45:28.293727 containerd[1459]: time="2025-05-08T05:45:28.293686199Z" level=info msg="StopPodSandbox for \"da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2\"" May 8 05:45:28.399198 containerd[1459]: 2025-05-08 05:45:28.356 [WARNING][5505] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:28.399198 containerd[1459]: 2025-05-08 05:45:28.356 [INFO][5505] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" May 8 05:45:28.399198 containerd[1459]: 2025-05-08 05:45:28.356 [INFO][5505] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" iface="eth0" netns="" May 8 05:45:28.399198 containerd[1459]: 2025-05-08 05:45:28.356 [INFO][5505] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" May 8 05:45:28.399198 containerd[1459]: 2025-05-08 05:45:28.356 [INFO][5505] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" May 8 05:45:28.399198 containerd[1459]: 2025-05-08 05:45:28.387 [INFO][5520] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" HandleID="k8s-pod-network.da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:28.399198 containerd[1459]: 2025-05-08 05:45:28.387 [INFO][5520] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:28.399198 containerd[1459]: 2025-05-08 05:45:28.388 [INFO][5520] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:45:28.399198 containerd[1459]: 2025-05-08 05:45:28.395 [WARNING][5520] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" HandleID="k8s-pod-network.da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:28.399198 containerd[1459]: 2025-05-08 05:45:28.395 [INFO][5520] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" HandleID="k8s-pod-network.da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:28.399198 containerd[1459]: 2025-05-08 05:45:28.397 [INFO][5520] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:45:28.399198 containerd[1459]: 2025-05-08 05:45:28.398 [INFO][5505] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" May 8 05:45:28.399621 containerd[1459]: time="2025-05-08T05:45:28.399219167Z" level=info msg="TearDown network for sandbox \"da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2\" successfully" May 8 05:45:28.399621 containerd[1459]: time="2025-05-08T05:45:28.399245769Z" level=info msg="StopPodSandbox for \"da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2\" returns successfully" May 8 05:45:28.399915 containerd[1459]: time="2025-05-08T05:45:28.399890750Z" level=info msg="RemovePodSandbox for \"da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2\"" May 8 05:45:28.400000 containerd[1459]: time="2025-05-08T05:45:28.399983049Z" level=info msg="Forcibly stopping sandbox \"da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2\"" May 8 05:45:28.467761 containerd[1459]: 2025-05-08 05:45:28.435 [WARNING][5538] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:28.467761 containerd[1459]: 2025-05-08 05:45:28.435 [INFO][5538] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" May 8 05:45:28.467761 containerd[1459]: 2025-05-08 05:45:28.436 [INFO][5538] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" iface="eth0" netns="" May 8 05:45:28.467761 containerd[1459]: 2025-05-08 05:45:28.436 [INFO][5538] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" May 8 05:45:28.467761 containerd[1459]: 2025-05-08 05:45:28.436 [INFO][5538] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" May 8 05:45:28.467761 containerd[1459]: 2025-05-08 05:45:28.456 [INFO][5545] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" HandleID="k8s-pod-network.da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:28.467761 containerd[1459]: 2025-05-08 05:45:28.456 [INFO][5545] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:28.467761 containerd[1459]: 2025-05-08 05:45:28.456 [INFO][5545] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:45:28.467761 containerd[1459]: 2025-05-08 05:45:28.463 [WARNING][5545] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" HandleID="k8s-pod-network.da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:28.467761 containerd[1459]: 2025-05-08 05:45:28.463 [INFO][5545] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" HandleID="k8s-pod-network.da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--2swkl-eth0" May 8 05:45:28.467761 containerd[1459]: 2025-05-08 05:45:28.465 [INFO][5545] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:45:28.467761 containerd[1459]: 2025-05-08 05:45:28.466 [INFO][5538] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2" May 8 05:45:28.469816 containerd[1459]: time="2025-05-08T05:45:28.469042180Z" level=info msg="TearDown network for sandbox \"da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2\" successfully" May 8 05:45:28.473880 containerd[1459]: time="2025-05-08T05:45:28.473851580Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 05:45:28.474007 containerd[1459]: time="2025-05-08T05:45:28.473991030Z" level=info msg="RemovePodSandbox \"da10b02117cf413f5f069addc0447285a64baefc54df2f82a5cea751b622b1f2\" returns successfully" May 8 05:45:28.474579 containerd[1459]: time="2025-05-08T05:45:28.474554724Z" level=info msg="StopPodSandbox for \"463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593\"" May 8 05:45:28.550630 containerd[1459]: 2025-05-08 05:45:28.515 [WARNING][5563] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--r4ddh-eth0", GenerateName:"calico-apiserver-58bf46f646-", Namespace:"calico-apiserver", SelfLink:"", UID:"0c6acdca-0e5b-443d-8401-07d05363600e", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 44, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58bf46f646", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"cd0dabc74ea840ec81b161b9c048ae0d40efd1475aad1b5493a248397109e37c", Pod:"calico-apiserver-58bf46f646-r4ddh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali38de17fc75c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:28.550630 containerd[1459]: 2025-05-08 05:45:28.516 [INFO][5563] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" May 8 05:45:28.550630 containerd[1459]: 2025-05-08 05:45:28.516 [INFO][5563] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" iface="eth0" netns="" May 8 05:45:28.550630 containerd[1459]: 2025-05-08 05:45:28.516 [INFO][5563] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" May 8 05:45:28.550630 containerd[1459]: 2025-05-08 05:45:28.516 [INFO][5563] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" May 8 05:45:28.550630 containerd[1459]: 2025-05-08 05:45:28.539 [INFO][5570] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" HandleID="k8s-pod-network.463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--r4ddh-eth0" May 8 05:45:28.550630 containerd[1459]: 2025-05-08 05:45:28.540 [INFO][5570] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:28.550630 containerd[1459]: 2025-05-08 05:45:28.540 [INFO][5570] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:45:28.550630 containerd[1459]: 2025-05-08 05:45:28.546 [WARNING][5570] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" HandleID="k8s-pod-network.463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--r4ddh-eth0" May 8 05:45:28.550630 containerd[1459]: 2025-05-08 05:45:28.546 [INFO][5570] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" HandleID="k8s-pod-network.463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--r4ddh-eth0" May 8 05:45:28.550630 containerd[1459]: 2025-05-08 05:45:28.548 [INFO][5570] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:45:28.550630 containerd[1459]: 2025-05-08 05:45:28.549 [INFO][5563] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" May 8 05:45:28.551460 containerd[1459]: time="2025-05-08T05:45:28.551178791Z" level=info msg="TearDown network for sandbox \"463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593\" successfully" May 8 05:45:28.551460 containerd[1459]: time="2025-05-08T05:45:28.551222486Z" level=info msg="StopPodSandbox for \"463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593\" returns successfully" May 8 05:45:28.551985 containerd[1459]: time="2025-05-08T05:45:28.551956740Z" level=info msg="RemovePodSandbox for \"463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593\"" May 8 05:45:28.552045 containerd[1459]: time="2025-05-08T05:45:28.551992048Z" level=info msg="Forcibly stopping sandbox \"463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593\"" May 8 05:45:28.623183 containerd[1459]: 2025-05-08 05:45:28.590 [WARNING][5588] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--r4ddh-eth0", GenerateName:"calico-apiserver-58bf46f646-", Namespace:"calico-apiserver", SelfLink:"", UID:"0c6acdca-0e5b-443d-8401-07d05363600e", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 44, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58bf46f646", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"cd0dabc74ea840ec81b161b9c048ae0d40efd1475aad1b5493a248397109e37c", Pod:"calico-apiserver-58bf46f646-r4ddh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali38de17fc75c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:28.623183 containerd[1459]: 2025-05-08 05:45:28.590 [INFO][5588] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" May 8 05:45:28.623183 containerd[1459]: 2025-05-08 05:45:28.590 [INFO][5588] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" iface="eth0" netns="" May 8 05:45:28.623183 containerd[1459]: 2025-05-08 05:45:28.590 [INFO][5588] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" May 8 05:45:28.623183 containerd[1459]: 2025-05-08 05:45:28.590 [INFO][5588] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" May 8 05:45:28.623183 containerd[1459]: 2025-05-08 05:45:28.611 [INFO][5595] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" HandleID="k8s-pod-network.463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--r4ddh-eth0" May 8 05:45:28.623183 containerd[1459]: 2025-05-08 05:45:28.611 [INFO][5595] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:28.623183 containerd[1459]: 2025-05-08 05:45:28.611 [INFO][5595] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:45:28.623183 containerd[1459]: 2025-05-08 05:45:28.618 [WARNING][5595] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" HandleID="k8s-pod-network.463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--r4ddh-eth0" May 8 05:45:28.623183 containerd[1459]: 2025-05-08 05:45:28.618 [INFO][5595] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" HandleID="k8s-pod-network.463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--58bf46f646--r4ddh-eth0" May 8 05:45:28.623183 containerd[1459]: 2025-05-08 05:45:28.621 [INFO][5595] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:45:28.623183 containerd[1459]: 2025-05-08 05:45:28.622 [INFO][5588] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593" May 8 05:45:28.623698 containerd[1459]: time="2025-05-08T05:45:28.623230146Z" level=info msg="TearDown network for sandbox \"463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593\" successfully" May 8 05:45:28.628467 containerd[1459]: time="2025-05-08T05:45:28.628381620Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 05:45:28.628593 containerd[1459]: time="2025-05-08T05:45:28.628548012Z" level=info msg="RemovePodSandbox \"463aa026685f7bc0a82e95128cd6acfc82f8f9b9a5c111e428a4ffe42171f593\" returns successfully" May 8 05:45:28.629463 containerd[1459]: time="2025-05-08T05:45:28.629303017Z" level=info msg="StopPodSandbox for \"b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e\"" May 8 05:45:28.703889 containerd[1459]: 2025-05-08 05:45:28.669 [WARNING][5613] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--kp8kd-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"fa364b74-657d-49c1-9a18-1f21f741d4df", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 44, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"3274004ae0d5c1918b9a067612d927d5759bf9e87943869aa6ab78d2d5f87e50", Pod:"coredns-6f6b679f8f-kp8kd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliecfd0431b13", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:28.703889 containerd[1459]: 2025-05-08 05:45:28.669 [INFO][5613] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" May 8 05:45:28.703889 containerd[1459]: 2025-05-08 05:45:28.669 [INFO][5613] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" iface="eth0" netns="" May 8 05:45:28.703889 containerd[1459]: 2025-05-08 05:45:28.669 [INFO][5613] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" May 8 05:45:28.703889 containerd[1459]: 2025-05-08 05:45:28.669 [INFO][5613] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" May 8 05:45:28.703889 containerd[1459]: 2025-05-08 05:45:28.692 [INFO][5621] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" HandleID="k8s-pod-network.b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--kp8kd-eth0" May 8 05:45:28.703889 containerd[1459]: 2025-05-08 05:45:28.692 [INFO][5621] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:28.703889 containerd[1459]: 2025-05-08 05:45:28.692 [INFO][5621] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 05:45:28.703889 containerd[1459]: 2025-05-08 05:45:28.699 [WARNING][5621] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" HandleID="k8s-pod-network.b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--kp8kd-eth0" May 8 05:45:28.703889 containerd[1459]: 2025-05-08 05:45:28.699 [INFO][5621] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" HandleID="k8s-pod-network.b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--kp8kd-eth0" May 8 05:45:28.703889 containerd[1459]: 2025-05-08 05:45:28.701 [INFO][5621] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:45:28.703889 containerd[1459]: 2025-05-08 05:45:28.702 [INFO][5613] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" May 8 05:45:28.704868 containerd[1459]: time="2025-05-08T05:45:28.703908929Z" level=info msg="TearDown network for sandbox \"b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e\" successfully" May 8 05:45:28.704868 containerd[1459]: time="2025-05-08T05:45:28.703933637Z" level=info msg="StopPodSandbox for \"b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e\" returns successfully" May 8 05:45:28.704868 containerd[1459]: time="2025-05-08T05:45:28.704373780Z" level=info msg="RemovePodSandbox for \"b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e\"" May 8 05:45:28.704868 containerd[1459]: time="2025-05-08T05:45:28.704410162Z" level=info msg="Forcibly stopping sandbox \"b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e\"" May 8 05:45:28.772205 containerd[1459]: 2025-05-08 05:45:28.740 [WARNING][5639] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--kp8kd-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"fa364b74-657d-49c1-9a18-1f21f741d4df", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 44, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"3274004ae0d5c1918b9a067612d927d5759bf9e87943869aa6ab78d2d5f87e50", Pod:"coredns-6f6b679f8f-kp8kd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliecfd0431b13", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:28.772205 containerd[1459]: 2025-05-08 05:45:28.741 [INFO][5639] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" May 8 05:45:28.772205 containerd[1459]: 2025-05-08 05:45:28.741 [INFO][5639] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" iface="eth0" netns="" May 8 05:45:28.772205 containerd[1459]: 2025-05-08 05:45:28.741 [INFO][5639] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" May 8 05:45:28.772205 containerd[1459]: 2025-05-08 05:45:28.741 [INFO][5639] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" May 8 05:45:28.772205 containerd[1459]: 2025-05-08 05:45:28.761 [INFO][5646] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" HandleID="k8s-pod-network.b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--kp8kd-eth0" May 8 05:45:28.772205 containerd[1459]: 2025-05-08 05:45:28.761 [INFO][5646] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:28.772205 containerd[1459]: 2025-05-08 05:45:28.761 [INFO][5646] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 05:45:28.772205 containerd[1459]: 2025-05-08 05:45:28.768 [WARNING][5646] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" HandleID="k8s-pod-network.b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--kp8kd-eth0" May 8 05:45:28.772205 containerd[1459]: 2025-05-08 05:45:28.768 [INFO][5646] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" HandleID="k8s-pod-network.b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--kp8kd-eth0" May 8 05:45:28.772205 containerd[1459]: 2025-05-08 05:45:28.770 [INFO][5646] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:45:28.772205 containerd[1459]: 2025-05-08 05:45:28.771 [INFO][5639] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e" May 8 05:45:28.772205 containerd[1459]: time="2025-05-08T05:45:28.772172537Z" level=info msg="TearDown network for sandbox \"b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e\" successfully" May 8 05:45:28.777244 containerd[1459]: time="2025-05-08T05:45:28.777212423Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 05:45:28.777356 containerd[1459]: time="2025-05-08T05:45:28.777273642Z" level=info msg="RemovePodSandbox \"b62b7ba9ee26b91150a15057f05ad295a29cc35f988baf9950bfd2cacf90a94e\" returns successfully" May 8 05:45:28.778054 containerd[1459]: time="2025-05-08T05:45:28.777802609Z" level=info msg="StopPodSandbox for \"5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f\"" May 8 05:45:28.849779 containerd[1459]: 2025-05-08 05:45:28.816 [WARNING][5664] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0", GenerateName:"calico-apiserver-67fd4c9f8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2af1a327-7716-4aad-bb55-55682f8973c2", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 44, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67fd4c9f8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94", Pod:"calico-apiserver-67fd4c9f8d-ncnc7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e42d561552", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:28.849779 containerd[1459]: 2025-05-08 05:45:28.816 [INFO][5664] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" May 8 05:45:28.849779 containerd[1459]: 2025-05-08 05:45:28.816 [INFO][5664] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" iface="eth0" netns="" May 8 05:45:28.849779 containerd[1459]: 2025-05-08 05:45:28.816 [INFO][5664] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" May 8 05:45:28.849779 containerd[1459]: 2025-05-08 05:45:28.817 [INFO][5664] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" May 8 05:45:28.849779 containerd[1459]: 2025-05-08 05:45:28.838 [INFO][5671] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" HandleID="k8s-pod-network.5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0" May 8 05:45:28.849779 containerd[1459]: 2025-05-08 05:45:28.839 [INFO][5671] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:28.849779 containerd[1459]: 2025-05-08 05:45:28.839 [INFO][5671] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:45:28.849779 containerd[1459]: 2025-05-08 05:45:28.846 [WARNING][5671] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" HandleID="k8s-pod-network.5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0" May 8 05:45:28.849779 containerd[1459]: 2025-05-08 05:45:28.846 [INFO][5671] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" HandleID="k8s-pod-network.5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0" May 8 05:45:28.849779 containerd[1459]: 2025-05-08 05:45:28.847 [INFO][5671] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:45:28.849779 containerd[1459]: 2025-05-08 05:45:28.848 [INFO][5664] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" May 8 05:45:28.850464 containerd[1459]: time="2025-05-08T05:45:28.849814167Z" level=info msg="TearDown network for sandbox \"5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f\" successfully" May 8 05:45:28.850464 containerd[1459]: time="2025-05-08T05:45:28.849838034Z" level=info msg="StopPodSandbox for \"5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f\" returns successfully" May 8 05:45:28.850464 containerd[1459]: time="2025-05-08T05:45:28.850275923Z" level=info msg="RemovePodSandbox for \"5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f\"" May 8 05:45:28.850464 containerd[1459]: time="2025-05-08T05:45:28.850300461Z" level=info msg="Forcibly stopping sandbox \"5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f\"" May 8 05:45:28.924596 containerd[1459]: 2025-05-08 05:45:28.888 [WARNING][5689] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0", GenerateName:"calico-apiserver-67fd4c9f8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"2af1a327-7716-4aad-bb55-55682f8973c2", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 44, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67fd4c9f8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94", Pod:"calico-apiserver-67fd4c9f8d-ncnc7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e42d561552", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:28.924596 containerd[1459]: 2025-05-08 05:45:28.889 [INFO][5689] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" May 8 05:45:28.924596 containerd[1459]: 2025-05-08 05:45:28.889 [INFO][5689] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" iface="eth0" netns="" May 8 05:45:28.924596 containerd[1459]: 2025-05-08 05:45:28.889 [INFO][5689] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" May 8 05:45:28.924596 containerd[1459]: 2025-05-08 05:45:28.889 [INFO][5689] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" May 8 05:45:28.924596 containerd[1459]: 2025-05-08 05:45:28.910 [INFO][5697] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" HandleID="k8s-pod-network.5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0" May 8 05:45:28.924596 containerd[1459]: 2025-05-08 05:45:28.910 [INFO][5697] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:28.924596 containerd[1459]: 2025-05-08 05:45:28.910 [INFO][5697] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:45:28.924596 containerd[1459]: 2025-05-08 05:45:28.919 [WARNING][5697] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" HandleID="k8s-pod-network.5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0" May 8 05:45:28.924596 containerd[1459]: 2025-05-08 05:45:28.919 [INFO][5697] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" HandleID="k8s-pod-network.5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0" May 8 05:45:28.924596 containerd[1459]: 2025-05-08 05:45:28.922 [INFO][5697] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:45:28.924596 containerd[1459]: 2025-05-08 05:45:28.923 [INFO][5689] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f" May 8 05:45:28.925258 containerd[1459]: time="2025-05-08T05:45:28.924600489Z" level=info msg="TearDown network for sandbox \"5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f\" successfully" May 8 05:45:28.928069 containerd[1459]: time="2025-05-08T05:45:28.928040612Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 05:45:28.928208 containerd[1459]: time="2025-05-08T05:45:28.928102422Z" level=info msg="RemovePodSandbox \"5dafee88cc720518d6f33698ecaef052ff362d3e6cdabb472286e6f2e969ce4f\" returns successfully" May 8 05:45:28.928709 containerd[1459]: time="2025-05-08T05:45:28.928631148Z" level=info msg="StopPodSandbox for \"c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042\"" May 8 05:45:29.003091 containerd[1459]: 2025-05-08 05:45:28.966 [WARNING][5715] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-csi--node--driver--dxgvc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71d8c7d2-10e7-4c65-9044-49340af78942", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 44, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"9961621c6875786de0dc4cfc79c25dcdaff3abdb614d49cb10eb26aa8aa87121", Pod:"csi-node-driver-dxgvc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif143a07a0c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:29.003091 containerd[1459]: 2025-05-08 05:45:28.967 [INFO][5715] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" May 8 05:45:29.003091 containerd[1459]: 2025-05-08 05:45:28.967 [INFO][5715] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" iface="eth0" netns="" May 8 05:45:29.003091 containerd[1459]: 2025-05-08 05:45:28.967 [INFO][5715] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" May 8 05:45:29.003091 containerd[1459]: 2025-05-08 05:45:28.967 [INFO][5715] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" May 8 05:45:29.003091 containerd[1459]: 2025-05-08 05:45:28.986 [INFO][5722] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" HandleID="k8s-pod-network.c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-csi--node--driver--dxgvc-eth0" May 8 05:45:29.003091 containerd[1459]: 2025-05-08 05:45:28.986 [INFO][5722] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:29.003091 containerd[1459]: 2025-05-08 05:45:28.986 [INFO][5722] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:45:29.003091 containerd[1459]: 2025-05-08 05:45:28.998 [WARNING][5722] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" HandleID="k8s-pod-network.c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-csi--node--driver--dxgvc-eth0" May 8 05:45:29.003091 containerd[1459]: 2025-05-08 05:45:28.998 [INFO][5722] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" HandleID="k8s-pod-network.c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-csi--node--driver--dxgvc-eth0" May 8 05:45:29.003091 containerd[1459]: 2025-05-08 05:45:29.000 [INFO][5722] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:45:29.003091 containerd[1459]: 2025-05-08 05:45:29.001 [INFO][5715] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" May 8 05:45:29.003714 containerd[1459]: time="2025-05-08T05:45:29.003131460Z" level=info msg="TearDown network for sandbox \"c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042\" successfully" May 8 05:45:29.003714 containerd[1459]: time="2025-05-08T05:45:29.003156199Z" level=info msg="StopPodSandbox for \"c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042\" returns successfully" May 8 05:45:29.004277 containerd[1459]: time="2025-05-08T05:45:29.003971068Z" level=info msg="RemovePodSandbox for \"c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042\"" May 8 05:45:29.004277 containerd[1459]: time="2025-05-08T05:45:29.004008721Z" level=info msg="Forcibly stopping sandbox \"c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042\"" May 8 05:45:29.072605 containerd[1459]: 2025-05-08 05:45:29.041 [WARNING][5740] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-csi--node--driver--dxgvc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71d8c7d2-10e7-4c65-9044-49340af78942", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 44, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"9961621c6875786de0dc4cfc79c25dcdaff3abdb614d49cb10eb26aa8aa87121", Pod:"csi-node-driver-dxgvc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif143a07a0c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:29.072605 containerd[1459]: 2025-05-08 05:45:29.041 [INFO][5740] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" May 8 05:45:29.072605 containerd[1459]: 2025-05-08 05:45:29.041 [INFO][5740] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" iface="eth0" netns="" May 8 05:45:29.072605 containerd[1459]: 2025-05-08 05:45:29.041 [INFO][5740] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" May 8 05:45:29.072605 containerd[1459]: 2025-05-08 05:45:29.041 [INFO][5740] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" May 8 05:45:29.072605 containerd[1459]: 2025-05-08 05:45:29.060 [INFO][5747] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" HandleID="k8s-pod-network.c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-csi--node--driver--dxgvc-eth0" May 8 05:45:29.072605 containerd[1459]: 2025-05-08 05:45:29.061 [INFO][5747] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:29.072605 containerd[1459]: 2025-05-08 05:45:29.061 [INFO][5747] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:45:29.072605 containerd[1459]: 2025-05-08 05:45:29.068 [WARNING][5747] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" HandleID="k8s-pod-network.c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-csi--node--driver--dxgvc-eth0" May 8 05:45:29.072605 containerd[1459]: 2025-05-08 05:45:29.068 [INFO][5747] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" HandleID="k8s-pod-network.c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-csi--node--driver--dxgvc-eth0" May 8 05:45:29.072605 containerd[1459]: 2025-05-08 05:45:29.070 [INFO][5747] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:45:29.072605 containerd[1459]: 2025-05-08 05:45:29.071 [INFO][5740] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042" May 8 05:45:29.072605 containerd[1459]: time="2025-05-08T05:45:29.072586861Z" level=info msg="TearDown network for sandbox \"c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042\" successfully" May 8 05:45:29.076769 containerd[1459]: time="2025-05-08T05:45:29.076739341Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 05:45:29.076841 containerd[1459]: time="2025-05-08T05:45:29.076798214Z" level=info msg="RemovePodSandbox \"c7f78dfa8bb540cd459f05aed70b275df360b50a2c46a7d4b81de3388f9dd042\" returns successfully" May 8 05:45:29.077453 containerd[1459]: time="2025-05-08T05:45:29.077413938Z" level=info msg="StopPodSandbox for \"c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca\"" May 8 05:45:29.152305 containerd[1459]: 2025-05-08 05:45:29.115 [WARNING][5765] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--v9cpg-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"8e593293-d978-473a-ae19-5154bba363a6", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 44, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"613c3f770ebe26b7f490cbdebb2968eb2ebf3fa32fb57c641b150102d616b7af", Pod:"coredns-6f6b679f8f-v9cpg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b919ecd690", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:29.152305 containerd[1459]: 2025-05-08 05:45:29.116 [INFO][5765] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" May 8 05:45:29.152305 containerd[1459]: 2025-05-08 05:45:29.116 [INFO][5765] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" iface="eth0" netns="" May 8 05:45:29.152305 containerd[1459]: 2025-05-08 05:45:29.116 [INFO][5765] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" May 8 05:45:29.152305 containerd[1459]: 2025-05-08 05:45:29.116 [INFO][5765] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" May 8 05:45:29.152305 containerd[1459]: 2025-05-08 05:45:29.136 [INFO][5772] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" HandleID="k8s-pod-network.c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--v9cpg-eth0" May 8 05:45:29.152305 containerd[1459]: 2025-05-08 05:45:29.136 [INFO][5772] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:29.152305 containerd[1459]: 2025-05-08 05:45:29.136 [INFO][5772] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 05:45:29.152305 containerd[1459]: 2025-05-08 05:45:29.143 [WARNING][5772] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" HandleID="k8s-pod-network.c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--v9cpg-eth0" May 8 05:45:29.152305 containerd[1459]: 2025-05-08 05:45:29.143 [INFO][5772] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" HandleID="k8s-pod-network.c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--v9cpg-eth0" May 8 05:45:29.152305 containerd[1459]: 2025-05-08 05:45:29.148 [INFO][5772] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:45:29.152305 containerd[1459]: 2025-05-08 05:45:29.149 [INFO][5765] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" May 8 05:45:29.153373 containerd[1459]: time="2025-05-08T05:45:29.152697086Z" level=info msg="TearDown network for sandbox \"c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca\" successfully" May 8 05:45:29.153373 containerd[1459]: time="2025-05-08T05:45:29.152827478Z" level=info msg="StopPodSandbox for \"c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca\" returns successfully" May 8 05:45:29.154362 containerd[1459]: time="2025-05-08T05:45:29.153889467Z" level=info msg="RemovePodSandbox for \"c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca\"" May 8 05:45:29.154362 containerd[1459]: time="2025-05-08T05:45:29.153915147Z" level=info msg="Forcibly stopping sandbox \"c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca\"" May 8 05:45:29.260573 containerd[1459]: 2025-05-08 05:45:29.213 [WARNING][5790] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--v9cpg-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"8e593293-d978-473a-ae19-5154bba363a6", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 44, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"613c3f770ebe26b7f490cbdebb2968eb2ebf3fa32fb57c641b150102d616b7af", Pod:"coredns-6f6b679f8f-v9cpg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b919ecd690", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:29.260573 containerd[1459]: 2025-05-08 05:45:29.213 [INFO][5790] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" May 8 05:45:29.260573 containerd[1459]: 2025-05-08 05:45:29.213 [INFO][5790] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" iface="eth0" netns="" May 8 05:45:29.260573 containerd[1459]: 2025-05-08 05:45:29.213 [INFO][5790] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" May 8 05:45:29.260573 containerd[1459]: 2025-05-08 05:45:29.213 [INFO][5790] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" May 8 05:45:29.260573 containerd[1459]: 2025-05-08 05:45:29.247 [INFO][5797] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" HandleID="k8s-pod-network.c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--v9cpg-eth0" May 8 05:45:29.260573 containerd[1459]: 2025-05-08 05:45:29.247 [INFO][5797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:29.260573 containerd[1459]: 2025-05-08 05:45:29.247 [INFO][5797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 05:45:29.260573 containerd[1459]: 2025-05-08 05:45:29.256 [WARNING][5797] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" HandleID="k8s-pod-network.c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--v9cpg-eth0" May 8 05:45:29.260573 containerd[1459]: 2025-05-08 05:45:29.256 [INFO][5797] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" HandleID="k8s-pod-network.c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-coredns--6f6b679f8f--v9cpg-eth0" May 8 05:45:29.260573 containerd[1459]: 2025-05-08 05:45:29.258 [INFO][5797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:45:29.260573 containerd[1459]: 2025-05-08 05:45:29.259 [INFO][5790] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca" May 8 05:45:29.261005 containerd[1459]: time="2025-05-08T05:45:29.260581656Z" level=info msg="TearDown network for sandbox \"c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca\" successfully" May 8 05:45:29.264471 containerd[1459]: time="2025-05-08T05:45:29.264420277Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 05:45:29.265112 containerd[1459]: time="2025-05-08T05:45:29.264499090Z" level=info msg="RemovePodSandbox \"c85b6a15caf21d9f3c9cf4729776068b4602245f39432d228d7a95356dd666ca\" returns successfully" May 8 05:45:32.880597 kubelet[2596]: I0508 05:45:32.879948 2596 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 05:45:41.345366 containerd[1459]: time="2025-05-08T05:45:41.345302780Z" level=info msg="StopContainer for \"da205c99786250634797b160f814d00ed94c7435472d5645446807e6813b54ed\" with timeout 300 (s)" May 8 05:45:41.346310 containerd[1459]: time="2025-05-08T05:45:41.346038805Z" level=info msg="Stop container \"da205c99786250634797b160f814d00ed94c7435472d5645446807e6813b54ed\" with signal terminated" May 8 05:45:41.491633 containerd[1459]: time="2025-05-08T05:45:41.491481581Z" level=info msg="StopContainer for \"f4ef1f86002e7307039f3c3cb301949d00080299eb00ea7062ecf09b21da2bdb\" with timeout 30 (s)" May 8 05:45:41.492251 containerd[1459]: time="2025-05-08T05:45:41.492216224Z" level=info msg="Stop container \"f4ef1f86002e7307039f3c3cb301949d00080299eb00ea7062ecf09b21da2bdb\" with signal terminated" May 8 05:45:41.512336 systemd[1]: cri-containerd-f4ef1f86002e7307039f3c3cb301949d00080299eb00ea7062ecf09b21da2bdb.scope: Deactivated successfully. May 8 05:45:41.549106 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4ef1f86002e7307039f3c3cb301949d00080299eb00ea7062ecf09b21da2bdb-rootfs.mount: Deactivated successfully. 
May 8 05:45:41.567272 containerd[1459]: time="2025-05-08T05:45:41.567068824Z" level=info msg="shim disconnected" id=f4ef1f86002e7307039f3c3cb301949d00080299eb00ea7062ecf09b21da2bdb namespace=k8s.io May 8 05:45:41.567272 containerd[1459]: time="2025-05-08T05:45:41.567125804Z" level=warning msg="cleaning up after shim disconnected" id=f4ef1f86002e7307039f3c3cb301949d00080299eb00ea7062ecf09b21da2bdb namespace=k8s.io May 8 05:45:41.567272 containerd[1459]: time="2025-05-08T05:45:41.567137898Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 05:45:41.597935 containerd[1459]: time="2025-05-08T05:45:41.597319126Z" level=info msg="StopContainer for \"540da491de1ad98b866ad2deb7dc66c840d533e1f8c0948ed1651b12c197734a\" with timeout 5 (s)" May 8 05:45:41.598566 containerd[1459]: time="2025-05-08T05:45:41.598423890Z" level=info msg="Stop container \"540da491de1ad98b866ad2deb7dc66c840d533e1f8c0948ed1651b12c197734a\" with signal terminated" May 8 05:45:41.626637 containerd[1459]: time="2025-05-08T05:45:41.626516252Z" level=info msg="StopContainer for \"f4ef1f86002e7307039f3c3cb301949d00080299eb00ea7062ecf09b21da2bdb\" returns successfully" May 8 05:45:41.627024 containerd[1459]: time="2025-05-08T05:45:41.626965325Z" level=info msg="StopPodSandbox for \"8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718\"" May 8 05:45:41.627024 containerd[1459]: time="2025-05-08T05:45:41.627005412Z" level=info msg="Container to stop \"f4ef1f86002e7307039f3c3cb301949d00080299eb00ea7062ecf09b21da2bdb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 05:45:41.635913 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718-shm.mount: Deactivated successfully. May 8 05:45:41.638045 systemd[1]: cri-containerd-540da491de1ad98b866ad2deb7dc66c840d533e1f8c0948ed1651b12c197734a.scope: Deactivated successfully. May 8 05:45:41.638284 systemd[1]: cri-containerd-540da491de1ad98b866ad2deb7dc66c840d533e1f8c0948ed1651b12c197734a.scope: Consumed 2.245s CPU time. May 8 05:45:41.647823 systemd[1]: cri-containerd-8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718.scope: Deactivated successfully. May 8 05:45:41.682333 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718-rootfs.mount: Deactivated successfully. May 8 05:45:41.684257 containerd[1459]: time="2025-05-08T05:45:41.683953503Z" level=info msg="shim disconnected" id=8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718 namespace=k8s.io May 8 05:45:41.684257 containerd[1459]: time="2025-05-08T05:45:41.684124051Z" level=warning msg="cleaning up after shim disconnected" id=8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718 namespace=k8s.io May 8 05:45:41.684257 containerd[1459]: time="2025-05-08T05:45:41.684134901Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 05:45:41.694333 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-540da491de1ad98b866ad2deb7dc66c840d533e1f8c0948ed1651b12c197734a-rootfs.mount: Deactivated successfully. 
May 8 05:45:41.697298 containerd[1459]: time="2025-05-08T05:45:41.697249360Z" level=info msg="shim disconnected" id=540da491de1ad98b866ad2deb7dc66c840d533e1f8c0948ed1651b12c197734a namespace=k8s.io May 8 05:45:41.697603 containerd[1459]: time="2025-05-08T05:45:41.697575899Z" level=warning msg="cleaning up after shim disconnected" id=540da491de1ad98b866ad2deb7dc66c840d533e1f8c0948ed1651b12c197734a namespace=k8s.io May 8 05:45:41.697689 containerd[1459]: time="2025-05-08T05:45:41.697668777Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 05:45:41.745714 containerd[1459]: time="2025-05-08T05:45:41.745513813Z" level=info msg="StopContainer for \"540da491de1ad98b866ad2deb7dc66c840d533e1f8c0948ed1651b12c197734a\" returns successfully" May 8 05:45:41.747110 containerd[1459]: time="2025-05-08T05:45:41.747073692Z" level=info msg="StopPodSandbox for \"5f401385e50db3ba161d26f2050e1360942d1b4e5135418d02d17ac3adadaf5f\"" May 8 05:45:41.747292 containerd[1459]: time="2025-05-08T05:45:41.747261413Z" level=info msg="Container to stop \"f1bf94e2fe5cebef41bb0325994e6f36ceffb7ec5bae9d7118190733782b015c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 05:45:41.747379 containerd[1459]: time="2025-05-08T05:45:41.747362388Z" level=info msg="Container to stop \"540da491de1ad98b866ad2deb7dc66c840d533e1f8c0948ed1651b12c197734a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 05:45:41.747505 containerd[1459]: time="2025-05-08T05:45:41.747488811Z" level=info msg="Container to stop \"96a729365e810a9352bbc6ba6e7cdee9d4c9056ed80a79a2fc14a6cc8361aef7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 05:45:41.758872 systemd[1]: cri-containerd-5f401385e50db3ba161d26f2050e1360942d1b4e5135418d02d17ac3adadaf5f.scope: Deactivated successfully. 
May 8 05:45:41.783188 containerd[1459]: time="2025-05-08T05:45:41.783097165Z" level=info msg="shim disconnected" id=5f401385e50db3ba161d26f2050e1360942d1b4e5135418d02d17ac3adadaf5f namespace=k8s.io May 8 05:45:41.783188 containerd[1459]: time="2025-05-08T05:45:41.783173172Z" level=warning msg="cleaning up after shim disconnected" id=5f401385e50db3ba161d26f2050e1360942d1b4e5135418d02d17ac3adadaf5f namespace=k8s.io May 8 05:45:41.783188 containerd[1459]: time="2025-05-08T05:45:41.783184383Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 05:45:41.815692 containerd[1459]: time="2025-05-08T05:45:41.815558989Z" level=info msg="TearDown network for sandbox \"5f401385e50db3ba161d26f2050e1360942d1b4e5135418d02d17ac3adadaf5f\" successfully" May 8 05:45:41.815692 containerd[1459]: time="2025-05-08T05:45:41.815601541Z" level=info msg="StopPodSandbox for \"5f401385e50db3ba161d26f2050e1360942d1b4e5135418d02d17ac3adadaf5f\" returns successfully" May 8 05:45:41.838272 systemd-networkd[1373]: cali4ae2dbafac9: Link DOWN May 8 05:45:41.838280 systemd-networkd[1373]: cali4ae2dbafac9: Lost carrier May 8 05:45:41.890484 kubelet[2596]: E0508 05:45:41.889398 2596 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="db0c66d8-a349-4a7c-a2b1-4bc252479a68" containerName="flexvol-driver" May 8 05:45:41.892412 kubelet[2596]: E0508 05:45:41.891576 2596 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2eb94ac1-e498-4e48-a950-45810cc88780" containerName="calico-apiserver" May 8 05:45:41.893140 kubelet[2596]: E0508 05:45:41.892476 2596 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="db0c66d8-a349-4a7c-a2b1-4bc252479a68" containerName="install-cni" May 8 05:45:41.893140 kubelet[2596]: E0508 05:45:41.892488 2596 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="db0c66d8-a349-4a7c-a2b1-4bc252479a68" containerName="calico-node" May 8 05:45:41.893140 kubelet[2596]: I0508 05:45:41.892541 2596 memory_manager.go:354] "RemoveStaleState removing state" podUID="db0c66d8-a349-4a7c-a2b1-4bc252479a68" containerName="calico-node" May 8 05:45:41.893140 kubelet[2596]: I0508 05:45:41.892548 2596 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eb94ac1-e498-4e48-a950-45810cc88780" containerName="calico-apiserver" May 8 05:45:41.904067 systemd[1]: Created slice kubepods-besteffort-pod71acb877_817a_4b9f_a4dd_c63370515f5a.slice - libcontainer container kubepods-besteffort-pod71acb877_817a_4b9f_a4dd_c63370515f5a.slice. 
May 8 05:45:41.955689 kubelet[2596]: I0508 05:45:41.955639 2596 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-cni-log-dir\") pod \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\" (UID: \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\") " May 8 05:45:41.957141 kubelet[2596]: I0508 05:45:41.956067 2596 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-var-run-calico\") pod \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\" (UID: \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\") " May 8 05:45:41.957141 kubelet[2596]: I0508 05:45:41.956150 2596 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-var-lib-calico\") pod \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\" (UID: \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\") " May 8 05:45:41.957141 kubelet[2596]: I0508 05:45:41.955720 2596 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "db0c66d8-a349-4a7c-a2b1-4bc252479a68" (UID: "db0c66d8-a349-4a7c-a2b1-4bc252479a68"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 05:45:41.957141 kubelet[2596]: I0508 05:45:41.956183 2596 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-xtables-lock\") pod \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\" (UID: \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\") " May 8 05:45:41.957141 kubelet[2596]: I0508 05:45:41.956244 2596 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "db0c66d8-a349-4a7c-a2b1-4bc252479a68" (UID: "db0c66d8-a349-4a7c-a2b1-4bc252479a68"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 05:45:41.957141 kubelet[2596]: I0508 05:45:41.956262 2596 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/db0c66d8-a349-4a7c-a2b1-4bc252479a68-node-certs\") pod \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\" (UID: \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\") " May 8 05:45:41.957667 kubelet[2596]: I0508 05:45:41.956286 2596 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-lib-modules\") pod \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\" (UID: \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\") " May 8 05:45:41.957667 kubelet[2596]: I0508 05:45:41.956295 2596 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "db0c66d8-a349-4a7c-a2b1-4bc252479a68" (UID: "db0c66d8-a349-4a7c-a2b1-4bc252479a68"). InnerVolumeSpecName "var-lib-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 05:45:41.957667 kubelet[2596]: I0508 05:45:41.956305 2596 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-cni-bin-dir\") pod \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\" (UID: \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\") " May 8 05:45:41.957667 kubelet[2596]: I0508 05:45:41.956329 2596 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "db0c66d8-a349-4a7c-a2b1-4bc252479a68" (UID: "db0c66d8-a349-4a7c-a2b1-4bc252479a68"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 05:45:41.957667 kubelet[2596]: I0508 05:45:41.956353 2596 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "db0c66d8-a349-4a7c-a2b1-4bc252479a68" (UID: "db0c66d8-a349-4a7c-a2b1-4bc252479a68"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 05:45:41.957903 kubelet[2596]: I0508 05:45:41.956355 2596 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-cni-net-dir\") pod \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\" (UID: \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\") " May 8 05:45:41.957903 kubelet[2596]: I0508 05:45:41.956393 2596 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-flexvol-driver-host\") pod \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\" (UID: \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\") " May 8 05:45:41.957903 kubelet[2596]: I0508 05:45:41.956476 2596 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db0c66d8-a349-4a7c-a2b1-4bc252479a68-tigera-ca-bundle\") pod \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\" (UID: \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\") " May 8 05:45:41.957903 kubelet[2596]: I0508 05:45:41.956516 2596 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-policysync\") pod \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\" (UID: \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\") " May 8 05:45:41.957903 kubelet[2596]: I0508 05:45:41.956554 2596 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqnwz\" (UniqueName: \"kubernetes.io/projected/db0c66d8-a349-4a7c-a2b1-4bc252479a68-kube-api-access-jqnwz\") pod \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\" (UID: \"db0c66d8-a349-4a7c-a2b1-4bc252479a68\") " May 8 05:45:41.957903 kubelet[2596]: I0508 05:45:41.956648 2596 reconciler_common.go:288] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-cni-log-dir\") on node \"ci-4081-3-3-n-fbb7d486d2.novalocal\" DevicePath \"\"" May 8 05:45:41.958228 kubelet[2596]: I0508 05:45:41.956670 2596 reconciler_common.go:288] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-var-run-calico\") on 
node \"ci-4081-3-3-n-fbb7d486d2.novalocal\" DevicePath \"\"" May 8 05:45:41.958228 kubelet[2596]: I0508 05:45:41.956689 2596 reconciler_common.go:288] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-var-lib-calico\") on node \"ci-4081-3-3-n-fbb7d486d2.novalocal\" DevicePath \"\"" May 8 05:45:41.958228 kubelet[2596]: I0508 05:45:41.956714 2596 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-xtables-lock\") on node \"ci-4081-3-3-n-fbb7d486d2.novalocal\" DevicePath \"\"" May 8 05:45:41.958228 kubelet[2596]: I0508 05:45:41.956740 2596 reconciler_common.go:288] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-cni-bin-dir\") on node \"ci-4081-3-3-n-fbb7d486d2.novalocal\" DevicePath \"\"" May 8 05:45:41.961692 kubelet[2596]: I0508 05:45:41.961266 2596 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "db0c66d8-a349-4a7c-a2b1-4bc252479a68" (UID: "db0c66d8-a349-4a7c-a2b1-4bc252479a68"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 05:45:41.961692 kubelet[2596]: I0508 05:45:41.961540 2596 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "db0c66d8-a349-4a7c-a2b1-4bc252479a68" (UID: "db0c66d8-a349-4a7c-a2b1-4bc252479a68"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 05:45:41.961969 kubelet[2596]: I0508 05:45:41.961802 2596 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-policysync" (OuterVolumeSpecName: "policysync") pod "db0c66d8-a349-4a7c-a2b1-4bc252479a68" (UID: "db0c66d8-a349-4a7c-a2b1-4bc252479a68"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 05:45:41.962201 kubelet[2596]: I0508 05:45:41.962176 2596 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "db0c66d8-a349-4a7c-a2b1-4bc252479a68" (UID: "db0c66d8-a349-4a7c-a2b1-4bc252479a68"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 05:45:41.966315 kubelet[2596]: I0508 05:45:41.966224 2596 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db0c66d8-a349-4a7c-a2b1-4bc252479a68-kube-api-access-jqnwz" (OuterVolumeSpecName: "kube-api-access-jqnwz") pod "db0c66d8-a349-4a7c-a2b1-4bc252479a68" (UID: "db0c66d8-a349-4a7c-a2b1-4bc252479a68"). InnerVolumeSpecName "kube-api-access-jqnwz". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 05:45:41.967060 kubelet[2596]: I0508 05:45:41.966385 2596 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db0c66d8-a349-4a7c-a2b1-4bc252479a68-node-certs" (OuterVolumeSpecName: "node-certs") pod "db0c66d8-a349-4a7c-a2b1-4bc252479a68" (UID: "db0c66d8-a349-4a7c-a2b1-4bc252479a68"). InnerVolumeSpecName "node-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 05:45:41.973606 kubelet[2596]: I0508 05:45:41.973518 2596 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db0c66d8-a349-4a7c-a2b1-4bc252479a68-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "db0c66d8-a349-4a7c-a2b1-4bc252479a68" (UID: "db0c66d8-a349-4a7c-a2b1-4bc252479a68"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 05:45:41.977549 containerd[1459]: 2025-05-08 05:45:41.836 [INFO][5993] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" May 8 05:45:41.977549 containerd[1459]: 2025-05-08 05:45:41.837 [INFO][5993] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" iface="eth0" netns="/var/run/netns/cni-d397daf1-3c42-e335-fe70-82dc2d4e09ef" May 8 05:45:41.977549 containerd[1459]: 2025-05-08 05:45:41.837 [INFO][5993] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" iface="eth0" netns="/var/run/netns/cni-d397daf1-3c42-e335-fe70-82dc2d4e09ef" May 8 05:45:41.977549 containerd[1459]: 2025-05-08 05:45:41.847 [INFO][5993] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" after=10.448773ms iface="eth0" netns="/var/run/netns/cni-d397daf1-3c42-e335-fe70-82dc2d4e09ef" May 8 05:45:41.977549 containerd[1459]: 2025-05-08 05:45:41.847 [INFO][5993] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" May 8 05:45:41.977549 containerd[1459]: 2025-05-08 05:45:41.847 [INFO][5993] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" May 8 05:45:41.977549 containerd[1459]: 2025-05-08 05:45:41.888 [INFO][6025] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" HandleID="k8s-pod-network.8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0" May 8 05:45:41.977549 containerd[1459]: 2025-05-08 05:45:41.889 [INFO][6025] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:41.977549 containerd[1459]: 2025-05-08 05:45:41.889 [INFO][6025] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 05:45:41.977549 containerd[1459]: 2025-05-08 05:45:41.969 [INFO][6025] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" HandleID="k8s-pod-network.8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0" May 8 05:45:41.977549 containerd[1459]: 2025-05-08 05:45:41.969 [INFO][6025] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" HandleID="k8s-pod-network.8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0" May 8 05:45:41.977549 containerd[1459]: 2025-05-08 05:45:41.973 [INFO][6025] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:45:41.977549 containerd[1459]: 2025-05-08 05:45:41.975 [INFO][5993] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" May 8 05:45:41.979395 containerd[1459]: time="2025-05-08T05:45:41.978558406Z" level=info msg="TearDown network for sandbox \"8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718\" successfully" May 8 05:45:41.979395 containerd[1459]: time="2025-05-08T05:45:41.978590537Z" level=info msg="StopPodSandbox for \"8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718\" returns successfully" May 8 05:45:42.057577 kubelet[2596]: I0508 05:45:42.057475 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71acb877-817a-4b9f-a4dd-c63370515f5a-xtables-lock\") pod \"calico-node-69k2h\" (UID: \"71acb877-817a-4b9f-a4dd-c63370515f5a\") " pod="calico-system/calico-node-69k2h" May 8 05:45:42.057577 kubelet[2596]: I0508 05:45:42.057548 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/71acb877-817a-4b9f-a4dd-c63370515f5a-flexvol-driver-host\") pod \"calico-node-69k2h\" (UID: \"71acb877-817a-4b9f-a4dd-c63370515f5a\") " pod="calico-system/calico-node-69k2h" May 8 05:45:42.057577 kubelet[2596]: I0508 05:45:42.057579 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71acb877-817a-4b9f-a4dd-c63370515f5a-tigera-ca-bundle\") pod \"calico-node-69k2h\" (UID: \"71acb877-817a-4b9f-a4dd-c63370515f5a\") " pod="calico-system/calico-node-69k2h" May 8 05:45:42.058293 kubelet[2596]: I0508 05:45:42.057606 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/71acb877-817a-4b9f-a4dd-c63370515f5a-node-certs\") pod \"calico-node-69k2h\" (UID: \"71acb877-817a-4b9f-a4dd-c63370515f5a\") " pod="calico-system/calico-node-69k2h" May 8 05:45:42.058293 kubelet[2596]: I0508 05:45:42.057627 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/71acb877-817a-4b9f-a4dd-c63370515f5a-cni-log-dir\") pod \"calico-node-69k2h\" (UID: \"71acb877-817a-4b9f-a4dd-c63370515f5a\") " pod="calico-system/calico-node-69k2h" May 8 05:45:42.058293 kubelet[2596]: I0508 
05:45:42.057649 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spnxt\" (UniqueName: \"kubernetes.io/projected/71acb877-817a-4b9f-a4dd-c63370515f5a-kube-api-access-spnxt\") pod \"calico-node-69k2h\" (UID: \"71acb877-817a-4b9f-a4dd-c63370515f5a\") " pod="calico-system/calico-node-69k2h" May 8 05:45:42.058293 kubelet[2596]: I0508 05:45:42.057731 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/71acb877-817a-4b9f-a4dd-c63370515f5a-var-run-calico\") pod \"calico-node-69k2h\" (UID: \"71acb877-817a-4b9f-a4dd-c63370515f5a\") " pod="calico-system/calico-node-69k2h" May 8 05:45:42.058293 kubelet[2596]: I0508 05:45:42.057808 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71acb877-817a-4b9f-a4dd-c63370515f5a-lib-modules\") pod \"calico-node-69k2h\" (UID: \"71acb877-817a-4b9f-a4dd-c63370515f5a\") " pod="calico-system/calico-node-69k2h" May 8 05:45:42.058524 kubelet[2596]: I0508 05:45:42.057857 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/71acb877-817a-4b9f-a4dd-c63370515f5a-cni-net-dir\") pod \"calico-node-69k2h\" (UID: \"71acb877-817a-4b9f-a4dd-c63370515f5a\") " pod="calico-system/calico-node-69k2h" May 8 05:45:42.058524 kubelet[2596]: I0508 05:45:42.057905 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/71acb877-817a-4b9f-a4dd-c63370515f5a-policysync\") pod \"calico-node-69k2h\" (UID: \"71acb877-817a-4b9f-a4dd-c63370515f5a\") " pod="calico-system/calico-node-69k2h" May 8 05:45:42.058524 kubelet[2596]: I0508 05:45:42.057947 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/71acb877-817a-4b9f-a4dd-c63370515f5a-var-lib-calico\") pod \"calico-node-69k2h\" (UID: \"71acb877-817a-4b9f-a4dd-c63370515f5a\") " pod="calico-system/calico-node-69k2h" May 8 05:45:42.058524 kubelet[2596]: I0508 05:45:42.057974 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/71acb877-817a-4b9f-a4dd-c63370515f5a-cni-bin-dir\") pod \"calico-node-69k2h\" (UID: \"71acb877-817a-4b9f-a4dd-c63370515f5a\") " pod="calico-system/calico-node-69k2h" May 8 05:45:42.058524 kubelet[2596]: I0508 05:45:42.058018 2596 reconciler_common.go:288] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/db0c66d8-a349-4a7c-a2b1-4bc252479a68-node-certs\") on node \"ci-4081-3-3-n-fbb7d486d2.novalocal\" DevicePath \"\"" May 8 05:45:42.058524 kubelet[2596]: I0508 05:45:42.058035 2596 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-lib-modules\") on node \"ci-4081-3-3-n-fbb7d486d2.novalocal\" DevicePath \"\"" May 8 05:45:42.058728 kubelet[2596]: I0508 05:45:42.058052 2596 reconciler_common.go:288] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-cni-net-dir\") on node \"ci-4081-3-3-n-fbb7d486d2.novalocal\" DevicePath \"\"" May 8 05:45:42.058728 kubelet[2596]: I0508 
05:45:42.058066 2596 reconciler_common.go:288] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-flexvol-driver-host\") on node \"ci-4081-3-3-n-fbb7d486d2.novalocal\" DevicePath \"\"" May 8 05:45:42.058728 kubelet[2596]: I0508 05:45:42.058080 2596 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db0c66d8-a349-4a7c-a2b1-4bc252479a68-tigera-ca-bundle\") on node \"ci-4081-3-3-n-fbb7d486d2.novalocal\" DevicePath \"\"" May 8 05:45:42.058728 kubelet[2596]: I0508 05:45:42.058095 2596 reconciler_common.go:288] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/db0c66d8-a349-4a7c-a2b1-4bc252479a68-policysync\") on node \"ci-4081-3-3-n-fbb7d486d2.novalocal\" DevicePath \"\"" May 8 05:45:42.058728 kubelet[2596]: I0508 05:45:42.058112 2596 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jqnwz\" (UniqueName: \"kubernetes.io/projected/db0c66d8-a349-4a7c-a2b1-4bc252479a68-kube-api-access-jqnwz\") on node \"ci-4081-3-3-n-fbb7d486d2.novalocal\" DevicePath \"\"" May 8 05:45:42.159272 kubelet[2596]: I0508 05:45:42.159053 2596 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3c40b71-a013-43d8-b8d8-e3eec48008e2-tigera-ca-bundle\") pod \"d3c40b71-a013-43d8-b8d8-e3eec48008e2\" (UID: \"d3c40b71-a013-43d8-b8d8-e3eec48008e2\") " May 8 05:45:42.159272 kubelet[2596]: I0508 05:45:42.159139 2596 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnjhk\" (UniqueName: \"kubernetes.io/projected/d3c40b71-a013-43d8-b8d8-e3eec48008e2-kube-api-access-pnjhk\") pod \"d3c40b71-a013-43d8-b8d8-e3eec48008e2\" (UID: \"d3c40b71-a013-43d8-b8d8-e3eec48008e2\") " May 8 05:45:42.175579 kubelet[2596]: I0508 05:45:42.174275 2596 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3c40b71-a013-43d8-b8d8-e3eec48008e2-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "d3c40b71-a013-43d8-b8d8-e3eec48008e2" (UID: "d3c40b71-a013-43d8-b8d8-e3eec48008e2"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 05:45:42.175579 kubelet[2596]: I0508 05:45:42.175253 2596 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3c40b71-a013-43d8-b8d8-e3eec48008e2-kube-api-access-pnjhk" (OuterVolumeSpecName: "kube-api-access-pnjhk") pod "d3c40b71-a013-43d8-b8d8-e3eec48008e2" (UID: "d3c40b71-a013-43d8-b8d8-e3eec48008e2"). InnerVolumeSpecName "kube-api-access-pnjhk". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 05:45:42.208136 containerd[1459]: time="2025-05-08T05:45:42.208038861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-69k2h,Uid:71acb877-817a-4b9f-a4dd-c63370515f5a,Namespace:calico-system,Attempt:0,}" May 8 05:45:42.252657 containerd[1459]: time="2025-05-08T05:45:42.252063795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 05:45:42.252657 containerd[1459]: time="2025-05-08T05:45:42.252229243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 05:45:42.252657 containerd[1459]: time="2025-05-08T05:45:42.252309516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:45:42.254481 containerd[1459]: time="2025-05-08T05:45:42.254252060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:45:42.259811 kubelet[2596]: I0508 05:45:42.259741 2596 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3c40b71-a013-43d8-b8d8-e3eec48008e2-tigera-ca-bundle\") on node \"ci-4081-3-3-n-fbb7d486d2.novalocal\" DevicePath \"\"" May 8 05:45:42.259811 kubelet[2596]: I0508 05:45:42.259779 2596 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pnjhk\" (UniqueName: \"kubernetes.io/projected/d3c40b71-a013-43d8-b8d8-e3eec48008e2-kube-api-access-pnjhk\") on node \"ci-4081-3-3-n-fbb7d486d2.novalocal\" DevicePath \"\"" May 8 05:45:42.286780 systemd[1]: Started cri-containerd-4ee1c59ee705c2b68ad3c81c3e3948e8ed438e855f6dc82295f86f9a8388cfe2.scope - libcontainer container 4ee1c59ee705c2b68ad3c81c3e3948e8ed438e855f6dc82295f86f9a8388cfe2. May 8 05:45:42.324083 containerd[1459]: time="2025-05-08T05:45:42.323680078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-69k2h,Uid:71acb877-817a-4b9f-a4dd-c63370515f5a,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ee1c59ee705c2b68ad3c81c3e3948e8ed438e855f6dc82295f86f9a8388cfe2\"" May 8 05:45:42.334291 containerd[1459]: time="2025-05-08T05:45:42.333815297Z" level=info msg="CreateContainer within sandbox \"4ee1c59ee705c2b68ad3c81c3e3948e8ed438e855f6dc82295f86f9a8388cfe2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 8 05:45:42.353241 containerd[1459]: time="2025-05-08T05:45:42.353187640Z" level=info msg="CreateContainer within sandbox \"4ee1c59ee705c2b68ad3c81c3e3948e8ed438e855f6dc82295f86f9a8388cfe2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"19b69a1f43fcb3961c30a17fada5f8596fcd18ae78c793a84c16a593104323ce\"" May 8 05:45:42.355035 containerd[1459]: time="2025-05-08T05:45:42.353733349Z" level=info msg="StartContainer for \"19b69a1f43fcb3961c30a17fada5f8596fcd18ae78c793a84c16a593104323ce\"" May 8 05:45:42.380614 systemd[1]: Started cri-containerd-19b69a1f43fcb3961c30a17fada5f8596fcd18ae78c793a84c16a593104323ce.scope - libcontainer container 19b69a1f43fcb3961c30a17fada5f8596fcd18ae78c793a84c16a593104323ce. May 8 05:45:42.416887 containerd[1459]: time="2025-05-08T05:45:42.416610806Z" level=info msg="StartContainer for \"19b69a1f43fcb3961c30a17fada5f8596fcd18ae78c793a84c16a593104323ce\" returns successfully" May 8 05:45:42.435406 systemd[1]: cri-containerd-19b69a1f43fcb3961c30a17fada5f8596fcd18ae78c793a84c16a593104323ce.scope: Deactivated successfully. May 8 05:45:42.447845 systemd[1]: var-lib-kubelet-pods-d3c40b71\x2da013\x2d43d8\x2db8d8\x2de3eec48008e2-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. May 8 05:45:42.448036 systemd[1]: run-netns-cni\x2dd397daf1\x2d3c42\x2de335\x2dfe70\x2d82dc2d4e09ef.mount: Deactivated successfully. May 8 05:45:42.448183 systemd[1]: var-lib-kubelet-pods-db0c66d8\x2da349\x2d4a7c\x2da2b1\x2d4bc252479a68-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. 
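The ipam/ipam_plugin.go teardown entries at the top of this stretch show every Calico address release bracketed by a host-wide IPAM lock ("About to acquire" / "Acquired" / "Released host-wide IPAM lock"). A minimal Go sketch of that serialization, where blockStore and releaseHandle are hypothetical stand-ins rather than Calico's real datastore types:

package main

import (
	"fmt"
	"sync"
)

// blockStore is a hypothetical stand-in for Calico's IPAM state: it maps an
// allocation handle ID to the addresses recorded under it.
type blockStore struct {
	mu      sync.Mutex // models the host-wide IPAM lock seen in the log
	handles map[string][]string
}

// releaseHandle mirrors the logged sequence: take the host-wide lock,
// release every address filed under the handle, then drop the lock.
func (s *blockStore) releaseHandle(handleID string) []string {
	s.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer s.mu.Unlock() // "Released host-wide IPAM lock."
	addrs := s.handles[handleID]
	delete(s.handles, handleID)
	return addrs
}

func main() {
	s := &blockStore{handles: map[string][]string{
		"k8s-pod-network.8df678f5": {"192.168.47.130"},
	}}
	fmt.Println("released:", s.releaseHandle("k8s-pod-network.8df678f5"))
}

Serializing assignments and releases behind one lock is what lets the later "Auto-assign 1 ipv4" walk run safely while these teardowns are in flight.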
May 8 05:45:42.448297 systemd[1]: var-lib-kubelet-pods-d3c40b71\x2da013\x2d43d8\x2db8d8\x2de3eec48008e2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpnjhk.mount: Deactivated successfully. May 8 05:45:42.448413 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f401385e50db3ba161d26f2050e1360942d1b4e5135418d02d17ac3adadaf5f-rootfs.mount: Deactivated successfully. May 8 05:45:42.448562 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f401385e50db3ba161d26f2050e1360942d1b4e5135418d02d17ac3adadaf5f-shm.mount: Deactivated successfully. May 8 05:45:42.448668 systemd[1]: var-lib-kubelet-pods-db0c66d8\x2da349\x2d4a7c\x2da2b1\x2d4bc252479a68-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djqnwz.mount: Deactivated successfully. May 8 05:45:42.448744 systemd[1]: var-lib-kubelet-pods-db0c66d8\x2da349\x2d4a7c\x2da2b1\x2d4bc252479a68-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. May 8 05:45:42.468361 kubelet[2596]: I0508 05:45:42.468335 2596 scope.go:117] "RemoveContainer" containerID="540da491de1ad98b866ad2deb7dc66c840d533e1f8c0948ed1651b12c197734a" May 8 05:45:42.475952 containerd[1459]: time="2025-05-08T05:45:42.475877004Z" level=info msg="RemoveContainer for \"540da491de1ad98b866ad2deb7dc66c840d533e1f8c0948ed1651b12c197734a\"" May 8 05:45:42.491023 systemd[1]: Removed slice kubepods-besteffort-podd3c40b71_a013_43d8_b8d8_e3eec48008e2.slice - libcontainer container kubepods-besteffort-podd3c40b71_a013_43d8_b8d8_e3eec48008e2.slice. May 8 05:45:42.494332 systemd[1]: Removed slice kubepods-besteffort-poddb0c66d8_a349_4a7c_a2b1_4bc252479a68.slice - libcontainer container kubepods-besteffort-poddb0c66d8_a349_4a7c_a2b1_4bc252479a68.slice. May 8 05:45:42.494504 systemd[1]: kubepods-besteffort-poddb0c66d8_a349_4a7c_a2b1_4bc252479a68.slice: Consumed 2.950s CPU time. 
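The heavily escaped mount-unit names above come from systemd's path escaping: '/' becomes '-', while bytes outside [A-Za-z0-9:_.] inside a component (including '-' and '~') are hex-escaped, which is why the pod volume paths surface as runs of \x2d and \x7e. A sketch of that rule, ignoring edge cases such as leading dots:

package main

import (
	"fmt"
	"strings"
)

// escapePath is a sketch of systemd's path escaping for unit names:
// strip outer slashes, map '/' to '-', and hex-escape everything that is
// not [A-Za-z0-9:_.].
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // '-' -> \x2d, '~' -> \x7e
		}
	}
	return b.String() + ".mount"
}

func main() {
	// Reproduces the kube-api-access-pnjhk unit name logged above,
	// character for character.
	fmt.Println(escapePath("/var/lib/kubelet/pods/d3c40b71-a013-43d8-b8d8-e3eec48008e2/volumes/kubernetes.io~projected/kube-api-access-pnjhk"))
}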
May 8 05:45:42.496474 containerd[1459]: time="2025-05-08T05:45:42.495808952Z" level=info msg="RemoveContainer for \"540da491de1ad98b866ad2deb7dc66c840d533e1f8c0948ed1651b12c197734a\" returns successfully" May 8 05:45:42.496822 kubelet[2596]: I0508 05:45:42.496782 2596 scope.go:117] "RemoveContainer" containerID="96a729365e810a9352bbc6ba6e7cdee9d4c9056ed80a79a2fc14a6cc8361aef7" May 8 05:45:42.501132 containerd[1459]: time="2025-05-08T05:45:42.500463818Z" level=info msg="RemoveContainer for \"96a729365e810a9352bbc6ba6e7cdee9d4c9056ed80a79a2fc14a6cc8361aef7\"" May 8 05:45:42.508495 containerd[1459]: time="2025-05-08T05:45:42.507829194Z" level=info msg="RemoveContainer for \"96a729365e810a9352bbc6ba6e7cdee9d4c9056ed80a79a2fc14a6cc8361aef7\" returns successfully" May 8 05:45:42.508645 kubelet[2596]: I0508 05:45:42.508414 2596 scope.go:117] "RemoveContainer" containerID="f1bf94e2fe5cebef41bb0325994e6f36ceffb7ec5bae9d7118190733782b015c" May 8 05:45:42.512458 containerd[1459]: time="2025-05-08T05:45:42.511574552Z" level=info msg="RemoveContainer for \"f1bf94e2fe5cebef41bb0325994e6f36ceffb7ec5bae9d7118190733782b015c\"" May 8 05:45:42.516898 containerd[1459]: time="2025-05-08T05:45:42.516624207Z" level=info msg="RemoveContainer for \"f1bf94e2fe5cebef41bb0325994e6f36ceffb7ec5bae9d7118190733782b015c\" returns successfully" May 8 05:45:42.520681 kubelet[2596]: I0508 05:45:42.520426 2596 scope.go:117] "RemoveContainer" containerID="540da491de1ad98b866ad2deb7dc66c840d533e1f8c0948ed1651b12c197734a" May 8 05:45:42.521433 containerd[1459]: time="2025-05-08T05:45:42.521371682Z" level=error msg="ContainerStatus for \"540da491de1ad98b866ad2deb7dc66c840d533e1f8c0948ed1651b12c197734a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"540da491de1ad98b866ad2deb7dc66c840d533e1f8c0948ed1651b12c197734a\": not found" May 8 05:45:42.523046 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19b69a1f43fcb3961c30a17fada5f8596fcd18ae78c793a84c16a593104323ce-rootfs.mount: Deactivated successfully. 
May 8 05:45:42.532101 kubelet[2596]: E0508 05:45:42.521933 2596 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"540da491de1ad98b866ad2deb7dc66c840d533e1f8c0948ed1651b12c197734a\": not found" containerID="540da491de1ad98b866ad2deb7dc66c840d533e1f8c0948ed1651b12c197734a" May 8 05:45:42.532101 kubelet[2596]: I0508 05:45:42.524896 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"540da491de1ad98b866ad2deb7dc66c840d533e1f8c0948ed1651b12c197734a"} err="failed to get container status \"540da491de1ad98b866ad2deb7dc66c840d533e1f8c0948ed1651b12c197734a\": rpc error: code = NotFound desc = an error occurred when try to find container \"540da491de1ad98b866ad2deb7dc66c840d533e1f8c0948ed1651b12c197734a\": not found" May 8 05:45:42.532101 kubelet[2596]: I0508 05:45:42.524955 2596 scope.go:117] "RemoveContainer" containerID="96a729365e810a9352bbc6ba6e7cdee9d4c9056ed80a79a2fc14a6cc8361aef7" May 8 05:45:42.532101 kubelet[2596]: E0508 05:45:42.526538 2596 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"96a729365e810a9352bbc6ba6e7cdee9d4c9056ed80a79a2fc14a6cc8361aef7\": not found" containerID="96a729365e810a9352bbc6ba6e7cdee9d4c9056ed80a79a2fc14a6cc8361aef7" May 8 05:45:42.532101 kubelet[2596]: I0508 05:45:42.526563 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"96a729365e810a9352bbc6ba6e7cdee9d4c9056ed80a79a2fc14a6cc8361aef7"} err="failed to get container status \"96a729365e810a9352bbc6ba6e7cdee9d4c9056ed80a79a2fc14a6cc8361aef7\": rpc error: code = NotFound desc = an error occurred when try to find container \"96a729365e810a9352bbc6ba6e7cdee9d4c9056ed80a79a2fc14a6cc8361aef7\": not found" May 8 05:45:42.532101 kubelet[2596]: I0508 05:45:42.526582 2596 scope.go:117] "RemoveContainer" containerID="f1bf94e2fe5cebef41bb0325994e6f36ceffb7ec5bae9d7118190733782b015c" May 8 05:45:42.532430 containerd[1459]: time="2025-05-08T05:45:42.525350981Z" level=error msg="ContainerStatus for \"96a729365e810a9352bbc6ba6e7cdee9d4c9056ed80a79a2fc14a6cc8361aef7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"96a729365e810a9352bbc6ba6e7cdee9d4c9056ed80a79a2fc14a6cc8361aef7\": not found" May 8 05:45:42.532430 containerd[1459]: time="2025-05-08T05:45:42.526868607Z" level=error msg="ContainerStatus for \"f1bf94e2fe5cebef41bb0325994e6f36ceffb7ec5bae9d7118190733782b015c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1bf94e2fe5cebef41bb0325994e6f36ceffb7ec5bae9d7118190733782b015c\": not found" May 8 05:45:42.533457 kubelet[2596]: E0508 05:45:42.527845 2596 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f1bf94e2fe5cebef41bb0325994e6f36ceffb7ec5bae9d7118190733782b015c\": not found" containerID="f1bf94e2fe5cebef41bb0325994e6f36ceffb7ec5bae9d7118190733782b015c" May 8 05:45:42.533457 kubelet[2596]: I0508 05:45:42.527984 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f1bf94e2fe5cebef41bb0325994e6f36ceffb7ec5bae9d7118190733782b015c"} err="failed to get container status \"f1bf94e2fe5cebef41bb0325994e6f36ceffb7ec5bae9d7118190733782b015c\": rpc error: code = NotFound desc = an error occurred when 
try to find container \"f1bf94e2fe5cebef41bb0325994e6f36ceffb7ec5bae9d7118190733782b015c\": not found" May 8 05:45:42.533457 kubelet[2596]: I0508 05:45:42.528041 2596 scope.go:117] "RemoveContainer" containerID="f4ef1f86002e7307039f3c3cb301949d00080299eb00ea7062ecf09b21da2bdb" May 8 05:45:42.537243 containerd[1459]: time="2025-05-08T05:45:42.537200284Z" level=info msg="RemoveContainer for \"f4ef1f86002e7307039f3c3cb301949d00080299eb00ea7062ecf09b21da2bdb\"" May 8 05:45:42.541914 containerd[1459]: time="2025-05-08T05:45:42.541356543Z" level=info msg="shim disconnected" id=19b69a1f43fcb3961c30a17fada5f8596fcd18ae78c793a84c16a593104323ce namespace=k8s.io May 8 05:45:42.541914 containerd[1459]: time="2025-05-08T05:45:42.541407851Z" level=warning msg="cleaning up after shim disconnected" id=19b69a1f43fcb3961c30a17fada5f8596fcd18ae78c793a84c16a593104323ce namespace=k8s.io May 8 05:45:42.541914 containerd[1459]: time="2025-05-08T05:45:42.541417910Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 05:45:42.547116 containerd[1459]: time="2025-05-08T05:45:42.545513321Z" level=info msg="RemoveContainer for \"f4ef1f86002e7307039f3c3cb301949d00080299eb00ea7062ecf09b21da2bdb\" returns successfully" May 8 05:45:42.553708 kubelet[2596]: I0508 05:45:42.553679 2596 scope.go:117] "RemoveContainer" containerID="f4ef1f86002e7307039f3c3cb301949d00080299eb00ea7062ecf09b21da2bdb" May 8 05:45:42.554254 containerd[1459]: time="2025-05-08T05:45:42.554158867Z" level=error msg="ContainerStatus for \"f4ef1f86002e7307039f3c3cb301949d00080299eb00ea7062ecf09b21da2bdb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f4ef1f86002e7307039f3c3cb301949d00080299eb00ea7062ecf09b21da2bdb\": not found" May 8 05:45:42.554772 kubelet[2596]: E0508 05:45:42.554629 2596 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f4ef1f86002e7307039f3c3cb301949d00080299eb00ea7062ecf09b21da2bdb\": not found" containerID="f4ef1f86002e7307039f3c3cb301949d00080299eb00ea7062ecf09b21da2bdb" May 8 05:45:42.556219 kubelet[2596]: I0508 05:45:42.556184 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f4ef1f86002e7307039f3c3cb301949d00080299eb00ea7062ecf09b21da2bdb"} err="failed to get container status \"f4ef1f86002e7307039f3c3cb301949d00080299eb00ea7062ecf09b21da2bdb\": rpc error: code = NotFound desc = an error occurred when try to find container \"f4ef1f86002e7307039f3c3cb301949d00080299eb00ea7062ecf09b21da2bdb\": not found" May 8 05:45:42.572623 containerd[1459]: time="2025-05-08T05:45:42.572560796Z" level=warning msg="cleanup warnings time=\"2025-05-08T05:45:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 8 05:45:42.594386 systemd[1]: cri-containerd-da205c99786250634797b160f814d00ed94c7435472d5645446807e6813b54ed.scope: Deactivated successfully. 
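Each successful RemoveContainer above is followed by a ContainerStatus probe that fails with rpc code NotFound. That is expected once the container is gone, which is why the kubelet logs the error and continues rather than retrying. A sketch of that tolerant pattern, assuming the google.golang.org/grpc module is available; statusOf and removeIfPresent are hypothetical helpers, not the kubelet's own functions:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// statusOf stands in for a CRI ContainerStatus call; here it always fails
// the way the runtime does once the container has been removed.
func statusOf(id string) error {
	return status.Errorf(codes.NotFound,
		"an error occurred when try to find container %q: not found", id)
}

// removeIfPresent shows the tolerant pattern: NotFound during a delete or
// status probe means the work is already done, so it is logged but not
// treated as a failure.
func removeIfPresent(id string) error {
	err := statusOf(id)
	if status.Code(err) == codes.NotFound {
		fmt.Printf("container %s already gone, nothing to do\n", id)
		return nil
	}
	return err
}

func main() {
	if err := removeIfPresent("540da491de1a"); err != nil {
		fmt.Println("unexpected:", err)
	}
}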
May 8 05:45:42.637312 containerd[1459]: time="2025-05-08T05:45:42.637259694Z" level=info msg="shim disconnected" id=da205c99786250634797b160f814d00ed94c7435472d5645446807e6813b54ed namespace=k8s.io May 8 05:45:42.637553 containerd[1459]: time="2025-05-08T05:45:42.637533600Z" level=warning msg="cleaning up after shim disconnected" id=da205c99786250634797b160f814d00ed94c7435472d5645446807e6813b54ed namespace=k8s.io May 8 05:45:42.637703 containerd[1459]: time="2025-05-08T05:45:42.637685653Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 05:45:42.638286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da205c99786250634797b160f814d00ed94c7435472d5645446807e6813b54ed-rootfs.mount: Deactivated successfully. May 8 05:45:42.677526 containerd[1459]: time="2025-05-08T05:45:42.676416221Z" level=info msg="StopContainer for \"da205c99786250634797b160f814d00ed94c7435472d5645446807e6813b54ed\" returns successfully" May 8 05:45:42.680485 containerd[1459]: time="2025-05-08T05:45:42.680447599Z" level=info msg="StopPodSandbox for \"d51d879fae8ad3f57108abc95ec7fa21e3524736deb17c975157d40594cd8848\"" May 8 05:45:42.680603 containerd[1459]: time="2025-05-08T05:45:42.680494960Z" level=info msg="Container to stop \"da205c99786250634797b160f814d00ed94c7435472d5645446807e6813b54ed\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 05:45:42.686815 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d51d879fae8ad3f57108abc95ec7fa21e3524736deb17c975157d40594cd8848-shm.mount: Deactivated successfully. May 8 05:45:42.722846 systemd[1]: cri-containerd-d51d879fae8ad3f57108abc95ec7fa21e3524736deb17c975157d40594cd8848.scope: Deactivated successfully. May 8 05:45:42.755082 containerd[1459]: time="2025-05-08T05:45:42.755010787Z" level=info msg="shim disconnected" id=d51d879fae8ad3f57108abc95ec7fa21e3524736deb17c975157d40594cd8848 namespace=k8s.io May 8 05:45:42.755082 containerd[1459]: time="2025-05-08T05:45:42.755062435Z" level=warning msg="cleaning up after shim disconnected" id=d51d879fae8ad3f57108abc95ec7fa21e3524736deb17c975157d40594cd8848 namespace=k8s.io May 8 05:45:42.755082 containerd[1459]: time="2025-05-08T05:45:42.755072094Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 05:45:42.756505 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d51d879fae8ad3f57108abc95ec7fa21e3524736deb17c975157d40594cd8848-rootfs.mount: Deactivated successfully. 
May 8 05:45:42.776395 containerd[1459]: time="2025-05-08T05:45:42.776149665Z" level=warning msg="cleanup warnings time=\"2025-05-08T05:45:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 8 05:45:42.786661 containerd[1459]: time="2025-05-08T05:45:42.785544280Z" level=info msg="TearDown network for sandbox \"d51d879fae8ad3f57108abc95ec7fa21e3524736deb17c975157d40594cd8848\" successfully" May 8 05:45:42.786661 containerd[1459]: time="2025-05-08T05:45:42.785590890Z" level=info msg="StopPodSandbox for \"d51d879fae8ad3f57108abc95ec7fa21e3524736deb17c975157d40594cd8848\" returns successfully" May 8 05:45:42.963284 kubelet[2596]: I0508 05:45:42.963083 2596 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/135df6c8-ac95-4184-ab1d-8b185f471b6b-typha-certs\") pod \"135df6c8-ac95-4184-ab1d-8b185f471b6b\" (UID: \"135df6c8-ac95-4184-ab1d-8b185f471b6b\") " May 8 05:45:42.963284 kubelet[2596]: I0508 05:45:42.963194 2596 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvjlj\" (UniqueName: \"kubernetes.io/projected/135df6c8-ac95-4184-ab1d-8b185f471b6b-kube-api-access-fvjlj\") pod \"135df6c8-ac95-4184-ab1d-8b185f471b6b\" (UID: \"135df6c8-ac95-4184-ab1d-8b185f471b6b\") " May 8 05:45:42.963284 kubelet[2596]: I0508 05:45:42.963258 2596 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/135df6c8-ac95-4184-ab1d-8b185f471b6b-tigera-ca-bundle\") pod \"135df6c8-ac95-4184-ab1d-8b185f471b6b\" (UID: \"135df6c8-ac95-4184-ab1d-8b185f471b6b\") " May 8 05:45:42.970533 kubelet[2596]: I0508 05:45:42.970273 2596 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/135df6c8-ac95-4184-ab1d-8b185f471b6b-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "135df6c8-ac95-4184-ab1d-8b185f471b6b" (UID: "135df6c8-ac95-4184-ab1d-8b185f471b6b"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 05:45:42.971526 kubelet[2596]: I0508 05:45:42.971340 2596 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/135df6c8-ac95-4184-ab1d-8b185f471b6b-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "135df6c8-ac95-4184-ab1d-8b185f471b6b" (UID: "135df6c8-ac95-4184-ab1d-8b185f471b6b"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 05:45:42.975017 kubelet[2596]: I0508 05:45:42.974890 2596 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/135df6c8-ac95-4184-ab1d-8b185f471b6b-kube-api-access-fvjlj" (OuterVolumeSpecName: "kube-api-access-fvjlj") pod "135df6c8-ac95-4184-ab1d-8b185f471b6b" (UID: "135df6c8-ac95-4184-ab1d-8b185f471b6b"). InnerVolumeSpecName "kube-api-access-fvjlj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 05:45:43.064508 kubelet[2596]: I0508 05:45:43.064317 2596 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fvjlj\" (UniqueName: \"kubernetes.io/projected/135df6c8-ac95-4184-ab1d-8b185f471b6b-kube-api-access-fvjlj\") on node \"ci-4081-3-3-n-fbb7d486d2.novalocal\" DevicePath \"\"" May 8 05:45:43.064508 kubelet[2596]: I0508 05:45:43.064380 2596 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/135df6c8-ac95-4184-ab1d-8b185f471b6b-tigera-ca-bundle\") on node \"ci-4081-3-3-n-fbb7d486d2.novalocal\" DevicePath \"\"" May 8 05:45:43.064508 kubelet[2596]: I0508 05:45:43.064408 2596 reconciler_common.go:288] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/135df6c8-ac95-4184-ab1d-8b185f471b6b-typha-certs\") on node \"ci-4081-3-3-n-fbb7d486d2.novalocal\" DevicePath \"\"" May 8 05:45:43.246503 kubelet[2596]: E0508 05:45:43.245949 2596 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="135df6c8-ac95-4184-ab1d-8b185f471b6b" containerName="calico-typha" May 8 05:45:43.246503 kubelet[2596]: E0508 05:45:43.246112 2596 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d3c40b71-a013-43d8-b8d8-e3eec48008e2" containerName="calico-kube-controllers" May 8 05:45:43.246503 kubelet[2596]: I0508 05:45:43.246227 2596 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3c40b71-a013-43d8-b8d8-e3eec48008e2" containerName="calico-kube-controllers" May 8 05:45:43.246503 kubelet[2596]: I0508 05:45:43.246250 2596 memory_manager.go:354] "RemoveStaleState removing state" podUID="135df6c8-ac95-4184-ab1d-8b185f471b6b" containerName="calico-typha" May 8 05:45:43.270476 systemd[1]: Created slice kubepods-besteffort-pod510e25f5_dff9_471d_bd02_a5f4b90e9b56.slice - libcontainer container kubepods-besteffort-pod510e25f5_dff9_471d_bd02_a5f4b90e9b56.slice. May 8 05:45:43.367955 kubelet[2596]: I0508 05:45:43.367811 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/510e25f5-dff9-471d-bd02-a5f4b90e9b56-tigera-ca-bundle\") pod \"calico-typha-6bd9b676f4-hwg9b\" (UID: \"510e25f5-dff9-471d-bd02-a5f4b90e9b56\") " pod="calico-system/calico-typha-6bd9b676f4-hwg9b" May 8 05:45:43.367955 kubelet[2596]: I0508 05:45:43.367852 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/510e25f5-dff9-471d-bd02-a5f4b90e9b56-typha-certs\") pod \"calico-typha-6bd9b676f4-hwg9b\" (UID: \"510e25f5-dff9-471d-bd02-a5f4b90e9b56\") " pod="calico-system/calico-typha-6bd9b676f4-hwg9b" May 8 05:45:43.367955 kubelet[2596]: I0508 05:45:43.367875 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjj84\" (UniqueName: \"kubernetes.io/projected/510e25f5-dff9-471d-bd02-a5f4b90e9b56-kube-api-access-tjj84\") pod \"calico-typha-6bd9b676f4-hwg9b\" (UID: \"510e25f5-dff9-471d-bd02-a5f4b90e9b56\") " pod="calico-system/calico-typha-6bd9b676f4-hwg9b" May 8 05:45:43.443648 systemd[1]: var-lib-kubelet-pods-135df6c8\x2dac95\x2d4184\x2dab1d\x2d8b185f471b6b-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. 
May 8 05:45:43.444146 systemd[1]: var-lib-kubelet-pods-135df6c8\x2dac95\x2d4184\x2dab1d\x2d8b185f471b6b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfvjlj.mount: Deactivated successfully. May 8 05:45:43.444533 systemd[1]: var-lib-kubelet-pods-135df6c8\x2dac95\x2d4184\x2dab1d\x2d8b185f471b6b-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. May 8 05:45:43.528023 containerd[1459]: time="2025-05-08T05:45:43.527846283Z" level=info msg="CreateContainer within sandbox \"4ee1c59ee705c2b68ad3c81c3e3948e8ed438e855f6dc82295f86f9a8388cfe2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 8 05:45:43.531990 kubelet[2596]: I0508 05:45:43.531364 2596 scope.go:117] "RemoveContainer" containerID="da205c99786250634797b160f814d00ed94c7435472d5645446807e6813b54ed" May 8 05:45:43.535481 containerd[1459]: time="2025-05-08T05:45:43.535282939Z" level=info msg="RemoveContainer for \"da205c99786250634797b160f814d00ed94c7435472d5645446807e6813b54ed\"" May 8 05:45:43.541535 systemd[1]: Removed slice kubepods-besteffort-pod135df6c8_ac95_4184_ab1d_8b185f471b6b.slice - libcontainer container kubepods-besteffort-pod135df6c8_ac95_4184_ab1d_8b185f471b6b.slice. May 8 05:45:43.542931 containerd[1459]: time="2025-05-08T05:45:43.542885162Z" level=info msg="RemoveContainer for \"da205c99786250634797b160f814d00ed94c7435472d5645446807e6813b54ed\" returns successfully" May 8 05:45:43.544854 kubelet[2596]: I0508 05:45:43.544485 2596 scope.go:117] "RemoveContainer" containerID="da205c99786250634797b160f814d00ed94c7435472d5645446807e6813b54ed" May 8 05:45:43.545331 containerd[1459]: time="2025-05-08T05:45:43.545112580Z" level=error msg="ContainerStatus for \"da205c99786250634797b160f814d00ed94c7435472d5645446807e6813b54ed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da205c99786250634797b160f814d00ed94c7435472d5645446807e6813b54ed\": not found" May 8 05:45:43.545604 kubelet[2596]: E0508 05:45:43.545568 2596 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da205c99786250634797b160f814d00ed94c7435472d5645446807e6813b54ed\": not found" containerID="da205c99786250634797b160f814d00ed94c7435472d5645446807e6813b54ed" May 8 05:45:43.545669 kubelet[2596]: I0508 05:45:43.545630 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da205c99786250634797b160f814d00ed94c7435472d5645446807e6813b54ed"} err="failed to get container status \"da205c99786250634797b160f814d00ed94c7435472d5645446807e6813b54ed\": rpc error: code = NotFound desc = an error occurred when try to find container \"da205c99786250634797b160f814d00ed94c7435472d5645446807e6813b54ed\": not found" May 8 05:45:43.561889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1787812568.mount: Deactivated successfully. 
May 8 05:45:43.571713 containerd[1459]: time="2025-05-08T05:45:43.571554955Z" level=info msg="CreateContainer within sandbox \"4ee1c59ee705c2b68ad3c81c3e3948e8ed438e855f6dc82295f86f9a8388cfe2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e8ecaf38d5119964182312916d0191eabb1010d55687c49126e376a5d4ed2fbd\"" May 8 05:45:43.573231 containerd[1459]: time="2025-05-08T05:45:43.573042653Z" level=info msg="StartContainer for \"e8ecaf38d5119964182312916d0191eabb1010d55687c49126e376a5d4ed2fbd\"" May 8 05:45:43.579399 containerd[1459]: time="2025-05-08T05:45:43.579362874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bd9b676f4-hwg9b,Uid:510e25f5-dff9-471d-bd02-a5f4b90e9b56,Namespace:calico-system,Attempt:0,}" May 8 05:45:43.610636 systemd[1]: Started cri-containerd-e8ecaf38d5119964182312916d0191eabb1010d55687c49126e376a5d4ed2fbd.scope - libcontainer container e8ecaf38d5119964182312916d0191eabb1010d55687c49126e376a5d4ed2fbd. May 8 05:45:43.614350 containerd[1459]: time="2025-05-08T05:45:43.614161871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 05:45:43.614350 containerd[1459]: time="2025-05-08T05:45:43.614275530Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 05:45:43.614350 containerd[1459]: time="2025-05-08T05:45:43.614300998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:45:43.614659 containerd[1459]: time="2025-05-08T05:45:43.614395580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:45:43.636917 systemd[1]: Started cri-containerd-9509cc2b61a0bb0fa42d430b5dd22d4085e0eef2fa16a4c971e4dac61d0d3286.scope - libcontainer container 9509cc2b61a0bb0fa42d430b5dd22d4085e0eef2fa16a4c971e4dac61d0d3286. 
May 8 05:45:43.658700 containerd[1459]: time="2025-05-08T05:45:43.658078071Z" level=info msg="StartContainer for \"e8ecaf38d5119964182312916d0191eabb1010d55687c49126e376a5d4ed2fbd\" returns successfully" May 8 05:45:43.701972 containerd[1459]: time="2025-05-08T05:45:43.701924889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bd9b676f4-hwg9b,Uid:510e25f5-dff9-471d-bd02-a5f4b90e9b56,Namespace:calico-system,Attempt:0,} returns sandbox id \"9509cc2b61a0bb0fa42d430b5dd22d4085e0eef2fa16a4c971e4dac61d0d3286\"" May 8 05:45:43.716561 containerd[1459]: time="2025-05-08T05:45:43.716353827Z" level=info msg="CreateContainer within sandbox \"9509cc2b61a0bb0fa42d430b5dd22d4085e0eef2fa16a4c971e4dac61d0d3286\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 8 05:45:43.742134 containerd[1459]: time="2025-05-08T05:45:43.741118449Z" level=info msg="CreateContainer within sandbox \"9509cc2b61a0bb0fa42d430b5dd22d4085e0eef2fa16a4c971e4dac61d0d3286\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"525b52460de1f9ed3677745087aa2ccbfec0c190c5c1219f12214ac82c2b0c51\"" May 8 05:45:43.744326 containerd[1459]: time="2025-05-08T05:45:43.742531392Z" level=info msg="StartContainer for \"525b52460de1f9ed3677745087aa2ccbfec0c190c5c1219f12214ac82c2b0c51\"" May 8 05:45:43.775608 systemd[1]: Started cri-containerd-525b52460de1f9ed3677745087aa2ccbfec0c190c5c1219f12214ac82c2b0c51.scope - libcontainer container 525b52460de1f9ed3677745087aa2ccbfec0c190c5c1219f12214ac82c2b0c51. May 8 05:45:43.840561 containerd[1459]: time="2025-05-08T05:45:43.840385395Z" level=info msg="StartContainer for \"525b52460de1f9ed3677745087aa2ccbfec0c190c5c1219f12214ac82c2b0c51\" returns successfully" May 8 05:45:43.912473 kubelet[2596]: I0508 05:45:43.912119 2596 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="135df6c8-ac95-4184-ab1d-8b185f471b6b" path="/var/lib/kubelet/pods/135df6c8-ac95-4184-ab1d-8b185f471b6b/volumes" May 8 05:45:43.913844 kubelet[2596]: I0508 05:45:43.913816 2596 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3c40b71-a013-43d8-b8d8-e3eec48008e2" path="/var/lib/kubelet/pods/d3c40b71-a013-43d8-b8d8-e3eec48008e2/volumes" May 8 05:45:43.914627 kubelet[2596]: I0508 05:45:43.914489 2596 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db0c66d8-a349-4a7c-a2b1-4bc252479a68" path="/var/lib/kubelet/pods/db0c66d8-a349-4a7c-a2b1-4bc252479a68/volumes" May 8 05:45:44.367054 containerd[1459]: time="2025-05-08T05:45:44.366859630Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" May 8 05:45:44.369773 systemd[1]: cri-containerd-e8ecaf38d5119964182312916d0191eabb1010d55687c49126e376a5d4ed2fbd.scope: Deactivated successfully. 
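The cni reload failure just above, "unexpected end of JSON input", is the classic symptom of reading a config file mid-write: the install-cni container was still rewriting files under /etc/cni/net.d when the WRITE event fired, so 10-calico.conflist was momentarily incomplete, and containerd simply retried after the next complete write. A sketch reproducing the exact error with encoding/json on a truncated document:

package main

import (
	"encoding/json"
	"fmt"
)

type confList struct {
	Name    string            `json:"name"`
	Plugins []json.RawMessage `json:"plugins"`
}

func main() {
	full := []byte(`{"name":"k8s-pod-network","plugins":[{"type":"calico"}]}`)
	truncated := full[:len(full)/2] // what a reader sees mid-write

	var c confList
	if err := json.Unmarshal(truncated, &c); err != nil {
		// Prints "unexpected end of JSON input", matching the log line.
		fmt.Println("cni config load failed:", err)
	}
}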
May 8 05:45:44.402425 containerd[1459]: time="2025-05-08T05:45:44.402237175Z" level=info msg="shim disconnected" id=e8ecaf38d5119964182312916d0191eabb1010d55687c49126e376a5d4ed2fbd namespace=k8s.io May 8 05:45:44.402425 containerd[1459]: time="2025-05-08T05:45:44.402287491Z" level=warning msg="cleaning up after shim disconnected" id=e8ecaf38d5119964182312916d0191eabb1010d55687c49126e376a5d4ed2fbd namespace=k8s.io May 8 05:45:44.402425 containerd[1459]: time="2025-05-08T05:45:44.402297982Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 05:45:44.569088 containerd[1459]: time="2025-05-08T05:45:44.569020094Z" level=info msg="CreateContainer within sandbox \"4ee1c59ee705c2b68ad3c81c3e3948e8ed438e855f6dc82295f86f9a8388cfe2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 8 05:45:44.597166 containerd[1459]: time="2025-05-08T05:45:44.597114445Z" level=info msg="CreateContainer within sandbox \"4ee1c59ee705c2b68ad3c81c3e3948e8ed438e855f6dc82295f86f9a8388cfe2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e0e59ed5416c981880e38662d9e0efa1aa7144ba6be87aea5ad050f270e47d1a\"" May 8 05:45:44.598653 containerd[1459]: time="2025-05-08T05:45:44.598632660Z" level=info msg="StartContainer for \"e0e59ed5416c981880e38662d9e0efa1aa7144ba6be87aea5ad050f270e47d1a\"" May 8 05:45:44.612348 kubelet[2596]: I0508 05:45:44.612298 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6bd9b676f4-hwg9b" podStartSLOduration=3.6122785 podStartE2EDuration="3.6122785s" podCreationTimestamp="2025-05-08 05:45:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 05:45:44.60883061 +0000 UTC m=+76.885431823" watchObservedRunningTime="2025-05-08 05:45:44.6122785 +0000 UTC m=+76.888879743" May 8 05:45:44.647604 systemd[1]: Started cri-containerd-e0e59ed5416c981880e38662d9e0efa1aa7144ba6be87aea5ad050f270e47d1a.scope - libcontainer container e0e59ed5416c981880e38662d9e0efa1aa7144ba6be87aea5ad050f270e47d1a. May 8 05:45:44.688490 containerd[1459]: time="2025-05-08T05:45:44.688426542Z" level=info msg="StartContainer for \"e0e59ed5416c981880e38662d9e0efa1aa7144ba6be87aea5ad050f270e47d1a\" returns successfully" May 8 05:45:44.806750 systemd[1]: Created slice kubepods-besteffort-pode5a19af5_6cd5_427c_9725_39c2355648ac.slice - libcontainer container kubepods-besteffort-pode5a19af5_6cd5_427c_9725_39c2355648ac.slice. 
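The cgroup slices systemd creates and removes in this log follow the kubelet's naming scheme: the pod's QoS class plus its UID with dashes mapped to underscores. A small sketch that reproduces the slice name just created for the new calico-kube-controllers pod:

package main

import (
	"fmt"
	"strings"
)

// sliceFor reproduces the slice names visible in the log: the pod UID's
// dashes become underscores inside kubepods-besteffort-pod<uid>.slice.
func sliceFor(podUID string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	fmt.Println(sliceFor("e5a19af5-6cd5-427c-9725-39c2355648ac"))
	// kubepods-besteffort-pode5a19af5_6cd5_427c_9725_39c2355648ac.slice
}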
May 8 05:45:44.878738 kubelet[2596]: I0508 05:45:44.878612 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62v6s\" (UniqueName: \"kubernetes.io/projected/e5a19af5-6cd5-427c-9725-39c2355648ac-kube-api-access-62v6s\") pod \"calico-kube-controllers-6c46dd9695-v2qbf\" (UID: \"e5a19af5-6cd5-427c-9725-39c2355648ac\") " pod="calico-system/calico-kube-controllers-6c46dd9695-v2qbf" May 8 05:45:44.878738 kubelet[2596]: I0508 05:45:44.878682 2596 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5a19af5-6cd5-427c-9725-39c2355648ac-tigera-ca-bundle\") pod \"calico-kube-controllers-6c46dd9695-v2qbf\" (UID: \"e5a19af5-6cd5-427c-9725-39c2355648ac\") " pod="calico-system/calico-kube-controllers-6c46dd9695-v2qbf" May 8 05:45:45.113084 containerd[1459]: time="2025-05-08T05:45:45.112965693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c46dd9695-v2qbf,Uid:e5a19af5-6cd5-427c-9725-39c2355648ac,Namespace:calico-system,Attempt:0,}" May 8 05:45:45.245326 systemd-networkd[1373]: caliaad661d7553: Link UP May 8 05:45:45.246111 systemd-networkd[1373]: caliaad661d7553: Gained carrier May 8 05:45:45.265227 containerd[1459]: 2025-05-08 05:45:45.163 [INFO][6417] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--6c46dd9695--v2qbf-eth0 calico-kube-controllers-6c46dd9695- calico-system e5a19af5-6cd5-427c-9725-39c2355648ac 1137 0 2025-05-08 05:45:42 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6c46dd9695 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-3-n-fbb7d486d2.novalocal calico-kube-controllers-6c46dd9695-v2qbf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliaad661d7553 [] []}} ContainerID="118ee8d3a09d8883c7d24441346ca1d07298a61ee63806c2a35402617d995c0a" Namespace="calico-system" Pod="calico-kube-controllers-6c46dd9695-v2qbf" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--6c46dd9695--v2qbf-" May 8 05:45:45.265227 containerd[1459]: 2025-05-08 05:45:45.163 [INFO][6417] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="118ee8d3a09d8883c7d24441346ca1d07298a61ee63806c2a35402617d995c0a" Namespace="calico-system" Pod="calico-kube-controllers-6c46dd9695-v2qbf" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--6c46dd9695--v2qbf-eth0" May 8 05:45:45.265227 containerd[1459]: 2025-05-08 05:45:45.197 [INFO][6429] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="118ee8d3a09d8883c7d24441346ca1d07298a61ee63806c2a35402617d995c0a" HandleID="k8s-pod-network.118ee8d3a09d8883c7d24441346ca1d07298a61ee63806c2a35402617d995c0a" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--6c46dd9695--v2qbf-eth0" May 8 05:45:45.265227 containerd[1459]: 2025-05-08 05:45:45.208 [INFO][6429] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="118ee8d3a09d8883c7d24441346ca1d07298a61ee63806c2a35402617d995c0a" HandleID="k8s-pod-network.118ee8d3a09d8883c7d24441346ca1d07298a61ee63806c2a35402617d995c0a" 
Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--6c46dd9695--v2qbf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003322b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-fbb7d486d2.novalocal", "pod":"calico-kube-controllers-6c46dd9695-v2qbf", "timestamp":"2025-05-08 05:45:45.197606151 +0000 UTC"}, Hostname:"ci-4081-3-3-n-fbb7d486d2.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 05:45:45.265227 containerd[1459]: 2025-05-08 05:45:45.208 [INFO][6429] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:45:45.265227 containerd[1459]: 2025-05-08 05:45:45.208 [INFO][6429] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:45:45.265227 containerd[1459]: 2025-05-08 05:45:45.208 [INFO][6429] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-fbb7d486d2.novalocal' May 8 05:45:45.265227 containerd[1459]: 2025-05-08 05:45:45.210 [INFO][6429] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.118ee8d3a09d8883c7d24441346ca1d07298a61ee63806c2a35402617d995c0a" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:45.265227 containerd[1459]: 2025-05-08 05:45:45.215 [INFO][6429] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:45.265227 containerd[1459]: 2025-05-08 05:45:45.220 [INFO][6429] ipam/ipam.go 489: Trying affinity for 192.168.47.128/26 host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:45.265227 containerd[1459]: 2025-05-08 05:45:45.222 [INFO][6429] ipam/ipam.go 155: Attempting to load block cidr=192.168.47.128/26 host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:45.265227 containerd[1459]: 2025-05-08 05:45:45.224 [INFO][6429] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:45.265227 containerd[1459]: 2025-05-08 05:45:45.224 [INFO][6429] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.118ee8d3a09d8883c7d24441346ca1d07298a61ee63806c2a35402617d995c0a" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:45.265227 containerd[1459]: 2025-05-08 05:45:45.226 [INFO][6429] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.118ee8d3a09d8883c7d24441346ca1d07298a61ee63806c2a35402617d995c0a May 8 05:45:45.265227 containerd[1459]: 2025-05-08 05:45:45.231 [INFO][6429] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.118ee8d3a09d8883c7d24441346ca1d07298a61ee63806c2a35402617d995c0a" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:45.265227 containerd[1459]: 2025-05-08 05:45:45.240 [INFO][6429] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.47.137/26] block=192.168.47.128/26 handle="k8s-pod-network.118ee8d3a09d8883c7d24441346ca1d07298a61ee63806c2a35402617d995c0a" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:45.265227 containerd[1459]: 2025-05-08 05:45:45.240 [INFO][6429] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.137/26] handle="k8s-pod-network.118ee8d3a09d8883c7d24441346ca1d07298a61ee63806c2a35402617d995c0a" host="ci-4081-3-3-n-fbb7d486d2.novalocal" May 8 05:45:45.265227 containerd[1459]: 2025-05-08 05:45:45.240 [INFO][6429] ipam/ipam_plugin.go 
374: Released host-wide IPAM lock. May 8 05:45:45.265227 containerd[1459]: 2025-05-08 05:45:45.240 [INFO][6429] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.137/26] IPv6=[] ContainerID="118ee8d3a09d8883c7d24441346ca1d07298a61ee63806c2a35402617d995c0a" HandleID="k8s-pod-network.118ee8d3a09d8883c7d24441346ca1d07298a61ee63806c2a35402617d995c0a" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--6c46dd9695--v2qbf-eth0" May 8 05:45:45.265999 containerd[1459]: 2025-05-08 05:45:45.242 [INFO][6417] cni-plugin/k8s.go 386: Populated endpoint ContainerID="118ee8d3a09d8883c7d24441346ca1d07298a61ee63806c2a35402617d995c0a" Namespace="calico-system" Pod="calico-kube-controllers-6c46dd9695-v2qbf" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--6c46dd9695--v2qbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--6c46dd9695--v2qbf-eth0", GenerateName:"calico-kube-controllers-6c46dd9695-", Namespace:"calico-system", SelfLink:"", UID:"e5a19af5-6cd5-427c-9725-39c2355648ac", ResourceVersion:"1137", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 45, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c46dd9695", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"", Pod:"calico-kube-controllers-6c46dd9695-v2qbf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaad661d7553", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:45.265999 containerd[1459]: 2025-05-08 05:45:45.242 [INFO][6417] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.47.137/32] ContainerID="118ee8d3a09d8883c7d24441346ca1d07298a61ee63806c2a35402617d995c0a" Namespace="calico-system" Pod="calico-kube-controllers-6c46dd9695-v2qbf" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--6c46dd9695--v2qbf-eth0" May 8 05:45:45.265999 containerd[1459]: 2025-05-08 05:45:45.242 [INFO][6417] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaad661d7553 ContainerID="118ee8d3a09d8883c7d24441346ca1d07298a61ee63806c2a35402617d995c0a" Namespace="calico-system" Pod="calico-kube-controllers-6c46dd9695-v2qbf" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--6c46dd9695--v2qbf-eth0" May 8 05:45:45.265999 containerd[1459]: 2025-05-08 05:45:45.245 [INFO][6417] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="118ee8d3a09d8883c7d24441346ca1d07298a61ee63806c2a35402617d995c0a" Namespace="calico-system" Pod="calico-kube-controllers-6c46dd9695-v2qbf" 
WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--6c46dd9695--v2qbf-eth0" May 8 05:45:45.265999 containerd[1459]: 2025-05-08 05:45:45.245 [INFO][6417] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="118ee8d3a09d8883c7d24441346ca1d07298a61ee63806c2a35402617d995c0a" Namespace="calico-system" Pod="calico-kube-controllers-6c46dd9695-v2qbf" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--6c46dd9695--v2qbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--6c46dd9695--v2qbf-eth0", GenerateName:"calico-kube-controllers-6c46dd9695-", Namespace:"calico-system", SelfLink:"", UID:"e5a19af5-6cd5-427c-9725-39c2355648ac", ResourceVersion:"1137", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 45, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c46dd9695", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-fbb7d486d2.novalocal", ContainerID:"118ee8d3a09d8883c7d24441346ca1d07298a61ee63806c2a35402617d995c0a", Pod:"calico-kube-controllers-6c46dd9695-v2qbf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaad661d7553", MAC:"ae:de:ea:5c:9d:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:45:45.265999 containerd[1459]: 2025-05-08 05:45:45.262 [INFO][6417] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="118ee8d3a09d8883c7d24441346ca1d07298a61ee63806c2a35402617d995c0a" Namespace="calico-system" Pod="calico-kube-controllers-6c46dd9695-v2qbf" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--6c46dd9695--v2qbf-eth0" May 8 05:45:45.287660 containerd[1459]: time="2025-05-08T05:45:45.287006212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 05:45:45.287660 containerd[1459]: time="2025-05-08T05:45:45.287070516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 05:45:45.287660 containerd[1459]: time="2025-05-08T05:45:45.287089773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:45:45.287660 containerd[1459]: time="2025-05-08T05:45:45.287163935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:45:45.305599 systemd[1]: Started cri-containerd-118ee8d3a09d8883c7d24441346ca1d07298a61ee63806c2a35402617d995c0a.scope - libcontainer container 118ee8d3a09d8883c7d24441346ca1d07298a61ee63806c2a35402617d995c0a. May 8 05:45:45.344924 containerd[1459]: time="2025-05-08T05:45:45.344872961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c46dd9695-v2qbf,Uid:e5a19af5-6cd5-427c-9725-39c2355648ac,Namespace:calico-system,Attempt:0,} returns sandbox id \"118ee8d3a09d8883c7d24441346ca1d07298a61ee63806c2a35402617d995c0a\"" May 8 05:45:45.353689 containerd[1459]: time="2025-05-08T05:45:45.353657549Z" level=info msg="CreateContainer within sandbox \"118ee8d3a09d8883c7d24441346ca1d07298a61ee63806c2a35402617d995c0a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 8 05:45:45.370167 containerd[1459]: time="2025-05-08T05:45:45.369407594Z" level=info msg="CreateContainer within sandbox \"118ee8d3a09d8883c7d24441346ca1d07298a61ee63806c2a35402617d995c0a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"fef472dc1815fd95b46ef8ac0151819cf169c87a0477a8cb234b7961385fdd64\"" May 8 05:45:45.370807 containerd[1459]: time="2025-05-08T05:45:45.370641662Z" level=info msg="StartContainer for \"fef472dc1815fd95b46ef8ac0151819cf169c87a0477a8cb234b7961385fdd64\"" May 8 05:45:45.401598 systemd[1]: Started cri-containerd-fef472dc1815fd95b46ef8ac0151819cf169c87a0477a8cb234b7961385fdd64.scope - libcontainer container fef472dc1815fd95b46ef8ac0151819cf169c87a0477a8cb234b7961385fdd64. May 8 05:45:45.454786 containerd[1459]: time="2025-05-08T05:45:45.454747674Z" level=info msg="StartContainer for \"fef472dc1815fd95b46ef8ac0151819cf169c87a0477a8cb234b7961385fdd64\" returns successfully" May 8 05:45:45.590886 kubelet[2596]: I0508 05:45:45.590265 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-69k2h" podStartSLOduration=4.590234118 podStartE2EDuration="4.590234118s" podCreationTimestamp="2025-05-08 05:45:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 05:45:45.587364851 +0000 UTC m=+77.863966115" watchObservedRunningTime="2025-05-08 05:45:45.590234118 +0000 UTC m=+77.866835381" May 8 05:45:45.606673 systemd[1]: run-containerd-runc-k8s.io-fef472dc1815fd95b46ef8ac0151819cf169c87a0477a8cb234b7961385fdd64-runc.pQqY0o.mount: Deactivated successfully. May 8 05:45:45.629376 kubelet[2596]: I0508 05:45:45.629329 2596 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6c46dd9695-v2qbf" podStartSLOduration=3.629310947 podStartE2EDuration="3.629310947s" podCreationTimestamp="2025-05-08 05:45:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 05:45:45.628199004 +0000 UTC m=+77.904800227" watchObservedRunningTime="2025-05-08 05:45:45.629310947 +0000 UTC m=+77.905912170" May 8 05:45:46.625802 systemd[1]: run-containerd-runc-k8s.io-e0e59ed5416c981880e38662d9e0efa1aa7144ba6be87aea5ad050f270e47d1a-runc.90pVOm.mount: Deactivated successfully. 
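In the block-affinity walk a few lines above, IPAM confirmed this host's claim on 192.168.47.128/26 and handed 192.168.47.137 to the new pod. A toy first-free allocator over the same /26 using net/netip; allocate is a hypothetical simplification of the real block-claiming logic, which also persists the claim to the datastore:

package main

import (
	"fmt"
	"net/netip"
)

// allocate returns the first address in block not present in used, a toy
// version of "Attempting to assign 1 addresses from block".
func allocate(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			used[a] = true
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.47.128/26")
	used := map[netip.Addr]bool{}
	// Pretend .128 through .136 were claimed by earlier pods on this node.
	for a := block.Addr(); a.Less(netip.MustParseAddr("192.168.47.137")); a = a.Next() {
		used[a] = true
	}
	if a, ok := allocate(block, used); ok {
		fmt.Println("Successfully claimed", a) // 192.168.47.137
	}
}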
May 8 05:45:47.013761 systemd-networkd[1373]: caliaad661d7553: Gained IPv6LL May 8 05:46:08.518276 kubelet[2596]: I0508 05:46:08.517228 2596 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 05:46:08.674730 containerd[1459]: time="2025-05-08T05:46:08.674667627Z" level=info msg="StopContainer for \"81a60041ef4399c7356efde0ae96be417dfe941aebf0d14af32dc68360d8565f\" with timeout 30 (s)" May 8 05:46:08.676155 containerd[1459]: time="2025-05-08T05:46:08.675678161Z" level=info msg="Stop container \"81a60041ef4399c7356efde0ae96be417dfe941aebf0d14af32dc68360d8565f\" with signal terminated" May 8 05:46:08.734207 systemd[1]: cri-containerd-81a60041ef4399c7356efde0ae96be417dfe941aebf0d14af32dc68360d8565f.scope: Deactivated successfully. May 8 05:46:08.782610 containerd[1459]: time="2025-05-08T05:46:08.780852591Z" level=info msg="shim disconnected" id=81a60041ef4399c7356efde0ae96be417dfe941aebf0d14af32dc68360d8565f namespace=k8s.io May 8 05:46:08.782610 containerd[1459]: time="2025-05-08T05:46:08.780924318Z" level=warning msg="cleaning up after shim disconnected" id=81a60041ef4399c7356efde0ae96be417dfe941aebf0d14af32dc68360d8565f namespace=k8s.io May 8 05:46:08.782610 containerd[1459]: time="2025-05-08T05:46:08.780940008Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 05:46:08.784841 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81a60041ef4399c7356efde0ae96be417dfe941aebf0d14af32dc68360d8565f-rootfs.mount: Deactivated successfully. May 8 05:46:08.827619 containerd[1459]: time="2025-05-08T05:46:08.827552623Z" level=info msg="StopContainer for \"81a60041ef4399c7356efde0ae96be417dfe941aebf0d14af32dc68360d8565f\" returns successfully" May 8 05:46:08.828133 containerd[1459]: time="2025-05-08T05:46:08.828051493Z" level=info msg="StopPodSandbox for \"e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94\"" May 8 05:46:08.828133 containerd[1459]: time="2025-05-08T05:46:08.828125624Z" level=info msg="Container to stop \"81a60041ef4399c7356efde0ae96be417dfe941aebf0d14af32dc68360d8565f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 05:46:08.832084 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94-shm.mount: Deactivated successfully. May 8 05:46:08.838626 systemd[1]: cri-containerd-e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94.scope: Deactivated successfully. May 8 05:46:08.870898 containerd[1459]: time="2025-05-08T05:46:08.870529126Z" level=info msg="shim disconnected" id=e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94 namespace=k8s.io May 8 05:46:08.870898 containerd[1459]: time="2025-05-08T05:46:08.870603006Z" level=warning msg="cleaning up after shim disconnected" id=e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94 namespace=k8s.io May 8 05:46:08.870898 containerd[1459]: time="2025-05-08T05:46:08.870616682Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 05:46:08.874621 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94-rootfs.mount: Deactivated successfully. 
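"StopContainer ... with timeout 30 (s)" followed by "Stop container ... with signal terminated" is the standard two-phase stop: SIGTERM first, SIGKILL only if the deadline passes. A POSIX-only sketch of that contract against an ordinary child process, not the real CRI code path:

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithTimeout sends SIGTERM and escalates to SIGKILL if the process has
// not exited within d, the same contract StopContainer's timeout expresses.
func stopWithTimeout(cmd *exec.Cmd, d time.Duration) error {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	_ = cmd.Process.Signal(syscall.SIGTERM) // "with signal terminated"
	select {
	case err := <-done:
		return err // exited on its own within the grace period
	case <-time.After(d):
		_ = cmd.Process.Kill() // deadline hit: force kill
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// sleep dies on SIGTERM, so this reports "signal: terminated".
	fmt.Println("stop result:", stopWithTimeout(cmd, 2*time.Second))
}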
May 8 05:46:08.987738 systemd-networkd[1373]: cali4e42d561552: Link DOWN May 8 05:46:08.987754 systemd-networkd[1373]: cali4e42d561552: Lost carrier May 8 05:46:09.105511 containerd[1459]: 2025-05-08 05:46:08.984 [INFO][6910] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" May 8 05:46:09.105511 containerd[1459]: 2025-05-08 05:46:08.984 [INFO][6910] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" iface="eth0" netns="/var/run/netns/cni-5c1362f3-fcf0-f52b-bbc9-20612e00f8b1" May 8 05:46:09.105511 containerd[1459]: 2025-05-08 05:46:08.985 [INFO][6910] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" iface="eth0" netns="/var/run/netns/cni-5c1362f3-fcf0-f52b-bbc9-20612e00f8b1" May 8 05:46:09.105511 containerd[1459]: 2025-05-08 05:46:08.998 [INFO][6910] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" after=14.017128ms iface="eth0" netns="/var/run/netns/cni-5c1362f3-fcf0-f52b-bbc9-20612e00f8b1" May 8 05:46:09.105511 containerd[1459]: 2025-05-08 05:46:08.999 [INFO][6910] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" May 8 05:46:09.105511 containerd[1459]: 2025-05-08 05:46:08.999 [INFO][6910] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" May 8 05:46:09.105511 containerd[1459]: 2025-05-08 05:46:09.035 [INFO][6920] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" HandleID="k8s-pod-network.e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0" May 8 05:46:09.105511 containerd[1459]: 2025-05-08 05:46:09.035 [INFO][6920] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:46:09.105511 containerd[1459]: 2025-05-08 05:46:09.035 [INFO][6920] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:46:09.105511 containerd[1459]: 2025-05-08 05:46:09.098 [INFO][6920] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" HandleID="k8s-pod-network.e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0" May 8 05:46:09.105511 containerd[1459]: 2025-05-08 05:46:09.098 [INFO][6920] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" HandleID="k8s-pod-network.e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0" May 8 05:46:09.105511 containerd[1459]: 2025-05-08 05:46:09.100 [INFO][6920] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:46:09.105511 containerd[1459]: 2025-05-08 05:46:09.101 [INFO][6910] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" May 8 05:46:09.106283 containerd[1459]: time="2025-05-08T05:46:09.106125880Z" level=info msg="TearDown network for sandbox \"e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94\" successfully" May 8 05:46:09.106283 containerd[1459]: time="2025-05-08T05:46:09.106167309Z" level=info msg="StopPodSandbox for \"e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94\" returns successfully" May 8 05:46:09.107654 systemd[1]: run-netns-cni\x2d5c1362f3\x2dfcf0\x2df52b\x2dbbc9\x2d20612e00f8b1.mount: Deactivated successfully. May 8 05:46:09.272411 kubelet[2596]: I0508 05:46:09.272329 2596 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2af1a327-7716-4aad-bb55-55682f8973c2-calico-apiserver-certs\") pod \"2af1a327-7716-4aad-bb55-55682f8973c2\" (UID: \"2af1a327-7716-4aad-bb55-55682f8973c2\") " May 8 05:46:09.272745 kubelet[2596]: I0508 05:46:09.272505 2596 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qclt\" (UniqueName: \"kubernetes.io/projected/2af1a327-7716-4aad-bb55-55682f8973c2-kube-api-access-5qclt\") pod \"2af1a327-7716-4aad-bb55-55682f8973c2\" (UID: \"2af1a327-7716-4aad-bb55-55682f8973c2\") " May 8 05:46:09.281784 kubelet[2596]: I0508 05:46:09.280372 2596 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2af1a327-7716-4aad-bb55-55682f8973c2-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "2af1a327-7716-4aad-bb55-55682f8973c2" (UID: "2af1a327-7716-4aad-bb55-55682f8973c2"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 05:46:09.282008 kubelet[2596]: I0508 05:46:09.281896 2596 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2af1a327-7716-4aad-bb55-55682f8973c2-kube-api-access-5qclt" (OuterVolumeSpecName: "kube-api-access-5qclt") pod "2af1a327-7716-4aad-bb55-55682f8973c2" (UID: "2af1a327-7716-4aad-bb55-55682f8973c2"). InnerVolumeSpecName "kube-api-access-5qclt". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 05:46:09.289481 systemd[1]: var-lib-kubelet-pods-2af1a327\x2d7716\x2d4aad\x2dbb55\x2d55682f8973c2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5qclt.mount: Deactivated successfully. May 8 05:46:09.289792 systemd[1]: var-lib-kubelet-pods-2af1a327\x2d7716\x2d4aad\x2dbb55\x2d55682f8973c2-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
May 8 05:46:09.374086 kubelet[2596]: I0508 05:46:09.373294 2596 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5qclt\" (UniqueName: \"kubernetes.io/projected/2af1a327-7716-4aad-bb55-55682f8973c2-kube-api-access-5qclt\") on node \"ci-4081-3-3-n-fbb7d486d2.novalocal\" DevicePath \"\"" May 8 05:46:09.374086 kubelet[2596]: I0508 05:46:09.373366 2596 reconciler_common.go:288] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2af1a327-7716-4aad-bb55-55682f8973c2-calico-apiserver-certs\") on node \"ci-4081-3-3-n-fbb7d486d2.novalocal\" DevicePath \"\"" May 8 05:46:09.669008 kubelet[2596]: I0508 05:46:09.668917 2596 scope.go:117] "RemoveContainer" containerID="81a60041ef4399c7356efde0ae96be417dfe941aebf0d14af32dc68360d8565f" May 8 05:46:09.674420 containerd[1459]: time="2025-05-08T05:46:09.674374984Z" level=info msg="RemoveContainer for \"81a60041ef4399c7356efde0ae96be417dfe941aebf0d14af32dc68360d8565f\"" May 8 05:46:09.681367 systemd[1]: Removed slice kubepods-besteffort-pod2af1a327_7716_4aad_bb55_55682f8973c2.slice - libcontainer container kubepods-besteffort-pod2af1a327_7716_4aad_bb55_55682f8973c2.slice. May 8 05:46:09.685399 containerd[1459]: time="2025-05-08T05:46:09.685366826Z" level=info msg="RemoveContainer for \"81a60041ef4399c7356efde0ae96be417dfe941aebf0d14af32dc68360d8565f\" returns successfully" May 8 05:46:09.686403 kubelet[2596]: I0508 05:46:09.686062 2596 scope.go:117] "RemoveContainer" containerID="81a60041ef4399c7356efde0ae96be417dfe941aebf0d14af32dc68360d8565f" May 8 05:46:09.687116 containerd[1459]: time="2025-05-08T05:46:09.686514710Z" level=error msg="ContainerStatus for \"81a60041ef4399c7356efde0ae96be417dfe941aebf0d14af32dc68360d8565f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"81a60041ef4399c7356efde0ae96be417dfe941aebf0d14af32dc68360d8565f\": not found" May 8 05:46:09.687205 kubelet[2596]: E0508 05:46:09.687034 2596 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"81a60041ef4399c7356efde0ae96be417dfe941aebf0d14af32dc68360d8565f\": not found" containerID="81a60041ef4399c7356efde0ae96be417dfe941aebf0d14af32dc68360d8565f" May 8 05:46:09.687205 kubelet[2596]: I0508 05:46:09.687075 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"81a60041ef4399c7356efde0ae96be417dfe941aebf0d14af32dc68360d8565f"} err="failed to get container status \"81a60041ef4399c7356efde0ae96be417dfe941aebf0d14af32dc68360d8565f\": rpc error: code = NotFound desc = an error occurred when try to find container \"81a60041ef4399c7356efde0ae96be417dfe941aebf0d14af32dc68360d8565f\": not found" May 8 05:46:09.911535 kubelet[2596]: I0508 05:46:09.911410 2596 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2af1a327-7716-4aad-bb55-55682f8973c2" path="/var/lib/kubelet/pods/2af1a327-7716-4aad-bb55-55682f8973c2/volumes" May 8 05:46:12.236146 systemd[1]: run-containerd-runc-k8s.io-e0e59ed5416c981880e38662d9e0efa1aa7144ba6be87aea5ad050f270e47d1a-runc.G0ckzX.mount: Deactivated successfully. 
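The NotFound error above is expected rather than a failure: RemoveContainer succeeds, then a ContainerStatus call races with the deletion, and kubelet simply records that the container is already gone. A minimal sketch of that tolerant pattern, with a plain dict standing in for the runtime (all names here are hypothetical):

```python
# Treat "not found" on a status probe after deletion as success, not error.
class NotFoundError(Exception):
    pass

class FakeRuntime:
    def __init__(self) -> None:
        self.containers = {"81a60041": "CONTAINER_EXITED"}

    def remove(self, cid: str) -> None:
        self.containers.pop(cid, None)     # idempotent: double-remove is a no-op

    def status(self, cid: str) -> str:
        try:
            return self.containers[cid]
        except KeyError:
            raise NotFoundError(cid) from None

rt = FakeRuntime()
rt.remove("81a60041")
try:
    rt.status("81a60041")
except NotFoundError:
    print("container already removed; nothing to do")  # log and move on
```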
May 8 05:46:29.273913 containerd[1459]: time="2025-05-08T05:46:29.272404187Z" level=info msg="StopPodSandbox for \"d51d879fae8ad3f57108abc95ec7fa21e3524736deb17c975157d40594cd8848\"" May 8 05:46:29.273913 containerd[1459]: time="2025-05-08T05:46:29.273018572Z" level=info msg="TearDown network for sandbox \"d51d879fae8ad3f57108abc95ec7fa21e3524736deb17c975157d40594cd8848\" successfully" May 8 05:46:29.273913 containerd[1459]: time="2025-05-08T05:46:29.273073145Z" level=info msg="StopPodSandbox for \"d51d879fae8ad3f57108abc95ec7fa21e3524736deb17c975157d40594cd8848\" returns successfully" May 8 05:46:29.279187 containerd[1459]: time="2025-05-08T05:46:29.275361055Z" level=info msg="RemovePodSandbox for \"d51d879fae8ad3f57108abc95ec7fa21e3524736deb17c975157d40594cd8848\"" May 8 05:46:29.279187 containerd[1459]: time="2025-05-08T05:46:29.275505280Z" level=info msg="Forcibly stopping sandbox \"d51d879fae8ad3f57108abc95ec7fa21e3524736deb17c975157d40594cd8848\"" May 8 05:46:29.279187 containerd[1459]: time="2025-05-08T05:46:29.275688628Z" level=info msg="TearDown network for sandbox \"d51d879fae8ad3f57108abc95ec7fa21e3524736deb17c975157d40594cd8848\" successfully" May 8 05:46:29.286696 containerd[1459]: time="2025-05-08T05:46:29.286589156Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d51d879fae8ad3f57108abc95ec7fa21e3524736deb17c975157d40594cd8848\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 05:46:29.286906 containerd[1459]: time="2025-05-08T05:46:29.286788093Z" level=info msg="RemovePodSandbox \"d51d879fae8ad3f57108abc95ec7fa21e3524736deb17c975157d40594cd8848\" returns successfully" May 8 05:46:29.288075 containerd[1459]: time="2025-05-08T05:46:29.287849557Z" level=info msg="StopPodSandbox for \"8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718\"" May 8 05:46:29.492502 containerd[1459]: 2025-05-08 05:46:29.402 [WARNING][7004] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0" May 8 05:46:29.492502 containerd[1459]: 2025-05-08 05:46:29.402 [INFO][7004] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" May 8 05:46:29.492502 containerd[1459]: 2025-05-08 05:46:29.402 [INFO][7004] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" iface="eth0" netns="" May 8 05:46:29.492502 containerd[1459]: 2025-05-08 05:46:29.402 [INFO][7004] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" May 8 05:46:29.492502 containerd[1459]: 2025-05-08 05:46:29.402 [INFO][7004] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" May 8 05:46:29.492502 containerd[1459]: 2025-05-08 05:46:29.477 [INFO][7011] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" HandleID="k8s-pod-network.8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0" May 8 05:46:29.492502 containerd[1459]: 2025-05-08 05:46:29.478 [INFO][7011] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:46:29.492502 containerd[1459]: 2025-05-08 05:46:29.478 [INFO][7011] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:46:29.492502 containerd[1459]: 2025-05-08 05:46:29.487 [WARNING][7011] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" HandleID="k8s-pod-network.8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0" May 8 05:46:29.492502 containerd[1459]: 2025-05-08 05:46:29.487 [INFO][7011] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" HandleID="k8s-pod-network.8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0" May 8 05:46:29.492502 containerd[1459]: 2025-05-08 05:46:29.489 [INFO][7011] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:46:29.492502 containerd[1459]: 2025-05-08 05:46:29.490 [INFO][7004] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" May 8 05:46:29.492502 containerd[1459]: time="2025-05-08T05:46:29.492312481Z" level=info msg="TearDown network for sandbox \"8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718\" successfully" May 8 05:46:29.492502 containerd[1459]: time="2025-05-08T05:46:29.492352507Z" level=info msg="StopPodSandbox for \"8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718\" returns successfully" May 8 05:46:29.493694 containerd[1459]: time="2025-05-08T05:46:29.492972984Z" level=info msg="RemovePodSandbox for \"8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718\"" May 8 05:46:29.493694 containerd[1459]: time="2025-05-08T05:46:29.493018921Z" level=info msg="Forcibly stopping sandbox \"8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718\"" May 8 05:46:29.584267 containerd[1459]: 2025-05-08 05:46:29.535 [WARNING][7029] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0" May 8 05:46:29.584267 containerd[1459]: 2025-05-08 05:46:29.535 [INFO][7029] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" May 8 05:46:29.584267 containerd[1459]: 2025-05-08 05:46:29.535 [INFO][7029] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" iface="eth0" netns="" May 8 05:46:29.584267 containerd[1459]: 2025-05-08 05:46:29.535 [INFO][7029] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" May 8 05:46:29.584267 containerd[1459]: 2025-05-08 05:46:29.536 [INFO][7029] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" May 8 05:46:29.584267 containerd[1459]: 2025-05-08 05:46:29.566 [INFO][7036] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" HandleID="k8s-pod-network.8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0" May 8 05:46:29.584267 containerd[1459]: 2025-05-08 05:46:29.567 [INFO][7036] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:46:29.584267 containerd[1459]: 2025-05-08 05:46:29.567 [INFO][7036] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:46:29.584267 containerd[1459]: 2025-05-08 05:46:29.579 [WARNING][7036] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" HandleID="k8s-pod-network.8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0" May 8 05:46:29.584267 containerd[1459]: 2025-05-08 05:46:29.579 [INFO][7036] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" HandleID="k8s-pod-network.8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--kube--controllers--787966c4fb--2244q-eth0" May 8 05:46:29.584267 containerd[1459]: 2025-05-08 05:46:29.581 [INFO][7036] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:46:29.584267 containerd[1459]: 2025-05-08 05:46:29.582 [INFO][7029] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718" May 8 05:46:29.586057 containerd[1459]: time="2025-05-08T05:46:29.584583472Z" level=info msg="TearDown network for sandbox \"8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718\" successfully" May 8 05:46:29.590916 containerd[1459]: time="2025-05-08T05:46:29.590875028Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 05:46:29.591218 containerd[1459]: time="2025-05-08T05:46:29.591137585Z" level=info msg="RemovePodSandbox \"8df678f5dc9b91b2e19d15e580aea1bd0decdec2f1b6f708dd00a4f1a1e7f718\" returns successfully" May 8 05:46:29.592300 containerd[1459]: time="2025-05-08T05:46:29.591929718Z" level=info msg="StopPodSandbox for \"e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94\"" May 8 05:46:29.708514 containerd[1459]: 2025-05-08 05:46:29.651 [WARNING][7054] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0" May 8 05:46:29.708514 containerd[1459]: 2025-05-08 05:46:29.652 [INFO][7054] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" May 8 05:46:29.708514 containerd[1459]: 2025-05-08 05:46:29.652 [INFO][7054] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" iface="eth0" netns="" May 8 05:46:29.708514 containerd[1459]: 2025-05-08 05:46:29.652 [INFO][7054] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" May 8 05:46:29.708514 containerd[1459]: 2025-05-08 05:46:29.652 [INFO][7054] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" May 8 05:46:29.708514 containerd[1459]: 2025-05-08 05:46:29.692 [INFO][7062] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" HandleID="k8s-pod-network.e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0" May 8 05:46:29.708514 containerd[1459]: 2025-05-08 05:46:29.693 [INFO][7062] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:46:29.708514 containerd[1459]: 2025-05-08 05:46:29.693 [INFO][7062] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:46:29.708514 containerd[1459]: 2025-05-08 05:46:29.703 [WARNING][7062] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" HandleID="k8s-pod-network.e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0" May 8 05:46:29.708514 containerd[1459]: 2025-05-08 05:46:29.703 [INFO][7062] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" HandleID="k8s-pod-network.e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0" May 8 05:46:29.708514 containerd[1459]: 2025-05-08 05:46:29.705 [INFO][7062] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:46:29.708514 containerd[1459]: 2025-05-08 05:46:29.706 [INFO][7054] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" May 8 05:46:29.709623 containerd[1459]: time="2025-05-08T05:46:29.709329040Z" level=info msg="TearDown network for sandbox \"e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94\" successfully" May 8 05:46:29.709623 containerd[1459]: time="2025-05-08T05:46:29.709369006Z" level=info msg="StopPodSandbox for \"e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94\" returns successfully" May 8 05:46:29.710500 containerd[1459]: time="2025-05-08T05:46:29.710195714Z" level=info msg="RemovePodSandbox for \"e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94\"" May 8 05:46:29.710500 containerd[1459]: time="2025-05-08T05:46:29.710227874Z" level=info msg="Forcibly stopping sandbox \"e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94\"" May 8 05:46:29.807502 containerd[1459]: 2025-05-08 05:46:29.761 [WARNING][7080] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" WorkloadEndpoint="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0" May 8 05:46:29.807502 containerd[1459]: 2025-05-08 05:46:29.761 [INFO][7080] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" May 8 05:46:29.807502 containerd[1459]: 2025-05-08 05:46:29.761 [INFO][7080] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" iface="eth0" netns="" May 8 05:46:29.807502 containerd[1459]: 2025-05-08 05:46:29.761 [INFO][7080] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" May 8 05:46:29.807502 containerd[1459]: 2025-05-08 05:46:29.761 [INFO][7080] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" May 8 05:46:29.807502 containerd[1459]: 2025-05-08 05:46:29.794 [INFO][7087] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" HandleID="k8s-pod-network.e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0" May 8 05:46:29.807502 containerd[1459]: 2025-05-08 05:46:29.795 [INFO][7087] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:46:29.807502 containerd[1459]: 2025-05-08 05:46:29.795 [INFO][7087] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:46:29.807502 containerd[1459]: 2025-05-08 05:46:29.802 [WARNING][7087] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" HandleID="k8s-pod-network.e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0" May 8 05:46:29.807502 containerd[1459]: 2025-05-08 05:46:29.802 [INFO][7087] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" HandleID="k8s-pod-network.e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" Workload="ci--4081--3--3--n--fbb7d486d2.novalocal-k8s-calico--apiserver--67fd4c9f8d--ncnc7-eth0" May 8 05:46:29.807502 containerd[1459]: 2025-05-08 05:46:29.804 [INFO][7087] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:46:29.807502 containerd[1459]: 2025-05-08 05:46:29.805 [INFO][7080] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94" May 8 05:46:29.807502 containerd[1459]: time="2025-05-08T05:46:29.806967033Z" level=info msg="TearDown network for sandbox \"e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94\" successfully" May 8 05:46:29.812566 containerd[1459]: time="2025-05-08T05:46:29.812288497Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 05:46:29.812566 containerd[1459]: time="2025-05-08T05:46:29.812383208Z" level=info msg="RemovePodSandbox \"e5b47739d4a4bb4c12e16cd5d2da8dd1e5a212580273a67b23254467f35bcd94\" returns successfully" May 8 05:46:29.813302 containerd[1459]: time="2025-05-08T05:46:29.813240573Z" level=info msg="StopPodSandbox for \"5f401385e50db3ba161d26f2050e1360942d1b4e5135418d02d17ac3adadaf5f\"" May 8 05:46:29.813422 containerd[1459]: time="2025-05-08T05:46:29.813399224Z" level=info msg="TearDown network for sandbox \"5f401385e50db3ba161d26f2050e1360942d1b4e5135418d02d17ac3adadaf5f\" successfully" May 8 05:46:29.813758 containerd[1459]: time="2025-05-08T05:46:29.813419072Z" level=info msg="StopPodSandbox for \"5f401385e50db3ba161d26f2050e1360942d1b4e5135418d02d17ac3adadaf5f\" returns successfully" May 8 05:46:29.814522 containerd[1459]: time="2025-05-08T05:46:29.813989795Z" level=info msg="RemovePodSandbox for \"5f401385e50db3ba161d26f2050e1360942d1b4e5135418d02d17ac3adadaf5f\"" May 8 05:46:29.814522 containerd[1459]: time="2025-05-08T05:46:29.814037305Z" level=info msg="Forcibly stopping sandbox \"5f401385e50db3ba161d26f2050e1360942d1b4e5135418d02d17ac3adadaf5f\"" May 8 05:46:29.814522 containerd[1459]: time="2025-05-08T05:46:29.814118068Z" level=info msg="TearDown network for sandbox \"5f401385e50db3ba161d26f2050e1360942d1b4e5135418d02d17ac3adadaf5f\" successfully" May 8 05:46:29.819223 containerd[1459]: time="2025-05-08T05:46:29.819164431Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f401385e50db3ba161d26f2050e1360942d1b4e5135418d02d17ac3adadaf5f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 8 05:46:29.819422 containerd[1459]: time="2025-05-08T05:46:29.819398485Z" level=info msg="RemovePodSandbox \"5f401385e50db3ba161d26f2050e1360942d1b4e5135418d02d17ac3adadaf5f\" returns successfully" May 8 05:47:31.230179 update_engine[1451]: I20250508 05:47:31.229817 1451 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 8 05:47:31.233688 update_engine[1451]: I20250508 05:47:31.231757 1451 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 8 05:47:31.233688 update_engine[1451]: I20250508 05:47:31.232968 1451 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 8 05:47:31.238489 update_engine[1451]: I20250508 05:47:31.236650 1451 omaha_request_params.cc:62] Current group set to lts May 8 05:47:31.238489 update_engine[1451]: I20250508 05:47:31.237349 1451 update_attempter.cc:499] Already updated boot flags. Skipping. May 8 05:47:31.238489 update_engine[1451]: I20250508 05:47:31.237394 1451 update_attempter.cc:643] Scheduling an action processor start. May 8 05:47:31.240805 update_engine[1451]: I20250508 05:47:31.239823 1451 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 8 05:47:31.240805 update_engine[1451]: I20250508 05:47:31.240064 1451 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 8 05:47:31.240805 update_engine[1451]: I20250508 05:47:31.240242 1451 omaha_request_action.cc:271] Posting an Omaha request to disabled May 8 05:47:31.240805 update_engine[1451]: I20250508 05:47:31.240269 1451 omaha_request_action.cc:272] Request: May 8 05:47:31.240805 update_engine[1451]: May 8 05:47:31.240805 update_engine[1451]: May 8 05:47:31.240805 update_engine[1451]: May 8 05:47:31.240805 update_engine[1451]: May 8 05:47:31.240805 update_engine[1451]: May 8 05:47:31.240805 update_engine[1451]: May 8 05:47:31.240805 update_engine[1451]: May 8 05:47:31.240805 update_engine[1451]: May 8 05:47:31.240805 update_engine[1451]: I20250508 05:47:31.240298 1451 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 8 05:47:31.249641 locksmithd[1474]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 8 05:47:31.252480 update_engine[1451]: I20250508 05:47:31.252360 1451 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 8 05:47:31.253921 update_engine[1451]: I20250508 05:47:31.253798 1451 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 8 05:47:31.267125 update_engine[1451]: E20250508 05:47:31.266992 1451 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 8 05:47:31.267331 update_engine[1451]: I20250508 05:47:31.267221 1451 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 8 05:47:41.219817 update_engine[1451]: I20250508 05:47:41.219597 1451 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 8 05:47:41.221020 update_engine[1451]: I20250508 05:47:41.220204 1451 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 8 05:47:41.221020 update_engine[1451]: I20250508 05:47:41.220674 1451 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
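The update_engine failures here are self-inflicted: the Omaha endpoint is configured as the literal string "disabled", so every fetch dies in DNS resolution ("Could not resolve host: disabled") and the fetcher retries on a roughly 10 s cadence before giving up. A sketch of that fetch/retry shape, with the retry cap taken from the log and the demo delay shortened:

```python
# Fixed-interval retry around an HTTP fetch, capped at N retries.
import time
import urllib.request
from urllib.error import URLError

def fetch_with_retries(url: str, retries: int = 3, delay: float = 10.0) -> bytes:
    for attempt in range(1, retries + 2):            # initial try + N retries
        try:
            return urllib.request.urlopen(url, timeout=5).read()
        except URLError as e:
            if attempt > retries:
                raise                                 # "Transfer resulted in an error"
            print(f"No HTTP response, retry {attempt}: {e.reason}")
            time.sleep(delay)

try:
    fetch_with_retries("http://disabled/", retries=3, delay=0.1)
except URLError:
    print("Omaha request network transfer failed.")
```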
May 8 05:47:41.231499 update_engine[1451]: E20250508 05:47:41.231312 1451 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 8 05:47:41.231850 update_engine[1451]: I20250508 05:47:41.231530 1451 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 8 05:47:51.225133 update_engine[1451]: I20250508 05:47:51.224325 1451 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 8 05:47:51.229095 update_engine[1451]: I20250508 05:47:51.226022 1451 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 8 05:47:51.229095 update_engine[1451]: I20250508 05:47:51.227211 1451 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 8 05:47:51.239053 update_engine[1451]: E20250508 05:47:51.238926 1451 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 8 05:47:51.239260 update_engine[1451]: I20250508 05:47:51.239128 1451 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 8 05:48:01.220783 update_engine[1451]: I20250508 05:48:01.220366 1451 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 8 05:48:01.221889 update_engine[1451]: I20250508 05:48:01.221077 1451 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 8 05:48:01.221889 update_engine[1451]: I20250508 05:48:01.221732 1451 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 8 05:48:01.232045 update_engine[1451]: E20250508 05:48:01.231952 1451 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 8 05:48:01.232391 update_engine[1451]: I20250508 05:48:01.232074 1451 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 8 05:48:01.232391 update_engine[1451]: I20250508 05:48:01.232129 1451 omaha_request_action.cc:617] Omaha request response: May 8 05:48:01.232923 update_engine[1451]: E20250508 05:48:01.232686 1451 omaha_request_action.cc:636] Omaha request network transfer failed. May 8 05:48:01.233316 update_engine[1451]: I20250508 05:48:01.233237 1451 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 8 05:48:01.233316 update_engine[1451]: I20250508 05:48:01.233274 1451 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 8 05:48:01.233316 update_engine[1451]: I20250508 05:48:01.233289 1451 update_attempter.cc:306] Processing Done. May 8 05:48:01.233833 update_engine[1451]: E20250508 05:48:01.233379 1451 update_attempter.cc:619] Update failed. May 8 05:48:01.233833 update_engine[1451]: I20250508 05:48:01.233412 1451 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 8 05:48:01.233833 update_engine[1451]: I20250508 05:48:01.233425 1451 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 8 05:48:01.233833 update_engine[1451]: I20250508 05:48:01.233563 1451 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
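The bookkeeping that follows folds the transport failure (internal code 2000) into kActionCodeOmahaErrorInHTTPResponse (37) before reporting, then schedules the next check ("Next update check in 40m59s"). A sketch of both steps; only the two error codes come from the log, while the 45-minute base interval and ±10 % jitter are assumptions for illustration:

```python
# Error-code folding plus a jittered next-check schedule (assumed parameters).
import random

K_OMAHA_ERROR_IN_HTTP_RESPONSE = 37  # from the log; 2000 is the transport error

def classify(code: int) -> int:
    # Network-level failures are reported to Omaha as HTTP-response errors.
    return K_OMAHA_ERROR_IN_HTTP_RESPONSE if code == 2000 else code

def next_check_seconds(base: int = 45 * 60, jitter: float = 0.1,
                       rng: random.Random = random.Random(0)) -> int:
    return int(base + rng.uniform(-jitter, jitter) * base)  # spread the fleet out

print(classify(2000))                      # 37
m, s = divmod(next_check_seconds(), 60)
print(f"Next update check in {m}m{s}s")    # lands in the ~40-50 minute band
```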
May 8 05:48:01.234198 update_engine[1451]: I20250508 05:48:01.234169 1451 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 8 05:48:01.236577 update_engine[1451]: I20250508 05:48:01.234301 1451 omaha_request_action.cc:271] Posting an Omaha request to disabled May 8 05:48:01.236577 update_engine[1451]: I20250508 05:48:01.234329 1451 omaha_request_action.cc:272] Request: May 8 05:48:01.236577 update_engine[1451]: May 8 05:48:01.236577 update_engine[1451]: May 8 05:48:01.236577 update_engine[1451]: May 8 05:48:01.236577 update_engine[1451]: May 8 05:48:01.236577 update_engine[1451]: May 8 05:48:01.236577 update_engine[1451]: May 8 05:48:01.236577 update_engine[1451]: I20250508 05:48:01.234344 1451 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 8 05:48:01.248785 update_engine[1451]: I20250508 05:48:01.247898 1451 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 8 05:48:01.248785 update_engine[1451]: I20250508 05:48:01.248391 1451 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 8 05:48:01.251633 locksmithd[1474]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 8 05:48:01.259203 update_engine[1451]: E20250508 05:48:01.259080 1451 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 8 05:48:01.259203 update_engine[1451]: I20250508 05:48:01.259199 1451 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 8 05:48:01.260835 update_engine[1451]: I20250508 05:48:01.259223 1451 omaha_request_action.cc:617] Omaha request response: May 8 05:48:01.260835 update_engine[1451]: I20250508 05:48:01.259241 1451 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 8 05:48:01.260835 update_engine[1451]: I20250508 05:48:01.259254 1451 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 8 05:48:01.260835 update_engine[1451]: I20250508 05:48:01.259267 1451 update_attempter.cc:306] Processing Done. May 8 05:48:01.260835 update_engine[1451]: I20250508 05:48:01.259281 1451 update_attempter.cc:310] Error event sent. May 8 05:48:01.260835 update_engine[1451]: I20250508 05:48:01.259318 1451 update_check_scheduler.cc:74] Next update check in 40m59s May 8 05:48:01.261510 locksmithd[1474]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 8 05:48:15.193540 systemd[1]: run-containerd-runc-k8s.io-fef472dc1815fd95b46ef8ac0151819cf169c87a0477a8cb234b7961385fdd64-runc.CaTGlj.mount: Deactivated successfully. May 8 05:48:18.632605 systemd[1]: Started sshd@9-172.24.4.135:22-172.24.4.1:50762.service - OpenSSH per-connection server daemon (172.24.4.1:50762). May 8 05:48:19.699486 sshd[7339]: Accepted publickey for core from 172.24.4.1 port 50762 ssh2: RSA SHA256:JUgleLtHX7Q1592ztcFI0huk0nUf+CvuYz/+IShjAoQ May 8 05:48:19.705415 sshd[7339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:48:19.716332 systemd-logind[1450]: New session 12 of user core. May 8 05:48:19.721811 systemd[1]: Started session-12.scope - Session 12 of User core. May 8 05:48:20.581905 sshd[7339]: pam_unix(sshd:session): session closed for user core May 8 05:48:20.588315 systemd[1]: sshd@9-172.24.4.135:22-172.24.4.1:50762.service: Deactivated successfully. May 8 05:48:20.594762 systemd[1]: session-12.scope: Deactivated successfully. 
May 8 05:48:20.596959 systemd-logind[1450]: Session 12 logged out. Waiting for processes to exit. May 8 05:48:20.599010 systemd-logind[1450]: Removed session 12. May 8 05:48:25.615494 systemd[1]: Started sshd@10-172.24.4.135:22-172.24.4.1:52582.service - OpenSSH per-connection server daemon (172.24.4.1:52582). May 8 05:48:26.900517 sshd[7363]: Accepted publickey for core from 172.24.4.1 port 52582 ssh2: RSA SHA256:JUgleLtHX7Q1592ztcFI0huk0nUf+CvuYz/+IShjAoQ May 8 05:48:26.906277 sshd[7363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:48:26.943546 systemd-logind[1450]: New session 13 of user core. May 8 05:48:26.950690 systemd[1]: Started session-13.scope - Session 13 of User core. May 8 05:48:27.637926 sshd[7363]: pam_unix(sshd:session): session closed for user core May 8 05:48:27.653697 systemd[1]: sshd@10-172.24.4.135:22-172.24.4.1:52582.service: Deactivated successfully. May 8 05:48:27.661238 systemd[1]: session-13.scope: Deactivated successfully. May 8 05:48:27.666268 systemd-logind[1450]: Session 13 logged out. Waiting for processes to exit. May 8 05:48:27.669320 systemd-logind[1450]: Removed session 13. May 8 05:48:32.664037 systemd[1]: Started sshd@11-172.24.4.135:22-172.24.4.1:52592.service - OpenSSH per-connection server daemon (172.24.4.1:52592). May 8 05:48:34.032299 sshd[7387]: Accepted publickey for core from 172.24.4.1 port 52592 ssh2: RSA SHA256:JUgleLtHX7Q1592ztcFI0huk0nUf+CvuYz/+IShjAoQ May 8 05:48:34.037083 sshd[7387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:48:34.053965 systemd-logind[1450]: New session 14 of user core. May 8 05:48:34.061786 systemd[1]: Started session-14.scope - Session 14 of User core. May 8 05:48:34.821887 sshd[7387]: pam_unix(sshd:session): session closed for user core May 8 05:48:34.833674 systemd[1]: sshd@11-172.24.4.135:22-172.24.4.1:52592.service: Deactivated successfully. May 8 05:48:34.838250 systemd[1]: session-14.scope: Deactivated successfully. May 8 05:48:34.843655 systemd-logind[1450]: Session 14 logged out. Waiting for processes to exit. May 8 05:48:34.852580 systemd[1]: Started sshd@12-172.24.4.135:22-172.24.4.1:46614.service - OpenSSH per-connection server daemon (172.24.4.1:46614). May 8 05:48:34.859611 systemd-logind[1450]: Removed session 14. May 8 05:48:36.087776 sshd[7403]: Accepted publickey for core from 172.24.4.1 port 46614 ssh2: RSA SHA256:JUgleLtHX7Q1592ztcFI0huk0nUf+CvuYz/+IShjAoQ May 8 05:48:36.094193 sshd[7403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:48:36.116634 systemd-logind[1450]: New session 15 of user core. May 8 05:48:36.124838 systemd[1]: Started session-15.scope - Session 15 of User core. May 8 05:48:36.885930 sshd[7403]: pam_unix(sshd:session): session closed for user core May 8 05:48:36.894280 systemd[1]: sshd@12-172.24.4.135:22-172.24.4.1:46614.service: Deactivated successfully. May 8 05:48:36.896900 systemd[1]: session-15.scope: Deactivated successfully. May 8 05:48:36.898618 systemd-logind[1450]: Session 15 logged out. Waiting for processes to exit. May 8 05:48:36.905769 systemd[1]: Started sshd@13-172.24.4.135:22-172.24.4.1:46624.service - OpenSSH per-connection server daemon (172.24.4.1:46624). May 8 05:48:36.910182 systemd-logind[1450]: Removed session 15. 
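From here to the end the journal settles into one repeating SSH lifecycle: Accepted publickey → pam_unix session opened → logind "New session N" → session closed → "Removed session N". A small sketch that pairs those open/close entries from journal text and reports each session's wall-clock span (the two sample lines reuse session 12's stamps from above):

```python
# Pair "New session N" with "Removed session N" and compute the gap.
import re
from datetime import datetime

LINE = re.compile(r"May\s+8 (\d\d:\d\d:\d\d\.\d+).*?(New|Removed) session (\d+)")

def session_spans(journal: str):
    opened = {}
    for m in LINE.finditer(journal):
        ts = datetime.strptime(m.group(1), "%H:%M:%S.%f")
        if m.group(2) == "New":
            opened[m.group(3)] = ts
        elif m.group(3) in opened:
            yield m.group(3), (ts - opened.pop(m.group(3))).total_seconds()

journal = """May 8 05:48:19.716332 systemd-logind[1450]: New session 12 of user core.
May 8 05:48:20.599010 systemd-logind[1450]: Removed session 12."""
for sid, secs in session_spans(journal):
    print(f"session {sid}: {secs:.1f}s")   # session 12: 0.9s
```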
May 8 05:48:38.131618 sshd[7413]: Accepted publickey for core from 172.24.4.1 port 46624 ssh2: RSA SHA256:JUgleLtHX7Q1592ztcFI0huk0nUf+CvuYz/+IShjAoQ May 8 05:48:38.134568 sshd[7413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:48:38.145561 systemd-logind[1450]: New session 16 of user core. May 8 05:48:38.152648 systemd[1]: Started session-16.scope - Session 16 of User core. May 8 05:48:38.873605 sshd[7413]: pam_unix(sshd:session): session closed for user core May 8 05:48:38.883018 systemd[1]: sshd@13-172.24.4.135:22-172.24.4.1:46624.service: Deactivated successfully. May 8 05:48:38.889514 systemd[1]: session-16.scope: Deactivated successfully. May 8 05:48:38.892049 systemd-logind[1450]: Session 16 logged out. Waiting for processes to exit. May 8 05:48:38.894358 systemd-logind[1450]: Removed session 16. May 8 05:48:43.911171 systemd[1]: Started sshd@14-172.24.4.135:22-172.24.4.1:36370.service - OpenSSH per-connection server daemon (172.24.4.1:36370). May 8 05:48:45.196518 systemd[1]: run-containerd-runc-k8s.io-fef472dc1815fd95b46ef8ac0151819cf169c87a0477a8cb234b7961385fdd64-runc.XGpZrD.mount: Deactivated successfully. May 8 05:48:45.273918 sshd[7448]: Accepted publickey for core from 172.24.4.1 port 36370 ssh2: RSA SHA256:JUgleLtHX7Q1592ztcFI0huk0nUf+CvuYz/+IShjAoQ May 8 05:48:45.275458 sshd[7448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:48:45.284102 systemd-logind[1450]: New session 17 of user core. May 8 05:48:45.290172 systemd[1]: Started session-17.scope - Session 17 of User core. May 8 05:48:45.914269 sshd[7448]: pam_unix(sshd:session): session closed for user core May 8 05:48:45.926189 systemd[1]: sshd@14-172.24.4.135:22-172.24.4.1:36370.service: Deactivated successfully. May 8 05:48:45.932116 systemd[1]: session-17.scope: Deactivated successfully. May 8 05:48:45.936737 systemd-logind[1450]: Session 17 logged out. Waiting for processes to exit. May 8 05:48:45.940695 systemd-logind[1450]: Removed session 17. May 8 05:48:50.931308 systemd[1]: Started sshd@15-172.24.4.135:22-172.24.4.1:36384.service - OpenSSH per-connection server daemon (172.24.4.1:36384). May 8 05:48:52.222646 sshd[7506]: Accepted publickey for core from 172.24.4.1 port 36384 ssh2: RSA SHA256:JUgleLtHX7Q1592ztcFI0huk0nUf+CvuYz/+IShjAoQ May 8 05:48:52.225038 sshd[7506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:48:52.236586 systemd-logind[1450]: New session 18 of user core. May 8 05:48:52.250799 systemd[1]: Started session-18.scope - Session 18 of User core. May 8 05:48:53.120821 sshd[7506]: pam_unix(sshd:session): session closed for user core May 8 05:48:53.125421 systemd[1]: sshd@15-172.24.4.135:22-172.24.4.1:36384.service: Deactivated successfully. May 8 05:48:53.130176 systemd[1]: session-18.scope: Deactivated successfully. May 8 05:48:53.132034 systemd-logind[1450]: Session 18 logged out. Waiting for processes to exit. May 8 05:48:53.133609 systemd-logind[1450]: Removed session 18. May 8 05:48:58.151295 systemd[1]: Started sshd@16-172.24.4.135:22-172.24.4.1:42314.service - OpenSSH per-connection server daemon (172.24.4.1:42314). May 8 05:48:59.418067 sshd[7530]: Accepted publickey for core from 172.24.4.1 port 42314 ssh2: RSA SHA256:JUgleLtHX7Q1592ztcFI0huk0nUf+CvuYz/+IShjAoQ May 8 05:48:59.423755 sshd[7530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:48:59.441980 systemd-logind[1450]: New session 19 of user core. 
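Each connection also gets its own transient unit, and the instance name packs an accept counter plus both endpoints: sshd@13-172.24.4.135:22-172.24.4.1:46624.service is listener 172.24.4.135:22 talking to peer 172.24.4.1:46624. A sketch that unpacks the convention as it appears in this log; the field layout is inferred from these names, not from sshd documentation:

```python
# Parse the per-connection unit name into counter, listener, and peer.
import re

UNIT = re.compile(
    r"sshd@(?P<seq>\d+)-(?P<lhost>[\d.]+):(?P<lport>\d+)-"
    r"(?P<rhost>[\d.]+):(?P<rport>\d+)\.service"
)

m = UNIT.fullmatch("sshd@13-172.24.4.135:22-172.24.4.1:46624.service")
print(m.groupdict())
# {'seq': '13', 'lhost': '172.24.4.135', 'lport': '22',
#  'rhost': '172.24.4.1', 'rport': '46624'}
```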
May 8 05:48:59.454880 systemd[1]: Started session-19.scope - Session 19 of User core. May 8 05:49:00.168226 sshd[7530]: pam_unix(sshd:session): session closed for user core May 8 05:49:00.178057 systemd[1]: sshd@16-172.24.4.135:22-172.24.4.1:42314.service: Deactivated successfully. May 8 05:49:00.190221 systemd[1]: session-19.scope: Deactivated successfully. May 8 05:49:00.193571 systemd-logind[1450]: Session 19 logged out. Waiting for processes to exit. May 8 05:49:00.196397 systemd-logind[1450]: Removed session 19. May 8 05:49:05.204087 systemd[1]: Started sshd@17-172.24.4.135:22-172.24.4.1:52402.service - OpenSSH per-connection server daemon (172.24.4.1:52402). May 8 05:49:06.398253 sshd[7550]: Accepted publickey for core from 172.24.4.1 port 52402 ssh2: RSA SHA256:JUgleLtHX7Q1592ztcFI0huk0nUf+CvuYz/+IShjAoQ May 8 05:49:06.402875 sshd[7550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:49:06.418381 systemd-logind[1450]: New session 20 of user core. May 8 05:49:06.429850 systemd[1]: Started session-20.scope - Session 20 of User core. May 8 05:49:07.371767 sshd[7550]: pam_unix(sshd:session): session closed for user core May 8 05:49:07.386487 systemd[1]: sshd@17-172.24.4.135:22-172.24.4.1:52402.service: Deactivated successfully. May 8 05:49:07.393171 systemd[1]: session-20.scope: Deactivated successfully. May 8 05:49:07.395999 systemd-logind[1450]: Session 20 logged out. Waiting for processes to exit. May 8 05:49:07.410680 systemd[1]: Started sshd@18-172.24.4.135:22-172.24.4.1:52408.service - OpenSSH per-connection server daemon (172.24.4.1:52408). May 8 05:49:07.414969 systemd-logind[1450]: Removed session 20. May 8 05:49:08.644563 sshd[7562]: Accepted publickey for core from 172.24.4.1 port 52408 ssh2: RSA SHA256:JUgleLtHX7Q1592ztcFI0huk0nUf+CvuYz/+IShjAoQ May 8 05:49:08.649158 sshd[7562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:49:08.670933 systemd-logind[1450]: New session 21 of user core. May 8 05:49:08.678401 systemd[1]: Started session-21.scope - Session 21 of User core. May 8 05:49:09.646584 sshd[7562]: pam_unix(sshd:session): session closed for user core May 8 05:49:09.660107 systemd[1]: sshd@18-172.24.4.135:22-172.24.4.1:52408.service: Deactivated successfully. May 8 05:49:09.668137 systemd[1]: session-21.scope: Deactivated successfully. May 8 05:49:09.674983 systemd-logind[1450]: Session 21 logged out. Waiting for processes to exit. May 8 05:49:09.693971 systemd[1]: Started sshd@19-172.24.4.135:22-172.24.4.1:52412.service - OpenSSH per-connection server daemon (172.24.4.1:52412). May 8 05:49:09.699371 systemd-logind[1450]: Removed session 21. May 8 05:49:10.852722 sshd[7573]: Accepted publickey for core from 172.24.4.1 port 52412 ssh2: RSA SHA256:JUgleLtHX7Q1592ztcFI0huk0nUf+CvuYz/+IShjAoQ May 8 05:49:10.856823 sshd[7573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:49:10.889711 systemd-logind[1450]: New session 22 of user core. May 8 05:49:10.901275 systemd[1]: Started session-22.scope - Session 22 of User core. May 8 05:49:14.815751 sshd[7573]: pam_unix(sshd:session): session closed for user core May 8 05:49:14.841803 systemd[1]: sshd@19-172.24.4.135:22-172.24.4.1:52412.service: Deactivated successfully. May 8 05:49:14.853948 systemd[1]: session-22.scope: Deactivated successfully. May 8 05:49:14.855159 systemd[1]: session-22.scope: Consumed 1.095s CPU time. May 8 05:49:14.869133 systemd-logind[1450]: Session 22 logged out. 
Waiting for processes to exit. May 8 05:49:14.883089 systemd[1]: Started sshd@20-172.24.4.135:22-172.24.4.1:53888.service - OpenSSH per-connection server daemon (172.24.4.1:53888). May 8 05:49:14.887726 systemd-logind[1450]: Removed session 22. May 8 05:49:16.027358 sshd[7613]: Accepted publickey for core from 172.24.4.1 port 53888 ssh2: RSA SHA256:JUgleLtHX7Q1592ztcFI0huk0nUf+CvuYz/+IShjAoQ May 8 05:49:16.033036 sshd[7613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:49:16.047175 systemd-logind[1450]: New session 23 of user core. May 8 05:49:16.060983 systemd[1]: Started session-23.scope - Session 23 of User core. May 8 05:49:17.017570 sshd[7613]: pam_unix(sshd:session): session closed for user core May 8 05:49:17.030003 systemd[1]: sshd@20-172.24.4.135:22-172.24.4.1:53888.service: Deactivated successfully. May 8 05:49:17.035136 systemd[1]: session-23.scope: Deactivated successfully. May 8 05:49:17.038396 systemd-logind[1450]: Session 23 logged out. Waiting for processes to exit. May 8 05:49:17.048226 systemd[1]: Started sshd@21-172.24.4.135:22-172.24.4.1:53900.service - OpenSSH per-connection server daemon (172.24.4.1:53900). May 8 05:49:17.052629 systemd-logind[1450]: Removed session 23. May 8 05:49:18.313698 sshd[7642]: Accepted publickey for core from 172.24.4.1 port 53900 ssh2: RSA SHA256:JUgleLtHX7Q1592ztcFI0huk0nUf+CvuYz/+IShjAoQ May 8 05:49:18.316042 sshd[7642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:49:18.324164 systemd-logind[1450]: New session 24 of user core. May 8 05:49:18.329678 systemd[1]: Started session-24.scope - Session 24 of User core. May 8 05:49:19.121920 sshd[7642]: pam_unix(sshd:session): session closed for user core May 8 05:49:19.129851 systemd[1]: sshd@21-172.24.4.135:22-172.24.4.1:53900.service: Deactivated successfully. May 8 05:49:19.135776 systemd[1]: session-24.scope: Deactivated successfully. May 8 05:49:19.139125 systemd-logind[1450]: Session 24 logged out. Waiting for processes to exit. May 8 05:49:19.146954 systemd-logind[1450]: Removed session 24. May 8 05:49:24.154665 systemd[1]: Started sshd@22-172.24.4.135:22-172.24.4.1:54982.service - OpenSSH per-connection server daemon (172.24.4.1:54982). May 8 05:49:25.375524 sshd[7658]: Accepted publickey for core from 172.24.4.1 port 54982 ssh2: RSA SHA256:JUgleLtHX7Q1592ztcFI0huk0nUf+CvuYz/+IShjAoQ May 8 05:49:25.382772 sshd[7658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:49:25.405982 systemd-logind[1450]: New session 25 of user core. May 8 05:49:25.417963 systemd[1]: Started session-25.scope - Session 25 of User core. May 8 05:49:26.118918 sshd[7658]: pam_unix(sshd:session): session closed for user core May 8 05:49:26.127673 systemd[1]: sshd@22-172.24.4.135:22-172.24.4.1:54982.service: Deactivated successfully. May 8 05:49:26.136416 systemd[1]: session-25.scope: Deactivated successfully. May 8 05:49:26.141635 systemd-logind[1450]: Session 25 logged out. Waiting for processes to exit. May 8 05:49:26.144340 systemd-logind[1450]: Removed session 25. May 8 05:49:31.142110 systemd[1]: Started sshd@23-172.24.4.135:22-172.24.4.1:54984.service - OpenSSH per-connection server daemon (172.24.4.1:54984). 
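Session 22 above is the only one with a "Consumed 1.095s CPU time" line: systemd reports a unit's accumulated CPU usage from its cgroup at teardown, and that session apparently did enough work (over 1 s of CPU in a roughly four-second session, versus near-idle neighbours) to be worth reporting. A sketch that reads the same counter for a session scope; the cgroup-v2 layout and the slice path below are assumptions:

```python
# Read a scope's accumulated CPU time from its cgroup (cgroup v2 layout).
from pathlib import Path

def scope_cpu_seconds(unit_path: str) -> float:
    stat = Path("/sys/fs/cgroup") / unit_path / "cpu.stat"
    for line in stat.read_text().splitlines():
        key, _, value = line.partition(" ")
        if key == "usage_usec":
            return int(value) / 1e6
    raise KeyError("usage_usec missing from cpu.stat")

# Example path for the session above (assumed slice layout):
print(scope_cpu_seconds("user.slice/user-500.slice/session-22.scope"))  # e.g. 1.095
```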
May 8 05:49:32.409537 sshd[7673]: Accepted publickey for core from 172.24.4.1 port 54984 ssh2: RSA SHA256:JUgleLtHX7Q1592ztcFI0huk0nUf+CvuYz/+IShjAoQ May 8 05:49:32.412315 sshd[7673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:49:32.424286 systemd-logind[1450]: New session 26 of user core. May 8 05:49:32.428830 systemd[1]: Started session-26.scope - Session 26 of User core. May 8 05:49:33.386093 sshd[7673]: pam_unix(sshd:session): session closed for user core May 8 05:49:33.395242 systemd[1]: sshd@23-172.24.4.135:22-172.24.4.1:54984.service: Deactivated successfully. May 8 05:49:33.402952 systemd[1]: session-26.scope: Deactivated successfully. May 8 05:49:33.405602 systemd-logind[1450]: Session 26 logged out. Waiting for processes to exit. May 8 05:49:33.409662 systemd-logind[1450]: Removed session 26. May 8 05:49:38.408155 systemd[1]: Started sshd@24-172.24.4.135:22-172.24.4.1:37674.service - OpenSSH per-connection server daemon (172.24.4.1:37674). May 8 05:49:39.585333 sshd[7688]: Accepted publickey for core from 172.24.4.1 port 37674 ssh2: RSA SHA256:JUgleLtHX7Q1592ztcFI0huk0nUf+CvuYz/+IShjAoQ May 8 05:49:39.587997 sshd[7688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:49:39.595730 systemd-logind[1450]: New session 27 of user core. May 8 05:49:39.603251 systemd[1]: Started session-27.scope - Session 27 of User core. May 8 05:49:40.386946 sshd[7688]: pam_unix(sshd:session): session closed for user core May 8 05:49:40.396067 systemd[1]: sshd@24-172.24.4.135:22-172.24.4.1:37674.service: Deactivated successfully. May 8 05:49:40.402055 systemd[1]: session-27.scope: Deactivated successfully. May 8 05:49:40.404292 systemd-logind[1450]: Session 27 logged out. Waiting for processes to exit. May 8 05:49:40.407059 systemd-logind[1450]: Removed session 27. May 8 05:49:45.414120 systemd[1]: Started sshd@25-172.24.4.135:22-172.24.4.1:37540.service - OpenSSH per-connection server daemon (172.24.4.1:37540). May 8 05:49:46.586555 sshd[7762]: Accepted publickey for core from 172.24.4.1 port 37540 ssh2: RSA SHA256:JUgleLtHX7Q1592ztcFI0huk0nUf+CvuYz/+IShjAoQ May 8 05:49:46.589433 sshd[7762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:49:46.603981 systemd-logind[1450]: New session 28 of user core. May 8 05:49:46.611829 systemd[1]: Started session-28.scope - Session 28 of User core. May 8 05:49:47.335713 sshd[7762]: pam_unix(sshd:session): session closed for user core May 8 05:49:47.341340 systemd[1]: sshd@25-172.24.4.135:22-172.24.4.1:37540.service: Deactivated successfully. May 8 05:49:47.346408 systemd[1]: session-28.scope: Deactivated successfully. May 8 05:49:47.350232 systemd-logind[1450]: Session 28 logged out. Waiting for processes to exit. May 8 05:49:47.353572 systemd-logind[1450]: Removed session 28. May 8 05:49:52.358079 systemd[1]: Started sshd@26-172.24.4.135:22-172.24.4.1:37546.service - OpenSSH per-connection server daemon (172.24.4.1:37546). May 8 05:49:53.560959 sshd[7775]: Accepted publickey for core from 172.24.4.1 port 37546 ssh2: RSA SHA256:JUgleLtHX7Q1592ztcFI0huk0nUf+CvuYz/+IShjAoQ May 8 05:49:53.564495 sshd[7775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:49:53.577251 systemd-logind[1450]: New session 29 of user core. May 8 05:49:53.587882 systemd[1]: Started session-29.scope - Session 29 of User core. 
May 8 05:49:54.313715 sshd[7775]: pam_unix(sshd:session): session closed for user core May 8 05:49:54.320858 systemd-logind[1450]: Session 29 logged out. Waiting for processes to exit. May 8 05:49:54.321758 systemd[1]: sshd@26-172.24.4.135:22-172.24.4.1:37546.service: Deactivated successfully. May 8 05:49:54.328180 systemd[1]: session-29.scope: Deactivated successfully. May 8 05:49:54.334107 systemd-logind[1450]: Removed session 29.