May 13 04:20:06.038026 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon May 12 22:46:21 -00 2025
May 13 04:20:06.038054 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592
May 13 04:20:06.038064 kernel: BIOS-provided physical RAM map:
May 13 04:20:06.038071 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 13 04:20:06.038078 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 13 04:20:06.038088 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 13 04:20:06.038111 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
May 13 04:20:06.038120 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
May 13 04:20:06.038127 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 13 04:20:06.038134 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 13 04:20:06.038142 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
May 13 04:20:06.038149 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 13 04:20:06.038157 kernel: NX (Execute Disable) protection: active
May 13 04:20:06.038164 kernel: APIC: Static calls initialized
May 13 04:20:06.038176 kernel: SMBIOS 3.0.0 present.
May 13 04:20:06.038183 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
May 13 04:20:06.038191 kernel: Hypervisor detected: KVM
May 13 04:20:06.038198 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 13 04:20:06.038206 kernel: kvm-clock: using sched offset of 3407724034 cycles
May 13 04:20:06.038216 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 13 04:20:06.038224 kernel: tsc: Detected 1996.249 MHz processor
May 13 04:20:06.038232 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 13 04:20:06.038240 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 13 04:20:06.038248 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
May 13 04:20:06.038256 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 13 04:20:06.038264 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 13 04:20:06.038272 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
May 13 04:20:06.038280 kernel: ACPI: Early table checksum verification disabled
May 13 04:20:06.038289 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
May 13 04:20:06.038297 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 04:20:06.038305 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 04:20:06.038313 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 04:20:06.038321 kernel: ACPI: FACS 0x00000000BFFE0000 000040
May 13 04:20:06.038328 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 04:20:06.038336 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 04:20:06.038344 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
May 13 04:20:06.038352 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
May 13 04:20:06.038361 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
May 13 04:20:06.038369 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
May 13 04:20:06.038377 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
May 13 04:20:06.038388 kernel: No NUMA configuration found
May 13 04:20:06.038396 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
May 13 04:20:06.038404 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
May 13 04:20:06.038414 kernel: Zone ranges:
May 13 04:20:06.038422 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 13 04:20:06.038430 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 13 04:20:06.038438 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
May 13 04:20:06.038446 kernel: Movable zone start for each node
May 13 04:20:06.038454 kernel: Early memory node ranges
May 13 04:20:06.038462 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 13 04:20:06.038470 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
May 13 04:20:06.038480 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
May 13 04:20:06.038489 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
May 13 04:20:06.038497 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 04:20:06.038505 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 13 04:20:06.038513 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
May 13 04:20:06.038521 kernel: ACPI: PM-Timer IO Port: 0x608
May 13 04:20:06.038529 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 13 04:20:06.038537 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 13 04:20:06.038546 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 13 04:20:06.038555 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 13 04:20:06.038564 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 13 04:20:06.038572 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 13 04:20:06.038580 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 13 04:20:06.038588 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 13 04:20:06.038596 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 13 04:20:06.038604 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 13 04:20:06.038612 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
May 13 04:20:06.038620 kernel: Booting paravirtualized kernel on KVM
May 13 04:20:06.038630 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 13 04:20:06.038638 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 13 04:20:06.038646 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
May 13 04:20:06.038655 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
May 13 04:20:06.038662 kernel: pcpu-alloc: [0] 0 1
May 13 04:20:06.038670 kernel: kvm-guest: PV spinlocks disabled, no host support
May 13 04:20:06.038680 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592
May 13 04:20:06.038689 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 04:20:06.038699 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 04:20:06.038707 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 04:20:06.038715 kernel: Fallback order for Node 0: 0
May 13 04:20:06.038724 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
May 13 04:20:06.038732 kernel: Policy zone: Normal
May 13 04:20:06.038740 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 04:20:06.038748 kernel: software IO TLB: area num 2.
May 13 04:20:06.038756 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 227308K reserved, 0K cma-reserved)
May 13 04:20:06.038765 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 13 04:20:06.038774 kernel: ftrace: allocating 37944 entries in 149 pages
May 13 04:20:06.038783 kernel: ftrace: allocated 149 pages with 4 groups
May 13 04:20:06.038791 kernel: Dynamic Preempt: voluntary
May 13 04:20:06.038799 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 04:20:06.038808 kernel: rcu: RCU event tracing is enabled.
May 13 04:20:06.038816 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 13 04:20:06.038824 kernel: Trampoline variant of Tasks RCU enabled.
May 13 04:20:06.038832 kernel: Rude variant of Tasks RCU enabled.
May 13 04:20:06.038840 kernel: Tracing variant of Tasks RCU enabled.
May 13 04:20:06.038850 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 04:20:06.038859 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 13 04:20:06.038867 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 13 04:20:06.038875 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 04:20:06.038883 kernel: Console: colour VGA+ 80x25
May 13 04:20:06.038891 kernel: printk: console [tty0] enabled
May 13 04:20:06.038899 kernel: printk: console [ttyS0] enabled
May 13 04:20:06.038907 kernel: ACPI: Core revision 20230628
May 13 04:20:06.038916 kernel: APIC: Switch to symmetric I/O mode setup
May 13 04:20:06.038924 kernel: x2apic enabled
May 13 04:20:06.038933 kernel: APIC: Switched APIC routing to: physical x2apic
May 13 04:20:06.038942 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 13 04:20:06.038950 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 13 04:20:06.038958 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
May 13 04:20:06.038966 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 13 04:20:06.038974 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 13 04:20:06.038983 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 13 04:20:06.038991 kernel: Spectre V2 : Mitigation: Retpolines
May 13 04:20:06.038999 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 13 04:20:06.039009 kernel: Speculative Store Bypass: Vulnerable
May 13 04:20:06.039017 kernel: x86/fpu: x87 FPU will use FXSAVE
May 13 04:20:06.039025 kernel: Freeing SMP alternatives memory: 32K
May 13 04:20:06.039034 kernel: pid_max: default: 32768 minimum: 301
May 13 04:20:06.039048 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 13 04:20:06.039058 kernel: landlock: Up and running.
May 13 04:20:06.039067 kernel: SELinux: Initializing.
May 13 04:20:06.039075 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 04:20:06.039084 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 04:20:06.041462 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
May 13 04:20:06.041481 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 13 04:20:06.041496 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 13 04:20:06.041505 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 13 04:20:06.041514 kernel: Performance Events: AMD PMU driver.
May 13 04:20:06.041522 kernel: ... version: 0
May 13 04:20:06.041531 kernel: ... bit width: 48
May 13 04:20:06.041541 kernel: ... generic registers: 4
May 13 04:20:06.041550 kernel: ... value mask: 0000ffffffffffff
May 13 04:20:06.041559 kernel: ... max period: 00007fffffffffff
May 13 04:20:06.041567 kernel: ... fixed-purpose events: 0
May 13 04:20:06.041576 kernel: ... event mask: 000000000000000f
May 13 04:20:06.041584 kernel: signal: max sigframe size: 1440
May 13 04:20:06.041593 kernel: rcu: Hierarchical SRCU implementation.
May 13 04:20:06.041602 kernel: rcu: Max phase no-delay instances is 400.
May 13 04:20:06.041611 kernel: smp: Bringing up secondary CPUs ...
May 13 04:20:06.041621 kernel: smpboot: x86: Booting SMP configuration:
May 13 04:20:06.041629 kernel: .... node #0, CPUs: #1
May 13 04:20:06.041638 kernel: smp: Brought up 1 node, 2 CPUs
May 13 04:20:06.041646 kernel: smpboot: Max logical packages: 2
May 13 04:20:06.041655 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
May 13 04:20:06.041664 kernel: devtmpfs: initialized
May 13 04:20:06.041672 kernel: x86/mm: Memory block size: 128MB
May 13 04:20:06.041681 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 04:20:06.041689 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 13 04:20:06.041698 kernel: pinctrl core: initialized pinctrl subsystem
May 13 04:20:06.041708 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 04:20:06.041717 kernel: audit: initializing netlink subsys (disabled)
May 13 04:20:06.041726 kernel: audit: type=2000 audit(1747110005.564:1): state=initialized audit_enabled=0 res=1
May 13 04:20:06.041734 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 04:20:06.041743 kernel: thermal_sys: Registered thermal governor 'user_space'
May 13 04:20:06.041751 kernel: cpuidle: using governor menu
May 13 04:20:06.041759 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 04:20:06.041768 kernel: dca service started, version 1.12.1
May 13 04:20:06.041776 kernel: PCI: Using configuration type 1 for base access
May 13 04:20:06.041787 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 13 04:20:06.041795 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 04:20:06.041804 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 13 04:20:06.041812 kernel: ACPI: Added _OSI(Module Device)
May 13 04:20:06.041821 kernel: ACPI: Added _OSI(Processor Device)
May 13 04:20:06.041830 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 04:20:06.041838 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 04:20:06.041847 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 04:20:06.041856 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 13 04:20:06.041866 kernel: ACPI: Interpreter enabled
May 13 04:20:06.041874 kernel: ACPI: PM: (supports S0 S3 S5)
May 13 04:20:06.041883 kernel: ACPI: Using IOAPIC for interrupt routing
May 13 04:20:06.041892 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 13 04:20:06.041900 kernel: PCI: Using E820 reservations for host bridge windows
May 13 04:20:06.041909 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 13 04:20:06.041918 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 04:20:06.042057 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 13 04:20:06.042186 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 13 04:20:06.042279 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 13 04:20:06.042294 kernel: acpiphp: Slot [3] registered
May 13 04:20:06.042303 kernel: acpiphp: Slot [4] registered
May 13 04:20:06.042320 kernel: acpiphp: Slot [5] registered
May 13 04:20:06.042328 kernel: acpiphp: Slot [6] registered
May 13 04:20:06.042337 kernel: acpiphp: Slot [7] registered
May 13 04:20:06.042345 kernel: acpiphp: Slot [8] registered
May 13 04:20:06.042358 kernel: acpiphp: Slot [9] registered
May 13 04:20:06.042367 kernel: acpiphp: Slot [10] registered
May 13 04:20:06.042375 kernel: acpiphp: Slot [11] registered
May 13 04:20:06.042384 kernel: acpiphp: Slot [12] registered
May 13 04:20:06.042392 kernel: acpiphp: Slot [13] registered
May 13 04:20:06.042401 kernel: acpiphp: Slot [14] registered
May 13 04:20:06.042409 kernel: acpiphp: Slot [15] registered
May 13 04:20:06.042417 kernel: acpiphp: Slot [16] registered
May 13 04:20:06.042426 kernel: acpiphp: Slot [17] registered
May 13 04:20:06.042436 kernel: acpiphp: Slot [18] registered
May 13 04:20:06.042445 kernel: acpiphp: Slot [19] registered
May 13 04:20:06.042453 kernel: acpiphp: Slot [20] registered
May 13 04:20:06.042461 kernel: acpiphp: Slot [21] registered
May 13 04:20:06.042470 kernel: acpiphp: Slot [22] registered
May 13 04:20:06.042478 kernel: acpiphp: Slot [23] registered
May 13 04:20:06.042487 kernel: acpiphp: Slot [24] registered
May 13 04:20:06.042495 kernel: acpiphp: Slot [25] registered
May 13 04:20:06.042504 kernel: acpiphp: Slot [26] registered
May 13 04:20:06.042512 kernel: acpiphp: Slot [27] registered
May 13 04:20:06.042523 kernel: acpiphp: Slot [28] registered
May 13 04:20:06.042531 kernel: acpiphp: Slot [29] registered
May 13 04:20:06.042540 kernel: acpiphp: Slot [30] registered
May 13 04:20:06.042548 kernel: acpiphp: Slot [31] registered
May 13 04:20:06.042557 kernel: PCI host bridge to bus 0000:00
May 13 04:20:06.042651 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 13 04:20:06.042734 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 13 04:20:06.042814 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 13 04:20:06.042898 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 13 04:20:06.042977 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
May 13 04:20:06.043056 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 04:20:06.043195 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
May 13 04:20:06.043301 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
May 13 04:20:06.043422 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
May 13 04:20:06.043520 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
May 13 04:20:06.043613 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
May 13 04:20:06.043708 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
May 13 04:20:06.043805 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
May 13 04:20:06.043904 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
May 13 04:20:06.044011 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
May 13 04:20:06.044169 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
May 13 04:20:06.044277 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
May 13 04:20:06.044377 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
May 13 04:20:06.044477 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
May 13 04:20:06.044569 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
May 13 04:20:06.044660 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
May 13 04:20:06.044751 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
May 13 04:20:06.044842 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 13 04:20:06.044944 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
May 13 04:20:06.045036 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
May 13 04:20:06.045167 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
May 13 04:20:06.045277 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
May 13 04:20:06.045368 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
May 13 04:20:06.045465 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
May 13 04:20:06.045562 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
May 13 04:20:06.045652 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
May 13 04:20:06.045741 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
May 13 04:20:06.045840 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
May 13 04:20:06.045932 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
May 13 04:20:06.046024 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
May 13 04:20:06.046381 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
May 13 04:20:06.046486 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
May 13 04:20:06.046578 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
May 13 04:20:06.046668 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
May 13 04:20:06.046681 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 13 04:20:06.046690 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 13 04:20:06.046699 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 13 04:20:06.046708 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 13 04:20:06.046716 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 13 04:20:06.046725 kernel: iommu: Default domain type: Translated
May 13 04:20:06.046737 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 13 04:20:06.046745 kernel: PCI: Using ACPI for IRQ routing
May 13 04:20:06.046754 kernel: PCI: pci_cache_line_size set to 64 bytes
May 13 04:20:06.046762 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 13 04:20:06.046771 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
May 13 04:20:06.046860 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
May 13 04:20:06.046951 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
May 13 04:20:06.047041 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 13 04:20:06.047057 kernel: vgaarb: loaded
May 13 04:20:06.047066 kernel: clocksource: Switched to clocksource kvm-clock
May 13 04:20:06.047075 kernel: VFS: Disk quotas dquot_6.6.0
May 13 04:20:06.047084 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 04:20:06.047268 kernel: pnp: PnP ACPI init
May 13 04:20:06.047388 kernel: pnp 00:03: [dma 2]
May 13 04:20:06.047403 kernel: pnp: PnP ACPI: found 5 devices
May 13 04:20:06.047412 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 13 04:20:06.047421 kernel: NET: Registered PF_INET protocol family
May 13 04:20:06.047434 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 04:20:06.047443 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 04:20:06.047451 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 04:20:06.047460 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 04:20:06.047469 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 04:20:06.047477 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 04:20:06.047486 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 04:20:06.047494 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 04:20:06.047505 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 04:20:06.047513 kernel: NET: Registered PF_XDP protocol family
May 13 04:20:06.047594 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 13 04:20:06.047673 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 13 04:20:06.047750 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 13 04:20:06.047829 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
May 13 04:20:06.048197 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
May 13 04:20:06.048293 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
May 13 04:20:06.048385 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 13 04:20:06.048403 kernel: PCI: CLS 0 bytes, default 64
May 13 04:20:06.048412 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 13 04:20:06.048421 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
May 13 04:20:06.048429 kernel: Initialise system trusted keyrings
May 13 04:20:06.048438 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 04:20:06.048447 kernel: Key type asymmetric registered
May 13 04:20:06.048455 kernel: Asymmetric key parser 'x509' registered
May 13 04:20:06.048464 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 13 04:20:06.048474 kernel: io scheduler mq-deadline registered
May 13 04:20:06.048483 kernel: io scheduler kyber registered
May 13 04:20:06.048491 kernel: io scheduler bfq registered
May 13 04:20:06.048500 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 13 04:20:06.048509 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
May 13 04:20:06.048518 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 13 04:20:06.048527 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 13 04:20:06.048535 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 13 04:20:06.048544 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 04:20:06.048554 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 13 04:20:06.048563 kernel: random: crng init done
May 13 04:20:06.048572 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 13 04:20:06.048580 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 13 04:20:06.048589 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 13 04:20:06.048598 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 13 04:20:06.048691 kernel: rtc_cmos 00:04: RTC can wake from S4
May 13 04:20:06.051133 kernel: rtc_cmos 00:04: registered as rtc0
May 13 04:20:06.051244 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T04:20:05 UTC (1747110005)
May 13 04:20:06.051348 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 13 04:20:06.051362 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 13 04:20:06.051371 kernel: NET: Registered PF_INET6 protocol family
May 13 04:20:06.051381 kernel: Segment Routing with IPv6
May 13 04:20:06.051390 kernel: In-situ OAM (IOAM) with IPv6
May 13 04:20:06.051398 kernel: NET: Registered PF_PACKET protocol family
May 13 04:20:06.051407 kernel: Key type dns_resolver registered
May 13 04:20:06.051416 kernel: IPI shorthand broadcast: enabled
May 13 04:20:06.051428 kernel: sched_clock: Marking stable (947007950, 181091951)->(1161060569, -32960668)
May 13 04:20:06.051437 kernel: registered taskstats version 1
May 13 04:20:06.051446 kernel: Loading compiled-in X.509 certificates
May 13 04:20:06.051455 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: b404fdaaed18d29adfca671c3bbb23eee96fb08f'
May 13 04:20:06.051463 kernel: Key type .fscrypt registered
May 13 04:20:06.051472 kernel: Key type fscrypt-provisioning registered
May 13 04:20:06.051481 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 04:20:06.051490 kernel: ima: Allocated hash algorithm: sha1
May 13 04:20:06.051499 kernel: ima: No architecture policies found
May 13 04:20:06.051509 kernel: clk: Disabling unused clocks
May 13 04:20:06.051518 kernel: Freeing unused kernel image (initmem) memory: 42864K
May 13 04:20:06.051526 kernel: Write protecting the kernel read-only data: 36864k
May 13 04:20:06.051535 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 13 04:20:06.051544 kernel: Run /init as init process
May 13 04:20:06.051552 kernel: with arguments:
May 13 04:20:06.051561 kernel: /init
May 13 04:20:06.051569 kernel: with environment:
May 13 04:20:06.051578 kernel: HOME=/
May 13 04:20:06.051588 kernel: TERM=linux
May 13 04:20:06.051596 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 04:20:06.051607 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 13 04:20:06.051619 systemd[1]: Detected virtualization kvm.
May 13 04:20:06.051629 systemd[1]: Detected architecture x86-64.
May 13 04:20:06.051638 systemd[1]: Running in initrd.
May 13 04:20:06.051648 systemd[1]: No hostname configured, using default hostname.
May 13 04:20:06.051659 systemd[1]: Hostname set to .
May 13 04:20:06.051669 systemd[1]: Initializing machine ID from VM UUID.
May 13 04:20:06.051678 systemd[1]: Queued start job for default target initrd.target.
May 13 04:20:06.051688 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 04:20:06.051697 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 04:20:06.051708 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 04:20:06.051718 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 04:20:06.051736 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 04:20:06.051748 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 04:20:06.051759 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 04:20:06.051770 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 04:20:06.051779 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 04:20:06.051791 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 04:20:06.051801 systemd[1]: Reached target paths.target - Path Units.
May 13 04:20:06.051811 systemd[1]: Reached target slices.target - Slice Units.
May 13 04:20:06.051820 systemd[1]: Reached target swap.target - Swaps.
May 13 04:20:06.051830 systemd[1]: Reached target timers.target - Timer Units.
May 13 04:20:06.051840 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 04:20:06.051850 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 04:20:06.051860 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 04:20:06.051870 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 13 04:20:06.051881 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 04:20:06.051891 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 04:20:06.051900 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 04:20:06.051910 systemd[1]: Reached target sockets.target - Socket Units.
May 13 04:20:06.051920 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 04:20:06.051930 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 04:20:06.051940 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 04:20:06.051950 systemd[1]: Starting systemd-fsck-usr.service...
May 13 04:20:06.051959 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 04:20:06.051971 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 04:20:06.051981 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 04:20:06.051991 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 04:20:06.052018 systemd-journald[184]: Collecting audit messages is disabled.
May 13 04:20:06.052043 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 04:20:06.052054 systemd[1]: Finished systemd-fsck-usr.service.
May 13 04:20:06.052064 systemd-journald[184]: Journal started
May 13 04:20:06.052090 systemd-journald[184]: Runtime Journal (/run/log/journal/d41a5c61d1bc4dbb81cd76d7709de479) is 8.0M, max 78.3M, 70.3M free.
May 13 04:20:06.067706 systemd-modules-load[185]: Inserted module 'overlay'
May 13 04:20:06.105866 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 04:20:06.105897 kernel: Bridge firewalling registered
May 13 04:20:06.105911 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 04:20:06.096001 systemd-modules-load[185]: Inserted module 'br_netfilter'
May 13 04:20:06.106691 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 04:20:06.107560 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 04:20:06.114255 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 04:20:06.115716 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 04:20:06.119286 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 04:20:06.120322 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 04:20:06.137604 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 04:20:06.142280 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 04:20:06.145806 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 04:20:06.148264 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 04:20:06.149796 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 04:20:06.154639 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 04:20:06.158256 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 04:20:06.166949 dracut-cmdline[212]: dracut-dracut-053
May 13 04:20:06.170277 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a30636f72ddb6c7dc7c9bee07b7cf23b403029ba1ff64eed2705530c62c7b592
May 13 04:20:06.182143 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 04:20:06.206251 systemd-resolved[217]: Positive Trust Anchors:
May 13 04:20:06.206972 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 04:20:06.207016 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 04:20:06.212750 systemd-resolved[217]: Defaulting to hostname 'linux'.
May 13 04:20:06.213582 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 04:20:06.216236 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 04:20:06.260126 kernel: SCSI subsystem initialized
May 13 04:20:06.273189 kernel: Loading iSCSI transport class v2.0-870.
May 13 04:20:06.286172 kernel: iscsi: registered transport (tcp)
May 13 04:20:06.309275 kernel: iscsi: registered transport (qla4xxx)
May 13 04:20:06.309349 kernel: QLogic iSCSI HBA Driver
May 13 04:20:06.363771 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 04:20:06.373389 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 04:20:06.423964 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 04:20:06.424073 kernel: device-mapper: uevent: version 1.0.3
May 13 04:20:06.426391 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 13 04:20:06.486241 kernel: raid6: sse2x4 gen() 13056 MB/s
May 13 04:20:06.501197 kernel: raid6: sse2x2 gen() 15199 MB/s
May 13 04:20:06.519569 kernel: raid6: sse2x1 gen() 10119 MB/s
May 13 04:20:06.519643 kernel: raid6: using algorithm sse2x2 gen() 15199 MB/s
May 13 04:20:06.538508 kernel: raid6: .... xor() 9402 MB/s, rmw enabled
May 13 04:20:06.538573 kernel: raid6: using ssse3x2 recovery algorithm
May 13 04:20:06.561612 kernel: xor: measuring software checksum speed
May 13 04:20:06.561687 kernel: prefetch64-sse : 18038 MB/sec
May 13 04:20:06.562165 kernel: generic_sse : 15557 MB/sec
May 13 04:20:06.563262 kernel: xor: using function: prefetch64-sse (18038 MB/sec)
May 13 04:20:06.749165 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 04:20:06.764832 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 04:20:06.771479 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 04:20:06.784369 systemd-udevd[403]: Using default interface naming scheme 'v255'.
May 13 04:20:06.788768 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 04:20:06.800376 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 04:20:06.820185 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
May 13 04:20:06.862926 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 04:20:06.872360 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 04:20:06.915777 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 04:20:06.927458 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 04:20:06.975005 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 04:20:06.976907 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 04:20:06.977471 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 04:20:06.978034 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 04:20:06.984198 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
May 13 04:20:06.986802 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 04:20:07.004962 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 04:20:07.009230 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
May 13 04:20:07.022207 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 04:20:07.022252 kernel: GPT:17805311 != 20971519
May 13 04:20:07.023176 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 04:20:07.023362 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 04:20:07.029785 kernel: GPT:17805311 != 20971519
May 13 04:20:07.029802 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 04:20:07.029814 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 04:20:07.023457 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 04:20:07.031239 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 04:20:07.031777 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 04:20:07.031825 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 04:20:07.035511 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 04:20:07.048458 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 04:20:07.060126 kernel: libata version 3.00 loaded.
May 13 04:20:07.069132 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (469)
May 13 04:20:07.072807 kernel: ata_piix 0000:00:01.1: version 2.13
May 13 04:20:07.078110 kernel: scsi host0: ata_piix
May 13 04:20:07.083133 kernel: BTRFS: device fsid b9c18834-b687-45d3-9868-9ac29dc7ddd7 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (453)
May 13 04:20:07.085125 kernel: scsi host1: ata_piix
May 13 04:20:07.085257 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
May 13 04:20:07.085272 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
May 13 04:20:07.087113 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 04:20:07.127906 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 04:20:07.134344 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 04:20:07.139880 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 04:20:07.144356 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 04:20:07.144928 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 04:20:07.152239 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 04:20:07.154656 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 04:20:07.165534 disk-uuid[507]: Primary Header is updated.
May 13 04:20:07.165534 disk-uuid[507]: Secondary Entries is updated.
May 13 04:20:07.165534 disk-uuid[507]: Secondary Header is updated.
May 13 04:20:07.175613 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 04:20:07.176963 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 04:20:07.183185 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 04:20:08.197168 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 04:20:08.200465 disk-uuid[510]: The operation has completed successfully.
May 13 04:20:08.267441 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 04:20:08.267687 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 04:20:08.301218 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 04:20:08.319340 sh[529]: Success
May 13 04:20:08.344207 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
May 13 04:20:08.441394 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 04:20:08.444887 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 04:20:08.454294 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 04:20:08.501154 kernel: BTRFS info (device dm-0): first mount of filesystem b9c18834-b687-45d3-9868-9ac29dc7ddd7
May 13 04:20:08.501241 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 13 04:20:08.501274 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 04:20:08.506851 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 04:20:08.510601 kernel: BTRFS info (device dm-0): using free space tree
May 13 04:20:08.531583 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 04:20:08.533848 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 04:20:08.541419 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 04:20:08.548420 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 04:20:08.577870 kernel: BTRFS info (device vda6): first mount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 04:20:08.577958 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 04:20:08.581505 kernel: BTRFS info (device vda6): using free space tree
May 13 04:20:08.594196 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 04:20:08.617214 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 13 04:20:08.624128 kernel: BTRFS info (device vda6): last unmount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 04:20:08.638443 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 04:20:08.647486 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 04:20:08.678058 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 04:20:08.684264 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 04:20:08.704065 systemd-networkd[711]: lo: Link UP
May 13 04:20:08.704075 systemd-networkd[711]: lo: Gained carrier
May 13 04:20:08.705267 systemd-networkd[711]: Enumeration completed
May 13 04:20:08.705865 systemd-networkd[711]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 04:20:08.705869 systemd-networkd[711]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 04:20:08.705919 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 04:20:08.706938 systemd-networkd[711]: eth0: Link UP
May 13 04:20:08.706942 systemd-networkd[711]: eth0: Gained carrier
May 13 04:20:08.706949 systemd-networkd[711]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 04:20:08.707794 systemd[1]: Reached target network.target - Network.
May 13 04:20:08.717150 systemd-networkd[711]: eth0: DHCPv4 address 172.24.4.57/24, gateway 172.24.4.1 acquired from 172.24.4.1
May 13 04:20:08.797744 ignition[660]: Ignition 2.19.0
May 13 04:20:08.797764 ignition[660]: Stage: fetch-offline
May 13 04:20:08.799594 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 04:20:08.797826 ignition[660]: no configs at "/usr/lib/ignition/base.d"
May 13 04:20:08.797845 ignition[660]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 04:20:08.798000 ignition[660]: parsed url from cmdline: ""
May 13 04:20:08.798006 ignition[660]: no config URL provided
May 13 04:20:08.798016 ignition[660]: reading system config file "/usr/lib/ignition/user.ign"
May 13 04:20:08.798030 ignition[660]: no config at "/usr/lib/ignition/user.ign"
May 13 04:20:08.798038 ignition[660]: failed to fetch config: resource requires networking
May 13 04:20:08.798396 ignition[660]: Ignition finished successfully
May 13 04:20:08.809361 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 13 04:20:08.836641 ignition[722]: Ignition 2.19.0
May 13 04:20:08.836667 ignition[722]: Stage: fetch
May 13 04:20:08.837060 ignition[722]: no configs at "/usr/lib/ignition/base.d"
May 13 04:20:08.837086 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 04:20:08.837336 ignition[722]: parsed url from cmdline: ""
May 13 04:20:08.837345 ignition[722]: no config URL provided
May 13 04:20:08.837359 ignition[722]: reading system config file "/usr/lib/ignition/user.ign"
May 13 04:20:08.837378 ignition[722]: no config at "/usr/lib/ignition/user.ign"
May 13 04:20:08.837498 ignition[722]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
May 13 04:20:08.837532 ignition[722]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
May 13 04:20:08.837660 ignition[722]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
May 13 04:20:09.079209 ignition[722]: GET result: OK
May 13 04:20:09.079359 ignition[722]: parsing config with SHA512: 04d03824cbbe123b7e7a7ea36a6683a99d7337687b5926ac7d0607d793797e560173fd255a3d2d034b104c135ccebf4854d488c374a008fd1b0d6363e48346a0
May 13 04:20:09.085866 unknown[722]: fetched base config from "system"
May 13 04:20:09.085891 unknown[722]: fetched base config from "system"
May 13 04:20:09.085918 unknown[722]: fetched user config from "openstack"
May 13 04:20:09.088898 ignition[722]: fetch: fetch complete
May 13 04:20:09.091483 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 13 04:20:09.088909 ignition[722]: fetch: fetch passed
May 13 04:20:09.089010 ignition[722]: Ignition finished successfully
May 13 04:20:09.106428 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 04:20:09.134375 ignition[728]: Ignition 2.19.0
May 13 04:20:09.134401 ignition[728]: Stage: kargs
May 13 04:20:09.134803 ignition[728]: no configs at "/usr/lib/ignition/base.d"
May 13 04:20:09.134830 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 04:20:09.137242 ignition[728]: kargs: kargs passed
May 13 04:20:09.138727 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 04:20:09.137339 ignition[728]: Ignition finished successfully
May 13 04:20:09.154518 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 04:20:09.174060 ignition[734]: Ignition 2.19.0
May 13 04:20:09.174078 ignition[734]: Stage: disks
May 13 04:20:09.174447 ignition[734]: no configs at "/usr/lib/ignition/base.d"
May 13 04:20:09.178514 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 04:20:09.174471 ignition[734]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 04:20:09.175922 ignition[734]: disks: disks passed
May 13 04:20:09.181509 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 04:20:09.175980 ignition[734]: Ignition finished successfully
May 13 04:20:09.183612 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 04:20:09.185599 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 04:20:09.187795 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 04:20:09.189779 systemd[1]: Reached target basic.target - Basic System.
May 13 04:20:09.201368 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 04:20:09.225564 systemd-fsck[742]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
May 13 04:20:09.240363 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 04:20:09.249363 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 04:20:09.402165 kernel: EXT4-fs (vda9): mounted filesystem 422ad498-4f61-405b-9d71-25f19459d196 r/w with ordered data mode. Quota mode: none.
May 13 04:20:09.402305 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 04:20:09.403295 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 04:20:09.411356 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 04:20:09.414765 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 04:20:09.417933 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 04:20:09.424638 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
May 13 04:20:09.427446 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (750)
May 13 04:20:09.425408 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 04:20:09.425443 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 04:20:09.436171 kernel: BTRFS info (device vda6): first mount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 04:20:09.436189 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 04:20:09.436202 kernel: BTRFS info (device vda6): using free space tree
May 13 04:20:09.437066 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 04:20:09.447358 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 04:20:09.448823 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 04:20:09.452934 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 04:20:09.568344 initrd-setup-root[780]: cut: /sysroot/etc/passwd: No such file or directory
May 13 04:20:09.577527 initrd-setup-root[787]: cut: /sysroot/etc/group: No such file or directory
May 13 04:20:09.585498 initrd-setup-root[794]: cut: /sysroot/etc/shadow: No such file or directory
May 13 04:20:09.591877 initrd-setup-root[801]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 04:20:09.677339 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 04:20:09.684171 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 04:20:09.685778 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 04:20:09.694059 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 04:20:09.698064 kernel: BTRFS info (device vda6): last unmount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 04:20:09.719428 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 04:20:09.732585 ignition[869]: INFO : Ignition 2.19.0
May 13 04:20:09.732585 ignition[869]: INFO : Stage: mount
May 13 04:20:09.733794 ignition[869]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 04:20:09.733794 ignition[869]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 04:20:09.733794 ignition[869]: INFO : mount: mount passed
May 13 04:20:09.737487 ignition[869]: INFO : Ignition finished successfully
May 13 04:20:09.735321 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 04:20:10.311611 systemd-networkd[711]: eth0: Gained IPv6LL
May 13 04:20:16.656538 coreos-metadata[752]: May 13 04:20:16.656 WARN failed to locate config-drive, using the metadata service API instead
May 13 04:20:16.697060 coreos-metadata[752]: May 13 04:20:16.696 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
May 13 04:20:16.713721 coreos-metadata[752]: May 13 04:20:16.713 INFO Fetch successful
May 13 04:20:16.715237 coreos-metadata[752]: May 13 04:20:16.714 INFO wrote hostname ci-4081-3-3-n-3bdfb8ea63.novalocal to /sysroot/etc/hostname
May 13 04:20:16.718163 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
May 13 04:20:16.718404 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
May 13 04:20:16.729300 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 04:20:16.757571 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 04:20:16.776177 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (885)
May 13 04:20:16.783847 kernel: BTRFS info (device vda6): first mount of filesystem 97fe19c2-c075-4d7e-9417-f9c367b49e5c
May 13 04:20:16.783911 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 04:20:16.790552 kernel: BTRFS info (device vda6): using free space tree
May 13 04:20:16.800197 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 04:20:16.805451 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 04:20:16.845127 ignition[903]: INFO : Ignition 2.19.0
May 13 04:20:16.845127 ignition[903]: INFO : Stage: files
May 13 04:20:16.848258 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 04:20:16.848258 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 04:20:16.848258 ignition[903]: DEBUG : files: compiled without relabeling support, skipping
May 13 04:20:16.848258 ignition[903]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 04:20:16.848258 ignition[903]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 04:20:16.859149 ignition[903]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 04:20:16.859149 ignition[903]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 04:20:16.859149 ignition[903]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 04:20:16.859149 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 04:20:16.859149 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 13 04:20:16.853064 unknown[903]: wrote ssh authorized keys file for user: core
May 13 04:20:16.926400 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 04:20:17.224729 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 04:20:17.224729 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 13 04:20:17.224729 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 13 04:20:17.224729 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 04:20:17.224729 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 04:20:17.224729 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 04:20:17.237037 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 04:20:17.237037 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 04:20:17.237037 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 04:20:17.237037 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 04:20:17.237037 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 04:20:17.237037 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 13 04:20:17.237037 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 13 04:20:17.237037 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 13 04:20:17.237037 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
May 13 04:20:18.023926 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 13 04:20:20.225938 ignition[903]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 13 04:20:20.225938 ignition[903]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 13 04:20:20.229744 ignition[903]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 04:20:20.232145 ignition[903]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 04:20:20.232145 ignition[903]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 13 04:20:20.232145 ignition[903]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 13 04:20:20.232145 ignition[903]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 13 04:20:20.232145 ignition[903]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 04:20:20.232145 ignition[903]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 04:20:20.232145 ignition[903]: INFO : files: files passed
May 13 04:20:20.232145 ignition[903]: INFO : Ignition finished successfully
May 13 04:20:20.231356 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 04:20:20.242455 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 04:20:20.245768 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 04:20:20.247339 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 04:20:20.247419 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 04:20:20.260871 initrd-setup-root-after-ignition[931]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 04:20:20.260871 initrd-setup-root-after-ignition[931]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 04:20:20.263901 initrd-setup-root-after-ignition[935]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 04:20:20.266397 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 04:20:20.268977 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 04:20:20.275367 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 04:20:20.297814 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 04:20:20.297989 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 04:20:20.299908 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 04:20:20.310273 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 04:20:20.312293 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 04:20:20.319414 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 04:20:20.348286 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 04:20:20.354390 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 04:20:20.393017 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 04:20:20.396539 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 04:20:20.398406 systemd[1]: Stopped target timers.target - Timer Units.
May 13 04:20:20.401295 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 04:20:20.401610 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 04:20:20.404723 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 04:20:20.406579 systemd[1]: Stopped target basic.target - Basic System.
May 13 04:20:20.409565 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 04:20:20.412253 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 04:20:20.414823 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 04:20:20.417802 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 04:20:20.420804 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 04:20:20.423917 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 04:20:20.426775 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 04:20:20.429753 systemd[1]: Stopped target swap.target - Swaps.
May 13 04:20:20.432446 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 04:20:20.432729 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 04:20:20.435845 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 04:20:20.437842 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 04:20:20.440356 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 04:20:20.441049 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 04:20:20.443411 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 04:20:20.443792 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 04:20:20.447510 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 04:20:20.447877 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 04:20:20.449745 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 04:20:20.450163 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 04:20:20.461634 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 04:20:20.467357 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 04:20:20.469474 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 04:20:20.470378 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 04:20:20.473034 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 04:20:20.473665 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 04:20:20.488594 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 04:20:20.489241 ignition[955]: INFO : Ignition 2.19.0
May 13 04:20:20.489241 ignition[955]: INFO : Stage: umount
May 13 04:20:20.489241 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 04:20:20.489241 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 04:20:20.492986 ignition[955]: INFO : umount: umount passed
May 13 04:20:20.492986 ignition[955]: INFO : Ignition finished successfully
May 13 04:20:20.489954 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 04:20:20.494805 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 04:20:20.494906 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 04:20:20.496879 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 04:20:20.496946 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 04:20:20.497685 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 04:20:20.497726 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 04:20:20.499765 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 13 04:20:20.499802 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 13 04:20:20.500873 systemd[1]: Stopped target network.target - Network.
May 13 04:20:20.502880 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 04:20:20.502924 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 04:20:20.504002 systemd[1]: Stopped target paths.target - Path Units.
May 13 04:20:20.505029 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 04:20:20.508146 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 04:20:20.508715 systemd[1]: Stopped target slices.target - Slice Units.
May 13 04:20:20.510408 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 04:20:20.511664 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 04:20:20.511698 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 04:20:20.513900 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 04:20:20.513932 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 04:20:20.515011 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 04:20:20.515053 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 04:20:20.516709 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 04:20:20.516749 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 04:20:20.517868 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 04:20:20.519293 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 04:20:20.521500 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 04:20:20.522849 systemd-networkd[711]: eth0: DHCPv6 lease lost
May 13 04:20:20.523859 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 04:20:20.524058 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 04:20:20.525085 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 04:20:20.525191 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 04:20:20.527453 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 04:20:20.527559 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 04:20:20.529716 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 04:20:20.529987 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 04:20:20.530755 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 04:20:20.530797 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 04:20:20.536202 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 04:20:20.538056 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 04:20:20.538130 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 04:20:20.539179 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 04:20:20.539220 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 04:20:20.544142 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 04:20:20.544199 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 04:20:20.545367 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 04:20:20.545410 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 04:20:20.546506 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 04:20:20.555354 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 04:20:20.556019 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 04:20:20.557407 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 04:20:20.557544 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 04:20:20.559741 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 04:20:20.559795 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 04:20:20.561087 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 04:20:20.561142 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 04:20:20.562342 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 04:20:20.562386 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 04:20:20.563943 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 04:20:20.563987 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 04:20:20.564966 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 04:20:20.565006 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 04:20:20.578229 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 04:20:20.580147 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 04:20:20.580201 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 04:20:20.580754 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 13 04:20:20.580795 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 04:20:20.581357 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 04:20:20.581397 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 04:20:20.581969 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 04:20:20.582009 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 04:20:20.583542 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 04:20:20.583643 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 04:20:20.585186 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 04:20:20.597426 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 04:20:20.604179 systemd[1]: Switching root.
May 13 04:20:20.634787 systemd-journald[184]: Journal stopped
May 13 04:20:22.162801 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
May 13 04:20:22.162859 kernel: SELinux: policy capability network_peer_controls=1
May 13 04:20:22.162878 kernel: SELinux: policy capability open_perms=1
May 13 04:20:22.162890 kernel: SELinux: policy capability extended_socket_class=1
May 13 04:20:22.162905 kernel: SELinux: policy capability always_check_network=0
May 13 04:20:22.162915 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 04:20:22.162927 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 04:20:22.162937 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 04:20:22.162948 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 04:20:22.162959 kernel: audit: type=1403 audit(1747110021.243:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 04:20:22.162973 systemd[1]: Successfully loaded SELinux policy in 60.268ms.
May 13 04:20:22.162992 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.095ms.
May 13 04:20:22.163010 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 13 04:20:22.163023 systemd[1]: Detected virtualization kvm.
May 13 04:20:22.163035 systemd[1]: Detected architecture x86-64.
May 13 04:20:22.163048 systemd[1]: Detected first boot.
May 13 04:20:22.163060 systemd[1]: Hostname set to .
May 13 04:20:22.163072 systemd[1]: Initializing machine ID from VM UUID.
May 13 04:20:22.163084 zram_generator::config[998]: No configuration found.
May 13 04:20:22.163998 systemd[1]: Populated /etc with preset unit settings.
May 13 04:20:22.164015 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 04:20:22.164027 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 13 04:20:22.164039 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 04:20:22.164052 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 04:20:22.164064 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 04:20:22.164076 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 04:20:22.164088 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 04:20:22.164163 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 04:20:22.164178 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 04:20:22.164190 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 04:20:22.164202 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 04:20:22.164213 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 04:20:22.164226 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 04:20:22.164238 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 04:20:22.164250 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 04:20:22.164262 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 04:20:22.164277 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 04:20:22.164289 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 13 04:20:22.164300 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 04:20:22.164312 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 13 04:20:22.164324 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 13 04:20:22.164336 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 13 04:20:22.164350 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 04:20:22.164362 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 04:20:22.164374 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 04:20:22.164385 systemd[1]: Reached target slices.target - Slice Units.
May 13 04:20:22.164397 systemd[1]: Reached target swap.target - Swaps.
May 13 04:20:22.164409 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 04:20:22.164421 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 04:20:22.164433 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 04:20:22.164445 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 04:20:22.164458 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 04:20:22.164470 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 04:20:22.164482 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 04:20:22.164494 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 04:20:22.164506 systemd[1]: Mounting media.mount - External Media Directory...
May 13 04:20:22.164518 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 04:20:22.164530 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 04:20:22.164542 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 04:20:22.164554 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 04:20:22.164568 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 04:20:22.164580 systemd[1]: Reached target machines.target - Containers.
May 13 04:20:22.164592 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 04:20:22.164604 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 04:20:22.164616 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 04:20:22.164628 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 04:20:22.164640 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 04:20:22.164652 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 04:20:22.164665 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 04:20:22.164677 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 04:20:22.164689 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 04:20:22.164701 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 04:20:22.164713 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 04:20:22.164725 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 13 04:20:22.164739 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 04:20:22.164750 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 04:20:22.164763 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 04:20:22.164777 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 04:20:22.164789 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 04:20:22.164800 kernel: fuse: init (API version 7.39)
May 13 04:20:22.164811 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 04:20:22.164823 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 04:20:22.164835 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 04:20:22.164847 systemd[1]: Stopped verity-setup.service.
May 13 04:20:22.164859 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 04:20:22.164873 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 04:20:22.164885 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 04:20:22.164896 systemd[1]: Mounted media.mount - External Media Directory.
May 13 04:20:22.164908 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 04:20:22.164920 kernel: loop: module loaded
May 13 04:20:22.164949 systemd-journald[1094]: Collecting audit messages is disabled.
May 13 04:20:22.164975 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 04:20:22.164988 systemd-journald[1094]: Journal started
May 13 04:20:22.165012 systemd-journald[1094]: Runtime Journal (/run/log/journal/d41a5c61d1bc4dbb81cd76d7709de479) is 8.0M, max 78.3M, 70.3M free.
May 13 04:20:21.841594 systemd[1]: Queued start job for default target multi-user.target.
May 13 04:20:21.864320 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 13 04:20:21.864664 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 04:20:22.168538 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 04:20:22.169237 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 04:20:22.171070 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 04:20:22.171881 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 04:20:22.172628 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 04:20:22.172742 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 04:20:22.173478 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 04:20:22.173586 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 04:20:22.174465 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 04:20:22.174580 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 04:20:22.175340 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 04:20:22.175449 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 04:20:22.176165 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 04:20:22.176273 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 04:20:22.176959 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 04:20:22.177705 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 04:20:22.178419 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 04:20:22.191730 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 04:20:22.210128 kernel: ACPI: bus type drm_connector registered
May 13 04:20:22.211702 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 04:20:22.220935 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 04:20:22.221611 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 04:20:22.221648 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 04:20:22.223345 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 13 04:20:22.228254 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 13 04:20:22.232247 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 04:20:22.232896 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 04:20:22.235995 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 04:20:22.239360 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 04:20:22.239923 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 04:20:22.249246 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 04:20:22.250396 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 04:20:22.253397 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 04:20:22.255357 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 04:20:22.260231 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 04:20:22.264485 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 04:20:22.264626 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 04:20:22.265368 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 04:20:22.265986 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 04:20:22.268696 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 13 04:20:22.284733 systemd-journald[1094]: Time spent on flushing to /var/log/journal/d41a5c61d1bc4dbb81cd76d7709de479 is 57.190ms for 945 entries.
May 13 04:20:22.284733 systemd-journald[1094]: System Journal (/var/log/journal/d41a5c61d1bc4dbb81cd76d7709de479) is 8.0M, max 584.8M, 576.8M free.
May 13 04:20:22.379604 systemd-journald[1094]: Received client request to flush runtime journal.
May 13 04:20:22.379659 kernel: loop0: detected capacity change from 0 to 140768
May 13 04:20:22.285369 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 04:20:22.291395 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 04:20:22.293032 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 04:20:22.304271 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 13 04:20:22.314904 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 13 04:20:22.327714 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 04:20:22.346330 udevadm[1141]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 13 04:20:22.380681 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 04:20:22.391941 systemd-tmpfiles[1131]: ACLs are not supported, ignoring.
May 13 04:20:22.391960 systemd-tmpfiles[1131]: ACLs are not supported, ignoring.
May 13 04:20:22.399557 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 04:20:22.400218 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 13 04:20:22.401063 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 04:20:22.413316 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 04:20:22.429118 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 04:20:22.458128 kernel: loop1: detected capacity change from 0 to 205544
May 13 04:20:22.473421 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 04:20:22.480447 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 04:20:22.503128 systemd-tmpfiles[1153]: ACLs are not supported, ignoring.
May 13 04:20:22.503148 systemd-tmpfiles[1153]: ACLs are not supported, ignoring.
May 13 04:20:22.509498 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 04:20:22.520120 kernel: loop2: detected capacity change from 0 to 8
May 13 04:20:22.538120 kernel: loop3: detected capacity change from 0 to 142488
May 13 04:20:22.621139 kernel: loop4: detected capacity change from 0 to 140768
May 13 04:20:22.662122 kernel: loop5: detected capacity change from 0 to 205544
May 13 04:20:22.729135 kernel: loop6: detected capacity change from 0 to 8
May 13 04:20:22.735111 kernel: loop7: detected capacity change from 0 to 142488
May 13 04:20:22.787989 (sd-merge)[1159]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
May 13 04:20:22.790826 (sd-merge)[1159]: Merged extensions into '/usr'.
May 13 04:20:22.802417 systemd[1]: Reloading requested from client PID 1130 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 04:20:22.802453 systemd[1]: Reloading...
May 13 04:20:22.888163 zram_generator::config[1181]: No configuration found.
May 13 04:20:23.134580 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 04:20:23.190854 systemd[1]: Reloading finished in 387 ms.
May 13 04:20:23.200256 ldconfig[1125]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 04:20:23.216651 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 13 04:20:23.217532 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 04:20:23.218294 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 04:20:23.229256 systemd[1]: Starting ensure-sysext.service...
May 13 04:20:23.230616 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 04:20:23.234293 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 04:20:23.244460 systemd[1]: Reloading requested from client PID 1242 ('systemctl') (unit ensure-sysext.service)...
May 13 04:20:23.244475 systemd[1]: Reloading...
May 13 04:20:23.257147 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 04:20:23.257481 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 04:20:23.258311 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 04:20:23.258615 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
May 13 04:20:23.258670 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
May 13 04:20:23.268424 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot.
May 13 04:20:23.268438 systemd-tmpfiles[1243]: Skipping /boot
May 13 04:20:23.275516 systemd-udevd[1244]: Using default interface naming scheme 'v255'.
May 13 04:20:23.285994 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot.
May 13 04:20:23.286007 systemd-tmpfiles[1243]: Skipping /boot
May 13 04:20:23.292133 zram_generator::config[1268]: No configuration found.
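The loop2..loop7 capacity changes and the (sd-merge) lines above are systemd-sysext assembling the /usr overlay: each extension image reachable under /etc/extensions is attached to a loop device and merged, which is why the set 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack' is reported. The kubernetes image is the one Ignition staged in ops (9) and (a) of the files stage; a Butane sketch of that pairing (paths and URL taken from the log itself, everything else assumed) could look like:

variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw
      contents:
        source: https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw
  links:
    - path: /etc/extensions/kubernetes.raw    # systemd-sysext scans /etc/extensions at boot
      target: /opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw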
May 13 04:20:23.438124 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1301)
May 13 04:20:23.522118 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
May 13 04:20:23.529402 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 13 04:20:23.537659 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 13 04:20:23.537707 kernel: ACPI: button: Power Button [PWRF]
May 13 04:20:23.536932 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 04:20:23.610450 kernel: mousedev: PS/2 mouse device common for all mice
May 13 04:20:23.613705 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 13 04:20:23.613753 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 04:20:23.614818 systemd[1]: Reloading finished in 370 ms.
May 13 04:20:23.628867 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
May 13 04:20:23.631164 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
May 13 04:20:23.631529 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 04:20:23.637541 kernel: Console: switching to colour dummy device 80x25
May 13 04:20:23.640133 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
May 13 04:20:23.640173 kernel: [drm] features: -context_init
May 13 04:20:23.641599 kernel: [drm] number of scanouts: 1
May 13 04:20:23.641635 kernel: [drm] number of cap sets: 0
May 13 04:20:23.643579 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 04:20:23.644122 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
May 13 04:20:23.653570 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
May 13 04:20:23.653654 kernel: Console: switching to colour frame buffer device 160x50
May 13 04:20:23.662126 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
May 13 04:20:23.681399 systemd[1]: Finished ensure-sysext.service.
May 13 04:20:23.694504 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 13 04:20:23.697332 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 04:20:23.702246 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 13 04:20:23.705223 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 13 04:20:23.705426 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 04:20:23.707250 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 13 04:20:23.711652 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 04:20:23.717357 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 04:20:23.718381 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 04:20:23.720268 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 04:20:23.721241 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 04:20:23.724270 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 13 04:20:23.726146 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 04:20:23.740926 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 04:20:23.746259 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 04:20:23.750834 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 13 04:20:23.754817 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 04:20:23.757593 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 04:20:23.758881 lvm[1364]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 04:20:23.758892 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 04:20:23.759640 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 04:20:23.759861 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 04:20:23.760825 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 04:20:23.760940 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 04:20:23.762260 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 04:20:23.762384 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 04:20:23.772479 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 04:20:23.778337 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 04:20:23.789203 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 04:20:23.790799 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 04:20:23.792286 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 13 04:20:23.793829 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 13 04:20:23.802064 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 04:20:23.820074 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 13 04:20:23.821962 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 04:20:23.834263 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 13 04:20:23.844244 augenrules[1402]: No rules
May 13 04:20:23.847381 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 13 04:20:23.857944 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 04:20:23.862338 lvm[1400]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 04:20:23.862600 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 13 04:20:23.874353 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 13 04:20:23.893446 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 13 04:20:23.910389 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 13 04:20:23.917463 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 13 04:20:23.920952 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 04:20:23.943410 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 04:20:23.966668 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 13 04:20:23.968491 systemd[1]: Reached target time-set.target - System Time Set.
May 13 04:20:23.990208 systemd-networkd[1374]: lo: Link UP
May 13 04:20:23.990436 systemd-networkd[1374]: lo: Gained carrier
May 13 04:20:23.991667 systemd-networkd[1374]: Enumeration completed
May 13 04:20:23.991892 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 04:20:23.992386 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 04:20:23.992393 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 04:20:23.993990 systemd-networkd[1374]: eth0: Link UP
May 13 04:20:23.994126 systemd-networkd[1374]: eth0: Gained carrier
May 13 04:20:23.994227 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 04:20:24.004321 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 13 04:20:24.007166 systemd-networkd[1374]: eth0: DHCPv4 address 172.24.4.57/24, gateway 172.24.4.1 acquired from 172.24.4.1
May 13 04:20:24.008061 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection.
May 13 04:20:24.012750 systemd-resolved[1376]: Positive Trust Anchors:
May 13 04:20:24.012991 systemd-resolved[1376]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 04:20:24.013089 systemd-resolved[1376]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 04:20:24.018732 systemd-resolved[1376]: Using system hostname 'ci-4081-3-3-n-3bdfb8ea63.novalocal'.
May 13 04:20:24.020235 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 04:20:24.022026 systemd[1]: Reached target network.target - Network.
May 13 04:20:24.022675 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 04:20:24.023565 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 04:20:24.024808 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
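eth0 was matched by Flatcar's catch-all /usr/lib/systemd/network/zz-default.network, which simply enables DHCP on whatever interface appears, hence the DHCPv4 lease 172.24.4.57/24 from 172.24.4.1 and the "potentially unpredictable interface name" warning. A host that needed pinned settings could ship a .network file sorting before zz-default, since networkd uses the first match in lexical order; a hypothetical Butane sketch (interface name and settings assumed, not taken from this host):

variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /etc/systemd/network/10-eth0.network   # sorts before zz-default.network, so it wins the match
      contents:
        inline: |
          [Match]
          Name=eth0

          [Network]
          DHCP=ipv4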
May 13 04:20:24.026256 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 13 04:20:24.027400 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 13 04:20:24.028657 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 13 04:20:24.029800 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 13 04:20:24.030997 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 13 04:20:24.031112 systemd[1]: Reached target paths.target - Path Units.
May 13 04:20:24.032238 systemd[1]: Reached target timers.target - Timer Units.
May 13 04:20:24.035146 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 13 04:20:24.039701 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 13 04:20:24.046960 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 13 04:20:24.050114 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 13 04:20:24.052614 systemd[1]: Reached target sockets.target - Socket Units.
May 13 04:20:24.054155 systemd[1]: Reached target basic.target - Basic System.
May 13 04:20:24.055201 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 13 04:20:24.055227 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 13 04:20:24.066460 systemd[1]: Starting containerd.service - containerd container runtime...
May 13 04:20:24.071076 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 13 04:20:24.079254 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 13 04:20:24.092215 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 13 04:20:24.099700 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 13 04:20:24.102841 jq[1432]: false
May 13 04:20:24.102813 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 13 04:20:24.105301 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 13 04:20:24.109234 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 13 04:20:24.114293 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 13 04:20:24.123279 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 13 04:20:24.136341 systemd[1]: Starting systemd-logind.service - User Login Management...
May 13 04:20:24.139612 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 13 04:20:24.144143 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
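prepare-helm.service, started above, is the unit Ignition wrote and enabled during the files stage; its contents are not reproduced in this journal. Given its description ("Unpack helm to /opt/bin") and the tar[1453] output "linux-amd64/helm" a little further on, a plausible reconstruction is the following Butane sketch; the ExecStart lines in particular are assumptions, not recovered text:

variant: flatcar
version: 1.0.0
systemd:
  units:
    - name: prepare-helm.service
      enabled: true                     # op(d): preset set to enabled by Ignition
      contents: |
        [Unit]
        Description=Unpack helm to /opt/bin
        # Assumed guard: only run if Ignition actually fetched the tarball (op 3)
        ConditionPathExists=/opt/helm-v3.13.2-linux-amd64.tar.gz

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/usr/bin/mkdir -p /opt/bin
        ExecStart=/usr/bin/tar -xzvf /opt/helm-v3.13.2-linux-amd64.tar.gz -C /opt/bin --strip-components=1 linux-amd64/helm

        [Install]
        WantedBy=multi-user.target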
May 13 04:20:24.149302 extend-filesystems[1433]: Found loop4
May 13 04:20:24.149302 extend-filesystems[1433]: Found loop5
May 13 04:20:24.149302 extend-filesystems[1433]: Found loop6
May 13 04:20:24.149302 extend-filesystems[1433]: Found loop7
May 13 04:20:24.149302 extend-filesystems[1433]: Found vda
May 13 04:20:24.149302 extend-filesystems[1433]: Found vda1
May 13 04:20:24.149302 extend-filesystems[1433]: Found vda2
May 13 04:20:24.149302 extend-filesystems[1433]: Found vda3
May 13 04:20:24.149302 extend-filesystems[1433]: Found usr
May 13 04:20:24.149302 extend-filesystems[1433]: Found vda4
May 13 04:20:24.149302 extend-filesystems[1433]: Found vda6
May 13 04:20:24.149302 extend-filesystems[1433]: Found vda7
May 13 04:20:24.259162 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
May 13 04:20:24.259194 kernel: EXT4-fs (vda9): resized filesystem to 2014203
May 13 04:20:24.259209 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1313)
May 13 04:20:24.145389 systemd[1]: Starting update-engine.service - Update Engine...
May 13 04:20:24.196701 dbus-daemon[1429]: [system] SELinux support is enabled
May 13 04:20:24.263713 extend-filesystems[1433]: Found vda9
May 13 04:20:24.263713 extend-filesystems[1433]: Checking size of /dev/vda9
May 13 04:20:24.263713 extend-filesystems[1433]: Resized partition /dev/vda9
May 13 04:20:24.157319 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 13 04:20:24.268807 extend-filesystems[1456]: resize2fs 1.47.1 (20-May-2024)
May 13 04:20:24.268807 extend-filesystems[1456]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 13 04:20:24.268807 extend-filesystems[1456]: old_desc_blocks = 1, new_desc_blocks = 1
May 13 04:20:24.268807 extend-filesystems[1456]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
May 13 04:20:24.172985 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 13 04:20:24.277905 extend-filesystems[1433]: Resized filesystem in /dev/vda9
May 13 04:20:24.280147 update_engine[1442]: I20250513 04:20:24.224457 1442 main.cc:92] Flatcar Update Engine starting
May 13 04:20:24.280147 update_engine[1442]: I20250513 04:20:24.252933 1442 update_check_scheduler.cc:74] Next update check in 2m4s
May 13 04:20:24.173555 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 13 04:20:24.286428 jq[1443]: true
May 13 04:20:24.177454 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 13 04:20:24.179802 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 13 04:20:24.226607 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 13 04:20:24.233366 systemd[1]: motdgen.service: Deactivated successfully.
May 13 04:20:24.233538 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 13 04:20:24.245376 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 13 04:20:24.248015 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 13 04:20:24.285367 (ntainerd)[1461]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 13 04:20:24.298951 tar[1453]: linux-amd64/helm
May 13 04:20:24.299280 jq[1460]: true
May 13 04:20:24.308944 systemd[1]: Started update-engine.service - Update Engine.
May 13 04:20:24.311830 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 13 04:20:24.311866 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 13 04:20:24.316008 systemd-logind[1439]: New seat seat0.
May 13 04:20:24.316113 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 13 04:20:24.316137 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 13 04:20:24.321782 systemd-logind[1439]: Watching system buttons on /dev/input/event1 (Power Button)
May 13 04:20:24.322476 systemd-logind[1439]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 13 04:20:24.326297 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 13 04:20:24.329187 systemd[1]: Started systemd-logind.service - User Login Management.
May 13 04:20:24.389420 bash[1487]: Updated "/home/core/.ssh/authorized_keys"
May 13 04:20:24.390760 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 13 04:20:24.406340 systemd[1]: Starting sshkeys.service...
May 13 04:20:24.455210 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 13 04:20:24.468430 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 13 04:20:24.534076 locksmithd[1473]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 13 04:20:24.627967 containerd[1461]: time="2025-05-13T04:20:24.627719432Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 13 04:20:24.645604 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 13 04:20:24.658602 containerd[1461]: time="2025-05-13T04:20:24.658562459Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 13 04:20:24.660067 containerd[1461]: time="2025-05-13T04:20:24.660037786Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 13 04:20:24.660151 containerd[1461]: time="2025-05-13T04:20:24.660135609Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 13 04:20:24.660210 containerd[1461]: time="2025-05-13T04:20:24.660196363Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 13 04:20:24.660411 containerd[1461]: time="2025-05-13T04:20:24.660392942Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 13 04:20:24.660477 containerd[1461]: time="2025-05-13T04:20:24.660461851Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 13 04:20:24.660603 containerd[1461]: time="2025-05-13T04:20:24.660583118Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 13 04:20:24.660667 containerd[1461]: time="2025-05-13T04:20:24.660652889Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 13 04:20:24.660878 containerd[1461]: time="2025-05-13T04:20:24.660858174Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 13 04:20:24.660943 containerd[1461]: time="2025-05-13T04:20:24.660929398Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 13 04:20:24.661002 containerd[1461]: time="2025-05-13T04:20:24.660987036Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 13 04:20:24.661053 containerd[1461]: time="2025-05-13T04:20:24.661040266Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 13 04:20:24.661214 containerd[1461]: time="2025-05-13T04:20:24.661196459Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 13 04:20:24.661553 containerd[1461]: time="2025-05-13T04:20:24.661536246Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 13 04:20:24.661766 containerd[1461]: time="2025-05-13T04:20:24.661741982Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 13 04:20:24.661836 containerd[1461]: time="2025-05-13T04:20:24.661821781Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 13 04:20:24.662015 containerd[1461]: time="2025-05-13T04:20:24.661997852Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 13 04:20:24.662136 containerd[1461]: time="2025-05-13T04:20:24.662119440Z" level=info msg="metadata content store policy set" policy=shared
May 13 04:20:24.670268 containerd[1461]: time="2025-05-13T04:20:24.670249618Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 13 04:20:24.670366 containerd[1461]: time="2025-05-13T04:20:24.670350757Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 13 04:20:24.673976 containerd[1461]: time="2025-05-13T04:20:24.672125847Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 13 04:20:24.673976 containerd[1461]: time="2025-05-13T04:20:24.672149251Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 13 04:20:24.673976 containerd[1461]: time="2025-05-13T04:20:24.672166904Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 13 04:20:24.673976 containerd[1461]: time="2025-05-13T04:20:24.672279825Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 13 04:20:24.673976 containerd[1461]: time="2025-05-13T04:20:24.672525957Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 13 04:20:24.673976 containerd[1461]: time="2025-05-13T04:20:24.672619262Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 13 04:20:24.673976 containerd[1461]: time="2025-05-13T04:20:24.672636454Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 13 04:20:24.673976 containerd[1461]: time="2025-05-13T04:20:24.672649669Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 13 04:20:24.673976 containerd[1461]: time="2025-05-13T04:20:24.672663766Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 13 04:20:24.673976 containerd[1461]: time="2025-05-13T04:20:24.672677281Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 13 04:20:24.673976 containerd[1461]: time="2025-05-13T04:20:24.672689744Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 13 04:20:24.673976 containerd[1461]: time="2025-05-13T04:20:24.672703440Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 13 04:20:24.673976 containerd[1461]: time="2025-05-13T04:20:24.672717597Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 13 04:20:24.673976 containerd[1461]: time="2025-05-13T04:20:24.672733025Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 13 04:20:24.674283 containerd[1461]: time="2025-05-13T04:20:24.672746982Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 13 04:20:24.674283 containerd[1461]: time="2025-05-13T04:20:24.672765587Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 13 04:20:24.674283 containerd[1461]: time="2025-05-13T04:20:24.672789211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 13 04:20:24.674283 containerd[1461]: time="2025-05-13T04:20:24.672804309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 13 04:20:24.674283 containerd[1461]: time="2025-05-13T04:20:24.672817103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 13 04:20:24.674283 containerd[1461]: time="2025-05-13T04:20:24.672830348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 13 04:20:24.674283 containerd[1461]: time="2025-05-13T04:20:24.672843242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 13 04:20:24.674283 containerd[1461]: time="2025-05-13T04:20:24.672857459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 13 04:20:24.674283 containerd[1461]: time="2025-05-13T04:20:24.672883648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 13 04:20:24.674283 containerd[1461]: time="2025-05-13T04:20:24.672898756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 13 04:20:24.674283 containerd[1461]: time="2025-05-13T04:20:24.672912502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 13 04:20:24.674283 containerd[1461]: time="2025-05-13T04:20:24.672928432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 13 04:20:24.674283 containerd[1461]: time="2025-05-13T04:20:24.672941206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 13 04:20:24.674283 containerd[1461]: time="2025-05-13T04:20:24.672953629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 13 04:20:24.674552 containerd[1461]: time="2025-05-13T04:20:24.672966002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 13 04:20:24.674552 containerd[1461]: time="2025-05-13T04:20:24.672983455Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 13 04:20:24.674552 containerd[1461]: time="2025-05-13T04:20:24.673011708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 13 04:20:24.674552 containerd[1461]: time="2025-05-13T04:20:24.673024903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 13 04:20:24.674552 containerd[1461]: time="2025-05-13T04:20:24.673037186Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 13 04:20:24.674552 containerd[1461]: time="2025-05-13T04:20:24.673073875Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 13 04:20:24.674552 containerd[1461]: time="2025-05-13T04:20:24.673091147Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 13 04:20:24.674552 containerd[1461]: time="2025-05-13T04:20:24.673173602Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 13 04:20:24.674552 containerd[1461]: time="2025-05-13T04:20:24.673189341Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 13 04:20:24.674552 containerd[1461]: time="2025-05-13T04:20:24.673200893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 13 04:20:24.674552 containerd[1461]: time="2025-05-13T04:20:24.673218286Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 13 04:20:24.674552 containerd[1461]: time="2025-05-13T04:20:24.673233895Z" level=info msg="NRI interface is disabled by configuration."
May 13 04:20:24.674552 containerd[1461]: time="2025-05-13T04:20:24.673244845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..."
type=io.containerd.grpc.v1 May 13 04:20:24.674808 containerd[1461]: time="2025-05-13T04:20:24.673521384Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 04:20:24.674808 containerd[1461]: time="2025-05-13T04:20:24.673591225Z" level=info msg="Connect containerd service" May 13 04:20:24.674808 containerd[1461]: time="2025-05-13T04:20:24.673619448Z" level=info msg="using legacy CRI server" May 13 04:20:24.674808 containerd[1461]: time="2025-05-13T04:20:24.673626481Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 04:20:24.674808 containerd[1461]: time="2025-05-13T04:20:24.673723253Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 04:20:24.676553 containerd[1461]: time="2025-05-13T04:20:24.676516672Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 04:20:24.676700 
containerd[1461]: time="2025-05-13T04:20:24.676662585Z" level=info msg="Start subscribing containerd event" May 13 04:20:24.676733 containerd[1461]: time="2025-05-13T04:20:24.676715855Z" level=info msg="Start recovering state" May 13 04:20:24.676801 containerd[1461]: time="2025-05-13T04:20:24.676782851Z" level=info msg="Start event monitor" May 13 04:20:24.676827 containerd[1461]: time="2025-05-13T04:20:24.676801406Z" level=info msg="Start snapshots syncer" May 13 04:20:24.676827 containerd[1461]: time="2025-05-13T04:20:24.676811925Z" level=info msg="Start cni network conf syncer for default" May 13 04:20:24.676827 containerd[1461]: time="2025-05-13T04:20:24.676820652Z" level=info msg="Start streaming server" May 13 04:20:24.677259 containerd[1461]: time="2025-05-13T04:20:24.677199041Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 04:20:24.677259 containerd[1461]: time="2025-05-13T04:20:24.677248735Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 04:20:24.678312 containerd[1461]: time="2025-05-13T04:20:24.678032264Z" level=info msg="containerd successfully booted in 0.051507s" May 13 04:20:24.678119 systemd[1]: Started containerd.service - containerd container runtime. May 13 04:20:24.860973 sshd_keygen[1458]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 04:20:24.884592 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 04:20:24.901409 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 04:20:24.906761 systemd[1]: Started sshd@0-172.24.4.57:22-172.24.4.1:51672.service - OpenSSH per-connection server daemon (172.24.4.1:51672). May 13 04:20:24.915212 systemd[1]: issuegen.service: Deactivated successfully. May 13 04:20:24.915514 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 04:20:24.923448 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 04:20:24.947361 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 04:20:24.957492 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 04:20:24.971478 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 13 04:20:24.972275 systemd[1]: Reached target getty.target - Login Prompts. May 13 04:20:25.009239 tar[1453]: linux-amd64/LICENSE May 13 04:20:25.009239 tar[1453]: linux-amd64/README.md May 13 04:20:25.022631 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 04:20:25.223354 systemd-networkd[1374]: eth0: Gained IPv6LL May 13 04:20:25.224387 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. May 13 04:20:25.230330 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 04:20:25.237665 systemd[1]: Reached target network-online.target - Network is Online. May 13 04:20:25.248601 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 04:20:25.263923 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 04:20:25.333827 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 04:20:25.818250 sshd[1515]: Accepted publickey for core from 172.24.4.1 port 51672 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y May 13 04:20:25.821937 sshd[1515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 04:20:25.848372 systemd-logind[1439]: New session 1 of user core. 
May 13 04:20:25.849757 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 04:20:25.862442 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 04:20:25.885550 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 04:20:25.898431 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 04:20:25.905960 (systemd)[1542]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 04:20:26.023862 systemd[1542]: Queued start job for default target default.target. May 13 04:20:26.028981 systemd[1542]: Created slice app.slice - User Application Slice. May 13 04:20:26.029075 systemd[1542]: Reached target paths.target - Paths. May 13 04:20:26.029193 systemd[1542]: Reached target timers.target - Timers. May 13 04:20:26.034213 systemd[1542]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 04:20:26.042987 systemd[1542]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 04:20:26.043712 systemd[1542]: Reached target sockets.target - Sockets. May 13 04:20:26.043730 systemd[1542]: Reached target basic.target - Basic System. May 13 04:20:26.043767 systemd[1542]: Reached target default.target - Main User Target. May 13 04:20:26.043793 systemd[1542]: Startup finished in 130ms. May 13 04:20:26.043896 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 04:20:26.057326 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 04:20:26.549848 systemd[1]: Started sshd@1-172.24.4.57:22-172.24.4.1:52602.service - OpenSSH per-connection server daemon (172.24.4.1:52602). May 13 04:20:27.186388 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 04:20:27.189937 (kubelet)[1560]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 04:20:28.227949 sshd[1553]: Accepted publickey for core from 172.24.4.1 port 52602 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y May 13 04:20:28.230677 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 04:20:28.243471 systemd-logind[1439]: New session 2 of user core. May 13 04:20:28.250837 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 04:20:28.672434 kubelet[1560]: E0513 04:20:28.672279 1560 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 04:20:28.676950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 04:20:28.677372 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 04:20:28.678198 systemd[1]: kubelet.service: Consumed 2.162s CPU time. May 13 04:20:28.963859 sshd[1553]: pam_unix(sshd:session): session closed for user core May 13 04:20:28.976502 systemd[1]: sshd@1-172.24.4.57:22-172.24.4.1:52602.service: Deactivated successfully. May 13 04:20:28.979507 systemd[1]: session-2.scope: Deactivated successfully. May 13 04:20:28.982582 systemd-logind[1439]: Session 2 logged out. Waiting for processes to exit. May 13 04:20:28.994816 systemd[1]: Started sshd@2-172.24.4.57:22-172.24.4.1:52618.service - OpenSSH per-connection server daemon (172.24.4.1:52618). 
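[Annotation] kubelet[1560] above exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is normally written later by kubeadm init/join, so the unit will crash-loop until then (the restart counter climbs through the rest of this log). A hedged sketch of the same preflight check, reproducing the error wording from the journal:

#!/usr/bin/env python3
"""Reproduce kubelet's failing preflight: the config file named in the
error above (/var/lib/kubelet/config.yaml) must exist and be readable."""
import sys
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # path from the error above

def check_kubelet_config(path: Path = KUBELET_CONFIG) -> None:
    if not path.is_file():
        # Matches the journal error: "open ...: no such file or directory"
        sys.exit(f"failed to load kubelet config file, path: {path}: "
                 "no such file or directory")
    text = path.read_text()
    if not text.strip():
        sys.exit(f"kubelet config file {path} is empty")
    print(f"{path}: {len(text)} bytes, looks loadable")

if __name__ == "__main__":
    check_kubelet_config()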
May 13 04:20:29.004552 systemd-logind[1439]: Removed session 2. May 13 04:20:30.012808 login[1522]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 13 04:20:30.028223 systemd-logind[1439]: New session 3 of user core. May 13 04:20:30.033214 login[1523]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 13 04:20:30.039592 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 04:20:30.056393 systemd-logind[1439]: New session 4 of user core. May 13 04:20:30.063459 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 04:20:30.257075 sshd[1574]: Accepted publickey for core from 172.24.4.1 port 52618 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y May 13 04:20:30.259999 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 04:20:30.269562 systemd-logind[1439]: New session 5 of user core. May 13 04:20:30.280536 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 04:20:30.909748 sshd[1574]: pam_unix(sshd:session): session closed for user core May 13 04:20:30.916852 systemd[1]: sshd@2-172.24.4.57:22-172.24.4.1:52618.service: Deactivated successfully. May 13 04:20:30.921433 systemd[1]: session-5.scope: Deactivated successfully. May 13 04:20:30.923667 systemd-logind[1439]: Session 5 logged out. Waiting for processes to exit. May 13 04:20:30.926935 systemd-logind[1439]: Removed session 5. May 13 04:20:31.157953 coreos-metadata[1428]: May 13 04:20:31.157 WARN failed to locate config-drive, using the metadata service API instead May 13 04:20:31.205713 coreos-metadata[1428]: May 13 04:20:31.205 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 May 13 04:20:31.518554 coreos-metadata[1428]: May 13 04:20:31.518 INFO Fetch successful May 13 04:20:31.518554 coreos-metadata[1428]: May 13 04:20:31.518 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 13 04:20:31.532526 coreos-metadata[1428]: May 13 04:20:31.532 INFO Fetch successful May 13 04:20:31.532526 coreos-metadata[1428]: May 13 04:20:31.532 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 May 13 04:20:31.546467 coreos-metadata[1428]: May 13 04:20:31.546 INFO Fetch successful May 13 04:20:31.546467 coreos-metadata[1428]: May 13 04:20:31.546 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 May 13 04:20:31.560642 coreos-metadata[1428]: May 13 04:20:31.560 INFO Fetch successful May 13 04:20:31.560642 coreos-metadata[1428]: May 13 04:20:31.560 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 May 13 04:20:31.574570 coreos-metadata[1428]: May 13 04:20:31.574 INFO Fetch successful May 13 04:20:31.574570 coreos-metadata[1428]: May 13 04:20:31.574 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 May 13 04:20:31.583709 coreos-metadata[1490]: May 13 04:20:31.583 WARN failed to locate config-drive, using the metadata service API instead May 13 04:20:31.589368 coreos-metadata[1428]: May 13 04:20:31.589 INFO Fetch successful May 13 04:20:31.628454 coreos-metadata[1490]: May 13 04:20:31.628 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 May 13 04:20:31.635959 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 13 04:20:31.637271 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
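[Annotation] coreos-metadata above fails to locate a config drive and falls back to the link-local metadata API, fetching each attribute with numbered attempts. A small urllib sketch against the same endpoints seen in the log; the retry count, backoff, and timeout here are illustrative choices, not values taken from the agent.

#!/usr/bin/env python3
"""Fetch instance metadata the way the journal shows coreos-metadata doing
it: plain GETs against the link-local metadata service, with retries."""
import time
import urllib.request

BASE = "http://169.254.169.254"
PATHS = [  # endpoints taken verbatim from the journal above
    "/openstack/2012-08-10/meta_data.json",
    "/latest/meta-data/hostname",
    "/latest/meta-data/instance-id",
    "/latest/meta-data/instance-type",
    "/latest/meta-data/local-ipv4",
    "/latest/meta-data/public-ipv4",
]

def fetch(path: str, attempts: int = 3, timeout: float = 5.0) -> bytes:
    for attempt in range(1, attempts + 1):
        print(f"Fetching {BASE}{path}: Attempt #{attempt}")
        try:
            with urllib.request.urlopen(BASE + path, timeout=timeout) as resp:
                print("Fetch successful")
                return resp.read()
        except OSError as err:
            print(f"attempt {attempt} failed: {err}")
            time.sleep(attempt)  # simple linear backoff, an illustrative choice
    raise RuntimeError(f"giving up on {path}")

if __name__ == "__main__":
    for p in PATHS:
        fetch(p)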
May 13 04:20:31.645022 coreos-metadata[1490]: May 13 04:20:31.644 INFO Fetch successful May 13 04:20:31.645487 coreos-metadata[1490]: May 13 04:20:31.645 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 May 13 04:20:31.660396 coreos-metadata[1490]: May 13 04:20:31.660 INFO Fetch successful May 13 04:20:31.666564 unknown[1490]: wrote ssh authorized keys file for user: core May 13 04:20:31.708954 update-ssh-keys[1614]: Updated "/home/core/.ssh/authorized_keys" May 13 04:20:31.709908 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 13 04:20:31.712645 systemd[1]: Finished sshkeys.service. May 13 04:20:31.718530 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 04:20:31.719090 systemd[1]: Startup finished in 1.155s (kernel) + 15.451s (initrd) + 10.535s (userspace) = 27.141s. May 13 04:20:38.928079 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 04:20:38.941546 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 04:20:39.259395 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 04:20:39.262718 (kubelet)[1625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 04:20:39.346200 kubelet[1625]: E0513 04:20:39.346087 1625 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 04:20:39.353580 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 04:20:39.353889 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 04:20:40.935653 systemd[1]: Started sshd@3-172.24.4.57:22-172.24.4.1:46106.service - OpenSSH per-connection server daemon (172.24.4.1:46106). May 13 04:20:42.822144 sshd[1633]: Accepted publickey for core from 172.24.4.1 port 46106 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y May 13 04:20:42.825025 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 04:20:42.836039 systemd-logind[1439]: New session 6 of user core. May 13 04:20:42.845413 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 04:20:43.688876 sshd[1633]: pam_unix(sshd:session): session closed for user core May 13 04:20:43.701347 systemd[1]: sshd@3-172.24.4.57:22-172.24.4.1:46106.service: Deactivated successfully. May 13 04:20:43.704306 systemd[1]: session-6.scope: Deactivated successfully. May 13 04:20:43.705981 systemd-logind[1439]: Session 6 logged out. Waiting for processes to exit. May 13 04:20:43.713714 systemd[1]: Started sshd@4-172.24.4.57:22-172.24.4.1:56782.service - OpenSSH per-connection server daemon (172.24.4.1:56782). May 13 04:20:43.716538 systemd-logind[1439]: Removed session 6. May 13 04:20:45.064452 sshd[1640]: Accepted publickey for core from 172.24.4.1 port 56782 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y May 13 04:20:45.067058 sshd[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 04:20:45.076895 systemd-logind[1439]: New session 7 of user core. May 13 04:20:45.088475 systemd[1]: Started session-7.scope - Session 7 of User core. 
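[Annotation] The "Startup finished" record above breaks total boot time into kernel, initrd, and userspace phases, and the figures do sum to the stated total (1.155 + 15.451 + 10.535 = 27.141). A quick parser that verifies that arithmetic for any such journal line:

#!/usr/bin/env python3
"""Parse systemd's "Startup finished" line and confirm the phase times
add up to the reported total."""
import re

LINE = ("Startup finished in 1.155s (kernel) + 15.451s (initrd) "
        "+ 10.535s (userspace) = 27.141s")  # copied from the journal above

def check(line: str) -> None:
    phases = {name: float(sec)
              for sec, name in re.findall(r"([\d.]+)s \((\w+)\)", line)}
    total = float(re.search(r"= ([\d.]+)s", line).group(1))
    delta = abs(sum(phases.values()) - total)
    print(phases, "total:", total)
    assert delta < 0.001, f"phases do not sum to total (off by {delta:.3f}s)"
    print("OK: phases match the reported total")

if __name__ == "__main__":
    check(LINE)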
May 13 04:20:45.707030 sshd[1640]: pam_unix(sshd:session): session closed for user core May 13 04:20:45.716600 systemd[1]: sshd@4-172.24.4.57:22-172.24.4.1:56782.service: Deactivated successfully. May 13 04:20:45.719757 systemd[1]: session-7.scope: Deactivated successfully. May 13 04:20:45.721733 systemd-logind[1439]: Session 7 logged out. Waiting for processes to exit. May 13 04:20:45.730686 systemd[1]: Started sshd@5-172.24.4.57:22-172.24.4.1:56792.service - OpenSSH per-connection server daemon (172.24.4.1:56792). May 13 04:20:45.734232 systemd-logind[1439]: Removed session 7. May 13 04:20:46.996402 sshd[1647]: Accepted publickey for core from 172.24.4.1 port 56792 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y May 13 04:20:46.998931 sshd[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 04:20:47.009195 systemd-logind[1439]: New session 8 of user core. May 13 04:20:47.014391 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 04:20:47.637166 sshd[1647]: pam_unix(sshd:session): session closed for user core May 13 04:20:47.647176 systemd[1]: sshd@5-172.24.4.57:22-172.24.4.1:56792.service: Deactivated successfully. May 13 04:20:47.649770 systemd[1]: session-8.scope: Deactivated successfully. May 13 04:20:47.651258 systemd-logind[1439]: Session 8 logged out. Waiting for processes to exit. May 13 04:20:47.660690 systemd[1]: Started sshd@6-172.24.4.57:22-172.24.4.1:56806.service - OpenSSH per-connection server daemon (172.24.4.1:56806). May 13 04:20:47.662882 systemd-logind[1439]: Removed session 8. May 13 04:20:49.066049 sshd[1654]: Accepted publickey for core from 172.24.4.1 port 56806 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y May 13 04:20:49.068630 sshd[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 04:20:49.077450 systemd-logind[1439]: New session 9 of user core. May 13 04:20:49.086363 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 04:20:49.604706 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 04:20:49.624655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 04:20:49.823732 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 04:20:49.825244 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 04:20:49.850530 sudo[1660]: pam_unix(sudo:session): session closed for user root May 13 04:20:50.005396 sshd[1654]: pam_unix(sshd:session): session closed for user core May 13 04:20:50.019875 systemd[1]: sshd@6-172.24.4.57:22-172.24.4.1:56806.service: Deactivated successfully. May 13 04:20:50.029532 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 04:20:50.030851 systemd[1]: session-9.scope: Deactivated successfully. May 13 04:20:50.035731 systemd-logind[1439]: Session 9 logged out. Waiting for processes to exit. May 13 04:20:50.045785 (kubelet)[1668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 04:20:50.045825 systemd[1]: Started sshd@7-172.24.4.57:22-172.24.4.1:56808.service - OpenSSH per-connection server daemon (172.24.4.1:56808). May 13 04:20:50.050811 systemd-logind[1439]: Removed session 9. 
May 13 04:20:50.103576 kubelet[1668]: E0513 04:20:50.103509 1668 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 04:20:50.106190 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 04:20:50.106382 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 04:20:51.566198 sshd[1671]: Accepted publickey for core from 172.24.4.1 port 56808 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y May 13 04:20:51.569426 sshd[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 04:20:51.581224 systemd-logind[1439]: New session 10 of user core. May 13 04:20:51.591494 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 04:20:51.960052 sudo[1681]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 04:20:51.962024 sudo[1681]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 04:20:51.969800 sudo[1681]: pam_unix(sudo:session): session closed for user root May 13 04:20:51.981490 sudo[1680]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 13 04:20:51.982407 sudo[1680]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 04:20:52.021506 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 13 04:20:52.024662 auditctl[1684]: No rules May 13 04:20:52.025407 systemd[1]: audit-rules.service: Deactivated successfully. May 13 04:20:52.026021 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 13 04:20:52.036174 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 13 04:20:52.090840 augenrules[1702]: No rules May 13 04:20:52.091929 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 13 04:20:52.093849 sudo[1680]: pam_unix(sudo:session): session closed for user root May 13 04:20:52.309449 sshd[1671]: pam_unix(sshd:session): session closed for user core May 13 04:20:52.322845 systemd[1]: sshd@7-172.24.4.57:22-172.24.4.1:56808.service: Deactivated successfully. May 13 04:20:52.325589 systemd[1]: session-10.scope: Deactivated successfully. May 13 04:20:52.328544 systemd-logind[1439]: Session 10 logged out. Waiting for processes to exit. May 13 04:20:52.340718 systemd[1]: Started sshd@8-172.24.4.57:22-172.24.4.1:56816.service - OpenSSH per-connection server daemon (172.24.4.1:56816). May 13 04:20:52.344436 systemd-logind[1439]: Removed session 10. May 13 04:20:53.603568 sshd[1710]: Accepted publickey for core from 172.24.4.1 port 56816 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y May 13 04:20:53.606091 sshd[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 04:20:53.617210 systemd-logind[1439]: New session 11 of user core. May 13 04:20:53.623423 systemd[1]: Started session-11.scope - Session 11 of User core. 
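[Annotation] The sudo records above and below all share one shape: caller, working directory, target user, and command. A small regex sketch for extracting that audit trail from journal text; the sample lines are copied from this log, and the pattern is an assumption about the fixed pam/sudo log format rather than a parser shipped with either tool.

#!/usr/bin/env python3
"""Extract who ran what from sudo journal lines of the shape seen above:
  sudo[PID]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/..."""
import re

SUDO_RE = re.compile(
    r"sudo\[(?P<pid>\d+)\]:\s+(?P<caller>\S+)\s*:\s*"
    r"PWD=(?P<pwd>\S+)\s*;\s*USER=(?P<user>\S+)\s*;\s*COMMAND=(?P<cmd>.+)"
)

SAMPLE = [  # taken from the journal above
    "sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1",
    "sudo[1680]: core : PWD=/home/core ; USER=root ; "
    "COMMAND=/usr/bin/systemctl restart audit-rules",
]

for line in SAMPLE:
    m = SUDO_RE.search(line)
    if m:
        print(f"{m['caller']} ran as {m['user']}: {m['cmd']}")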
May 13 04:20:54.028302 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 04:20:54.028922 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 04:20:54.667321 systemd[1]: Starting docker.service - Docker Application Container Engine... May 13 04:20:54.680677 (dockerd)[1728]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 04:20:55.311182 dockerd[1728]: time="2025-05-13T04:20:55.311068285Z" level=info msg="Starting up" May 13 04:20:55.478540 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4018141796-merged.mount: Deactivated successfully. May 13 04:20:55.525898 dockerd[1728]: time="2025-05-13T04:20:55.525656185Z" level=info msg="Loading containers: start." May 13 04:20:55.682171 kernel: Initializing XFRM netlink socket May 13 04:20:56.701918 systemd-timesyncd[1377]: Contacted time server 69.164.213.136:123 (2.flatcar.pool.ntp.org). May 13 04:20:56.701985 systemd-timesyncd[1377]: Initial clock synchronization to Tue 2025-05-13 04:20:56.701718 UTC. May 13 04:20:56.702412 systemd-resolved[1376]: Clock change detected. Flushing caches. May 13 04:20:56.716564 systemd-networkd[1374]: docker0: Link UP May 13 04:20:56.740457 dockerd[1728]: time="2025-05-13T04:20:56.740395461Z" level=info msg="Loading containers: done." May 13 04:20:56.769105 dockerd[1728]: time="2025-05-13T04:20:56.769055122Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 04:20:56.769603 dockerd[1728]: time="2025-05-13T04:20:56.769223027Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 13 04:20:56.769603 dockerd[1728]: time="2025-05-13T04:20:56.769313096Z" level=info msg="Daemon has completed initialization" May 13 04:20:56.824874 dockerd[1728]: time="2025-05-13T04:20:56.824506421Z" level=info msg="API listen on /run/docker.sock" May 13 04:20:56.825369 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 04:20:58.482818 containerd[1461]: time="2025-05-13T04:20:58.482759865Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 13 04:20:59.269831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3382750302.mount: Deactivated successfully. 
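[Annotation] dockerd above reports "API listen on /run/docker.sock"; the Engine API is plain HTTP spoken over that unix socket, so a stdlib-only probe needs nothing beyond a socket. A sketch, assuming the daemon answers HTTP/1.0 on the documented /version endpoint (the version it should report, 26.1.0, appears in the daemon banner above):

#!/usr/bin/env python3
"""Talk HTTP to the Docker Engine API over the unix socket the log says
it listens on (/run/docker.sock)."""
import socket

def docker_get(path: str, sock_path: str = "/run/docker.sock") -> str:
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(sock_path)
    # HTTP/1.0 so the daemon closes the connection after one response
    s.sendall(f"GET {path} HTTP/1.0\r\nHost: docker\r\n\r\n".encode())
    chunks = []
    while chunk := s.recv(4096):
        chunks.append(chunk)
    s.close()
    return b"".join(chunks).decode()

if __name__ == "__main__":
    print(docker_get("/version"))  # daemon version info as JSON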
May 13 04:21:00.900445 containerd[1461]: time="2025-05-13T04:21:00.900377426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:21:00.901918 containerd[1461]: time="2025-05-13T04:21:00.901625647Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960995" May 13 04:21:00.903253 containerd[1461]: time="2025-05-13T04:21:00.903196102Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:21:00.906704 containerd[1461]: time="2025-05-13T04:21:00.906627538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:21:00.908216 containerd[1461]: time="2025-05-13T04:21:00.907790609Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 2.424456086s" May 13 04:21:00.908216 containerd[1461]: time="2025-05-13T04:21:00.907825765Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" May 13 04:21:00.909832 containerd[1461]: time="2025-05-13T04:21:00.909805288Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 13 04:21:01.082197 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 13 04:21:01.094752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 04:21:01.230330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 04:21:01.233732 (kubelet)[1930]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 04:21:01.277848 kubelet[1930]: E0513 04:21:01.277781 1930 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 04:21:01.282348 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 04:21:01.282856 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
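[Annotation] containerd logs both the bytes read and the wall time for each pull, so registry throughput is one division away. Using the kube-apiserver figures above (27,960,995 bytes in 2.424456086 s):

#!/usr/bin/env python3
"""Pull throughput from the containerd log figures above."""
bytes_read = 27_960_995    # "active requests=0, bytes read=27960995"
duration_s = 2.424456086   # "in 2.424456086s"

mib_per_s = bytes_read / duration_s / (1024 * 1024)
print(f"kube-apiserver:v1.31.8 pulled at {mib_per_s:.1f} MiB/s")
# ~11.0 MiB/s; the same arithmetic applies to the other pulls in this log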
May 13 04:21:03.106709 containerd[1461]: time="2025-05-13T04:21:03.106634150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:21:03.108394 containerd[1461]: time="2025-05-13T04:21:03.108110388Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713784" May 13 04:21:03.109678 containerd[1461]: time="2025-05-13T04:21:03.109613578Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:21:03.113276 containerd[1461]: time="2025-05-13T04:21:03.113231603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:21:03.114590 containerd[1461]: time="2025-05-13T04:21:03.114452623Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 2.204613732s" May 13 04:21:03.114590 containerd[1461]: time="2025-05-13T04:21:03.114497828Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" May 13 04:21:03.115481 containerd[1461]: time="2025-05-13T04:21:03.115124153Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 13 04:21:04.811059 containerd[1461]: time="2025-05-13T04:21:04.810997022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:21:04.812377 containerd[1461]: time="2025-05-13T04:21:04.812227289Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780394" May 13 04:21:04.813509 containerd[1461]: time="2025-05-13T04:21:04.813450083Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:21:04.816899 containerd[1461]: time="2025-05-13T04:21:04.816857223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:21:04.818148 containerd[1461]: time="2025-05-13T04:21:04.817978806Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 1.702800542s" May 13 04:21:04.818148 containerd[1461]: time="2025-05-13T04:21:04.818031024Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" May 13 04:21:04.818415 
containerd[1461]: time="2025-05-13T04:21:04.818393153Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 13 04:21:06.173859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4238876922.mount: Deactivated successfully. May 13 04:21:06.717310 containerd[1461]: time="2025-05-13T04:21:06.717267802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:21:06.718451 containerd[1461]: time="2025-05-13T04:21:06.718413230Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354633" May 13 04:21:06.719461 containerd[1461]: time="2025-05-13T04:21:06.719419808Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:21:06.721801 containerd[1461]: time="2025-05-13T04:21:06.721778502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:21:06.722789 containerd[1461]: time="2025-05-13T04:21:06.722455071Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 1.903158704s" May 13 04:21:06.722789 containerd[1461]: time="2025-05-13T04:21:06.722497981Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 13 04:21:06.723015 containerd[1461]: time="2025-05-13T04:21:06.722997238Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 04:21:07.358329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1628320702.mount: Deactivated successfully. 
May 13 04:21:08.546591 containerd[1461]: time="2025-05-13T04:21:08.546522749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:21:08.548354 containerd[1461]: time="2025-05-13T04:21:08.548091592Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" May 13 04:21:08.549668 containerd[1461]: time="2025-05-13T04:21:08.549607325Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:21:08.553051 containerd[1461]: time="2025-05-13T04:21:08.553011359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:21:08.554381 containerd[1461]: time="2025-05-13T04:21:08.554259981Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.831150101s" May 13 04:21:08.554381 containerd[1461]: time="2025-05-13T04:21:08.554293503Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 13 04:21:08.554909 containerd[1461]: time="2025-05-13T04:21:08.554883310Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 13 04:21:09.521127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2127441360.mount: Deactivated successfully. 
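[Annotation] containerd prints durations in Go's format, which switches units by magnitude: seconds for the multi-second pulls above, milliseconds for the sub-second pause pull just below, and compound forms like the "4h0m0s" StreamIdleTimeout in the CRI config dump earlier. A small converter that normalizes those strings to seconds:

#!/usr/bin/env python3
"""Normalize Go-style duration strings from the containerd log to seconds."""
import re

_UNITS = {"ns": 1e-9, "us": 1e-6, "µs": 1e-6, "ms": 1e-3,
          "s": 1.0, "m": 60.0, "h": 3600.0}

def go_duration_to_seconds(text: str) -> float:
    # Go durations are a sequence of number+unit tokens, e.g. "4h0m0s"
    total = 0.0
    for value, unit in re.findall(r"([\d.]+)(ns|us|µs|ms|s|m|h)", text):
        total += float(value) * _UNITS[unit]
    return total

assert abs(go_duration_to_seconds("1.831150101s") - 1.831150101) < 1e-9
assert abs(go_duration_to_seconds("989.507501ms") - 0.989507501) < 1e-9
assert go_duration_to_seconds("4h0m0s") == 4 * 3600  # StreamIdleTimeout above
print("all duration samples from this log parsed")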
May 13 04:21:09.534102 containerd[1461]: time="2025-05-13T04:21:09.533754072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:21:09.535994 containerd[1461]: time="2025-05-13T04:21:09.535813774Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" May 13 04:21:09.537058 containerd[1461]: time="2025-05-13T04:21:09.536884964Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:21:09.542298 containerd[1461]: time="2025-05-13T04:21:09.542165167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:21:09.545937 containerd[1461]: time="2025-05-13T04:21:09.544508031Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 989.507501ms" May 13 04:21:09.545937 containerd[1461]: time="2025-05-13T04:21:09.544600885Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 13 04:21:09.546203 containerd[1461]: time="2025-05-13T04:21:09.545949234Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 13 04:21:10.191233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2026047211.mount: Deactivated successfully. May 13 04:21:10.522743 update_engine[1442]: I20250513 04:21:10.522049 1442 update_attempter.cc:509] Updating boot flags... May 13 04:21:10.581010 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2030) May 13 04:21:10.638009 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2025) May 13 04:21:11.332368 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 13 04:21:11.342102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 04:21:11.872898 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 04:21:11.884155 (kubelet)[2073]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 04:21:11.947235 kubelet[2073]: E0513 04:21:11.947193 2073 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 04:21:11.949066 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 04:21:11.949212 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
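[Annotation] This is the fourth kubelet crash-loop iteration; the journal shows systemd rescheduling the unit roughly every ten seconds (counters 1 through 4 at 04:20:38, 04:20:49, 04:21:01, 04:21:11), consistent with a Restart= policy with a delay near 10 s, though the unit's actual RestartSec is not visible in this log. A sketch that computes those gaps from the timestamps:

#!/usr/bin/env python3
"""Intervals between the kubelet restart attempts recorded above."""
from datetime import datetime

# "Scheduled restart job, restart counter is at N" timestamps from the journal
RESTARTS = ["May 13 04:20:38.928079", "May 13 04:20:49.604706",
            "May 13 04:21:01.082197", "May 13 04:21:11.332368"]

def parse(ts: str) -> datetime:
    # journal short timestamps omit the year; fine for computing deltas
    # (note: timesyncd stepped the clock at 04:20:56, so gaps straddling
    # that point are slightly distorted)
    return datetime.strptime(ts, "%b %d %H:%M:%S.%f")

times = [parse(t) for t in RESTARTS]
for a, b in zip(times, times[1:]):
    print(f"{(b - a).total_seconds():.1f}s between restart attempts")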
May 13 04:21:13.223732 containerd[1461]: time="2025-05-13T04:21:13.221790433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:21:13.227600 containerd[1461]: time="2025-05-13T04:21:13.223223300Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021" May 13 04:21:13.229067 containerd[1461]: time="2025-05-13T04:21:13.228904094Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:21:13.273546 containerd[1461]: time="2025-05-13T04:21:13.273403493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:21:13.277295 containerd[1461]: time="2025-05-13T04:21:13.277231362Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.731159418s" May 13 04:21:13.278124 containerd[1461]: time="2025-05-13T04:21:13.277456965Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 13 04:21:17.107860 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 04:21:17.115302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 04:21:17.152190 systemd[1]: Reloading requested from client PID 2114 ('systemctl') (unit session-11.scope)... May 13 04:21:17.152206 systemd[1]: Reloading... May 13 04:21:17.259003 zram_generator::config[2159]: No configuration found. May 13 04:21:17.388894 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 04:21:17.474394 systemd[1]: Reloading finished in 321 ms. May 13 04:21:17.525403 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 13 04:21:17.525476 systemd[1]: kubelet.service: Failed with result 'signal'. May 13 04:21:17.525741 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 04:21:17.530372 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 04:21:17.626173 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 04:21:17.640285 (kubelet)[2220]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 04:21:17.716222 kubelet[2220]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 04:21:17.716222 kubelet[2220]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
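[Annotation] The deprecation warnings from kubelet[2220] above say these flags belong in the file passed to --config, which is exactly the /var/lib/kubelet/config.yaml that has been missing all through the crash loop; on kubeadm systems it is generated during init/join. For illustration only, a sketch that writes a minimal KubeletConfiguration: the kind/apiVersion are the upstream ones, but every field value below is an assumed example, not this node's real configuration.

#!/usr/bin/env python3
"""Write an illustrative /var/lib/kubelet/config.yaml. Every field value
below is a made-up example, NOT this node's real config."""
from pathlib import Path

MINIMAL_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd   # matches "CgroupDriver":"systemd" in the dump below
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
"""

def write_config(path: str = "/var/lib/kubelet/config.yaml") -> None:
    target = Path(path)
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(MINIMAL_CONFIG)
    print(f"wrote {target} ({len(MINIMAL_CONFIG)} bytes)")

if __name__ == "__main__":
    write_config()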
May 13 04:21:17.716222 kubelet[2220]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 04:21:17.859034 kubelet[2220]: I0513 04:21:17.858859 2220 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 04:21:18.557744 kubelet[2220]: I0513 04:21:18.557682 2220 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 13 04:21:18.557744 kubelet[2220]: I0513 04:21:18.557717 2220 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 04:21:18.558225 kubelet[2220]: I0513 04:21:18.558022 2220 server.go:929] "Client rotation is on, will bootstrap in background" May 13 04:21:18.586124 kubelet[2220]: I0513 04:21:18.585770 2220 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 04:21:18.587855 kubelet[2220]: E0513 04:21:18.587785 2220 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.57:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.57:6443: connect: connection refused" logger="UnhandledError" May 13 04:21:18.601683 kubelet[2220]: E0513 04:21:18.601456 2220 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 13 04:21:18.601683 kubelet[2220]: I0513 04:21:18.601539 2220 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 13 04:21:18.612315 kubelet[2220]: I0513 04:21:18.612281 2220 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 04:21:18.612495 kubelet[2220]: I0513 04:21:18.612450 2220 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 13 04:21:18.612759 kubelet[2220]: I0513 04:21:18.612709 2220 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 04:21:18.613184 kubelet[2220]: I0513 04:21:18.612765 2220 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-3bdfb8ea63.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 04:21:18.613295 kubelet[2220]: I0513 04:21:18.613199 2220 topology_manager.go:138] "Creating topology manager with none policy" May 13 04:21:18.613295 kubelet[2220]: I0513 04:21:18.613222 2220 container_manager_linux.go:300] "Creating device plugin manager" May 13 04:21:18.613419 kubelet[2220]: I0513 04:21:18.613384 2220 state_mem.go:36] "Initialized new in-memory state store" May 13 04:21:18.619552 kubelet[2220]: I0513 04:21:18.619509 2220 kubelet.go:408] "Attempting to sync node with API server" May 13 04:21:18.619616 kubelet[2220]: I0513 04:21:18.619560 2220 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 04:21:18.619616 kubelet[2220]: I0513 04:21:18.619612 2220 kubelet.go:314] "Adding apiserver pod source" May 13 04:21:18.619672 kubelet[2220]: I0513 04:21:18.619641 2220 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 04:21:18.632432 kubelet[2220]: I0513 04:21:18.632264 2220 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 13 04:21:18.635517 kubelet[2220]: I0513 04:21:18.635485 2220 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 04:21:18.637011 kubelet[2220]: W0513 04:21:18.636820 2220 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 13 04:21:18.637909 kubelet[2220]: I0513 04:21:18.637884 2220 server.go:1269] "Started kubelet" May 13 04:21:18.638503 kubelet[2220]: W0513 04:21:18.638248 2220 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-3bdfb8ea63.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.57:6443: connect: connection refused May 13 04:21:18.638503 kubelet[2220]: E0513 04:21:18.638337 2220 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-3bdfb8ea63.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.57:6443: connect: connection refused" logger="UnhandledError" May 13 04:21:18.642248 kubelet[2220]: W0513 04:21:18.642175 2220 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.57:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.57:6443: connect: connection refused May 13 04:21:18.642375 kubelet[2220]: E0513 04:21:18.642275 2220 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.57:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.57:6443: connect: connection refused" logger="UnhandledError" May 13 04:21:18.642432 kubelet[2220]: I0513 04:21:18.642363 2220 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 04:21:18.646113 kubelet[2220]: I0513 04:21:18.646062 2220 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 04:21:18.654527 kubelet[2220]: I0513 04:21:18.654150 2220 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 04:21:18.654681 kubelet[2220]: I0513 04:21:18.654581 2220 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 04:21:18.658028 kubelet[2220]: I0513 04:21:18.657425 2220 server.go:460] "Adding debug handlers to kubelet server" May 13 04:21:18.659998 kubelet[2220]: I0513 04:21:18.659330 2220 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 04:21:18.659998 kubelet[2220]: E0513 04:21:18.654911 2220 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.57:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.57:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-n-3bdfb8ea63.novalocal.183efb5c24b6bab6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-n-3bdfb8ea63.novalocal,UID:ci-4081-3-3-n-3bdfb8ea63.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-3bdfb8ea63.novalocal,},FirstTimestamp:2025-05-13 04:21:18.63784927 +0000 UTC m=+0.990926213,LastTimestamp:2025-05-13 04:21:18.63784927 +0000 UTC m=+0.990926213,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-3bdfb8ea63.novalocal,}" May 13 
04:21:18.663630 kubelet[2220]: I0513 04:21:18.663571 2220 volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 04:21:18.665396 kubelet[2220]: E0513 04:21:18.663906 2220 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-3bdfb8ea63.novalocal\" not found" May 13 04:21:18.665396 kubelet[2220]: I0513 04:21:18.664754 2220 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 04:21:18.665396 kubelet[2220]: I0513 04:21:18.664840 2220 reconciler.go:26] "Reconciler: start to sync state" May 13 04:21:18.665598 kubelet[2220]: E0513 04:21:18.665390 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-3bdfb8ea63.novalocal?timeout=10s\": dial tcp 172.24.4.57:6443: connect: connection refused" interval="200ms" May 13 04:21:18.667808 kubelet[2220]: W0513 04:21:18.667727 2220 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.57:6443: connect: connection refused May 13 04:21:18.667904 kubelet[2220]: E0513 04:21:18.667821 2220 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.57:6443: connect: connection refused" logger="UnhandledError" May 13 04:21:18.668614 kubelet[2220]: I0513 04:21:18.668553 2220 factory.go:221] Registration of the containerd container factory successfully May 13 04:21:18.668614 kubelet[2220]: I0513 04:21:18.668588 2220 factory.go:221] Registration of the systemd container factory successfully May 13 04:21:18.668735 kubelet[2220]: I0513 04:21:18.668690 2220 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 04:21:18.683440 kubelet[2220]: I0513 04:21:18.683192 2220 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 04:21:18.684981 kubelet[2220]: I0513 04:21:18.684676 2220 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 04:21:18.684981 kubelet[2220]: I0513 04:21:18.684702 2220 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 04:21:18.684981 kubelet[2220]: I0513 04:21:18.684723 2220 kubelet.go:2321] "Starting kubelet main sync loop" May 13 04:21:18.684981 kubelet[2220]: E0513 04:21:18.684760 2220 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 04:21:18.692773 kubelet[2220]: W0513 04:21:18.692730 2220 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.57:6443: connect: connection refused May 13 04:21:18.692893 kubelet[2220]: E0513 04:21:18.692874 2220 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.57:6443: connect: connection refused" logger="UnhandledError" May 13 04:21:18.711435 kubelet[2220]: I0513 04:21:18.711411 2220 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 04:21:18.711435 kubelet[2220]: I0513 04:21:18.711432 2220 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 04:21:18.711565 kubelet[2220]: I0513 04:21:18.711448 2220 state_mem.go:36] "Initialized new in-memory state store" May 13 04:21:18.715339 kubelet[2220]: I0513 04:21:18.715313 2220 policy_none.go:49] "None policy: Start" May 13 04:21:18.716142 kubelet[2220]: I0513 04:21:18.715830 2220 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 04:21:18.716142 kubelet[2220]: I0513 04:21:18.715854 2220 state_mem.go:35] "Initializing new in-memory state store" May 13 04:21:18.724223 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 04:21:18.741689 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 04:21:18.744997 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 04:21:18.754726 kubelet[2220]: I0513 04:21:18.754703 2220 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 04:21:18.755338 kubelet[2220]: I0513 04:21:18.754863 2220 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 04:21:18.755338 kubelet[2220]: I0513 04:21:18.754874 2220 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 04:21:18.755338 kubelet[2220]: I0513 04:21:18.755165 2220 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 04:21:18.756724 kubelet[2220]: E0513 04:21:18.756675 2220 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-n-3bdfb8ea63.novalocal\" not found" May 13 04:21:18.807768 systemd[1]: Created slice kubepods-burstable-pod5d48212dffb5d1360e953f9cdc052ee6.slice - libcontainer container kubepods-burstable-pod5d48212dffb5d1360e953f9cdc052ee6.slice. May 13 04:21:18.829539 systemd[1]: Created slice kubepods-burstable-pod6ec3addbe550a4054e2f57d990a17e7f.slice - libcontainer container kubepods-burstable-pod6ec3addbe550a4054e2f57d990a17e7f.slice. 
May 13 04:21:18.853429 systemd[1]: Created slice kubepods-burstable-pod24a0dcbf983b8c26ed5b80d35f3fae6e.slice - libcontainer container kubepods-burstable-pod24a0dcbf983b8c26ed5b80d35f3fae6e.slice. May 13 04:21:18.858356 kubelet[2220]: I0513 04:21:18.858220 2220 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:18.859679 kubelet[2220]: E0513 04:21:18.858932 2220 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.57:6443/api/v1/nodes\": dial tcp 172.24.4.57:6443: connect: connection refused" node="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:18.866805 kubelet[2220]: E0513 04:21:18.866744 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-3bdfb8ea63.novalocal?timeout=10s\": dial tcp 172.24.4.57:6443: connect: connection refused" interval="400ms" May 13 04:21:18.966647 kubelet[2220]: I0513 04:21:18.966500 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ec3addbe550a4054e2f57d990a17e7f-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-3bdfb8ea63.novalocal\" (UID: \"6ec3addbe550a4054e2f57d990a17e7f\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:18.966647 kubelet[2220]: I0513 04:21:18.966588 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ec3addbe550a4054e2f57d990a17e7f-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-3bdfb8ea63.novalocal\" (UID: \"6ec3addbe550a4054e2f57d990a17e7f\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:18.966647 kubelet[2220]: I0513 04:21:18.966646 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/24a0dcbf983b8c26ed5b80d35f3fae6e-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-3bdfb8ea63.novalocal\" (UID: \"24a0dcbf983b8c26ed5b80d35f3fae6e\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:18.966951 kubelet[2220]: I0513 04:21:18.966691 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/24a0dcbf983b8c26ed5b80d35f3fae6e-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-3bdfb8ea63.novalocal\" (UID: \"24a0dcbf983b8c26ed5b80d35f3fae6e\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:18.966951 kubelet[2220]: I0513 04:21:18.966742 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/24a0dcbf983b8c26ed5b80d35f3fae6e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-3bdfb8ea63.novalocal\" (UID: \"24a0dcbf983b8c26ed5b80d35f3fae6e\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:18.966951 kubelet[2220]: I0513 04:21:18.966790 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5d48212dffb5d1360e953f9cdc052ee6-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-3bdfb8ea63.novalocal\" 
(UID: \"5d48212dffb5d1360e953f9cdc052ee6\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:18.966951 kubelet[2220]: I0513 04:21:18.966835 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ec3addbe550a4054e2f57d990a17e7f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-3bdfb8ea63.novalocal\" (UID: \"6ec3addbe550a4054e2f57d990a17e7f\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:18.967255 kubelet[2220]: I0513 04:21:18.966880 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/24a0dcbf983b8c26ed5b80d35f3fae6e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-3bdfb8ea63.novalocal\" (UID: \"24a0dcbf983b8c26ed5b80d35f3fae6e\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:18.967255 kubelet[2220]: I0513 04:21:18.966925 2220 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/24a0dcbf983b8c26ed5b80d35f3fae6e-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-3bdfb8ea63.novalocal\" (UID: \"24a0dcbf983b8c26ed5b80d35f3fae6e\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:19.062690 kubelet[2220]: I0513 04:21:19.062512 2220 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:19.063253 kubelet[2220]: E0513 04:21:19.063173 2220 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.57:6443/api/v1/nodes\": dial tcp 172.24.4.57:6443: connect: connection refused" node="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:19.122740 containerd[1461]: time="2025-05-13T04:21:19.122617012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-3bdfb8ea63.novalocal,Uid:5d48212dffb5d1360e953f9cdc052ee6,Namespace:kube-system,Attempt:0,}" May 13 04:21:19.146819 containerd[1461]: time="2025-05-13T04:21:19.146694038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-3bdfb8ea63.novalocal,Uid:6ec3addbe550a4054e2f57d990a17e7f,Namespace:kube-system,Attempt:0,}" May 13 04:21:19.161368 containerd[1461]: time="2025-05-13T04:21:19.161021469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-3bdfb8ea63.novalocal,Uid:24a0dcbf983b8c26ed5b80d35f3fae6e,Namespace:kube-system,Attempt:0,}" May 13 04:21:19.268500 kubelet[2220]: E0513 04:21:19.268396 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-3bdfb8ea63.novalocal?timeout=10s\": dial tcp 172.24.4.57:6443: connect: connection refused" interval="800ms" May 13 04:21:19.474035 kubelet[2220]: I0513 04:21:19.473274 2220 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:19.474471 kubelet[2220]: E0513 04:21:19.474396 2220 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.57:6443/api/v1/nodes\": dial tcp 172.24.4.57:6443: connect: connection refused" node="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:19.517439 
kubelet[2220]: E0513 04:21:19.517242 2220 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.57:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.57:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-n-3bdfb8ea63.novalocal.183efb5c24b6bab6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-n-3bdfb8ea63.novalocal,UID:ci-4081-3-3-n-3bdfb8ea63.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-3bdfb8ea63.novalocal,},FirstTimestamp:2025-05-13 04:21:18.63784927 +0000 UTC m=+0.990926213,LastTimestamp:2025-05-13 04:21:18.63784927 +0000 UTC m=+0.990926213,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-3bdfb8ea63.novalocal,}" May 13 04:21:19.739012 kubelet[2220]: W0513 04:21:19.738744 2220 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.57:6443: connect: connection refused May 13 04:21:19.739012 kubelet[2220]: E0513 04:21:19.738842 2220 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.57:6443: connect: connection refused" logger="UnhandledError" May 13 04:21:19.880808 kubelet[2220]: W0513 04:21:19.880713 2220 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.57:6443: connect: connection refused May 13 04:21:19.881656 kubelet[2220]: E0513 04:21:19.881566 2220 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.57:6443: connect: connection refused" logger="UnhandledError" May 13 04:21:19.928098 kubelet[2220]: W0513 04:21:19.927867 2220 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-3bdfb8ea63.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.57:6443: connect: connection refused May 13 04:21:19.928098 kubelet[2220]: E0513 04:21:19.928037 2220 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-3bdfb8ea63.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.57:6443: connect: connection refused" logger="UnhandledError" May 13 04:21:20.006289 kubelet[2220]: W0513 04:21:20.005888 2220 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.57:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.57:6443: connect: connection refused May 13 04:21:20.007064 kubelet[2220]: E0513 04:21:20.006812 2220 reflector.go:158] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.57:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.57:6443: connect: connection refused" logger="UnhandledError" May 13 04:21:20.038574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount192681345.mount: Deactivated successfully. May 13 04:21:20.050090 containerd[1461]: time="2025-05-13T04:21:20.049762933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 04:21:20.055341 containerd[1461]: time="2025-05-13T04:21:20.055187397Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" May 13 04:21:20.056767 containerd[1461]: time="2025-05-13T04:21:20.056672472Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 04:21:20.058803 containerd[1461]: time="2025-05-13T04:21:20.058721676Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 04:21:20.061741 containerd[1461]: time="2025-05-13T04:21:20.061536885Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 04:21:20.061741 containerd[1461]: time="2025-05-13T04:21:20.061695152Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 04:21:20.063713 containerd[1461]: time="2025-05-13T04:21:20.063569588Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 04:21:20.068607 containerd[1461]: time="2025-05-13T04:21:20.068511025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 04:21:20.069633 kubelet[2220]: E0513 04:21:20.069562 2220 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-3bdfb8ea63.novalocal?timeout=10s\": dial tcp 172.24.4.57:6443: connect: connection refused" interval="1.6s" May 13 04:21:20.076158 containerd[1461]: time="2025-05-13T04:21:20.075289408Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 914.111526ms" May 13 04:21:20.080601 containerd[1461]: time="2025-05-13T04:21:20.080252206Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 957.447883ms" May 13 04:21:20.092565 containerd[1461]: time="2025-05-13T04:21:20.092469930Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 945.648253ms" May 13 04:21:20.283118 kubelet[2220]: I0513 04:21:20.282813 2220 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:20.283806 kubelet[2220]: E0513 04:21:20.283131 2220 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.57:6443/api/v1/nodes\": dial tcp 172.24.4.57:6443: connect: connection refused" node="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:20.307290 containerd[1461]: time="2025-05-13T04:21:20.307096543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 04:21:20.308473 containerd[1461]: time="2025-05-13T04:21:20.308320729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 04:21:20.309369 containerd[1461]: time="2025-05-13T04:21:20.307239711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 04:21:20.309653 containerd[1461]: time="2025-05-13T04:21:20.309421613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:21:20.309855 containerd[1461]: time="2025-05-13T04:21:20.309753365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:21:20.311373 containerd[1461]: time="2025-05-13T04:21:20.310474247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 04:21:20.311373 containerd[1461]: time="2025-05-13T04:21:20.310522498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:21:20.311373 containerd[1461]: time="2025-05-13T04:21:20.310654886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:21:20.312025 containerd[1461]: time="2025-05-13T04:21:20.311762494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 04:21:20.312025 containerd[1461]: time="2025-05-13T04:21:20.311813489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 04:21:20.312025 containerd[1461]: time="2025-05-13T04:21:20.311836322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:21:20.312025 containerd[1461]: time="2025-05-13T04:21:20.311920640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:21:20.335126 systemd[1]: Started cri-containerd-6e56bdc6ea229d05cfde8fe4de99334a977f67d8292be645eb5384f60d61dda8.scope - libcontainer container 6e56bdc6ea229d05cfde8fe4de99334a977f67d8292be645eb5384f60d61dda8. May 13 04:21:20.349120 systemd[1]: Started cri-containerd-37cd7c813485345a15bb114d0db76a7f55450baa94d182a37b9c73c976c05d08.scope - libcontainer container 37cd7c813485345a15bb114d0db76a7f55450baa94d182a37b9c73c976c05d08. May 13 04:21:20.350978 systemd[1]: Started cri-containerd-8508c4537c191b059217d81bfed15e1c6ed069aaceb5d7852f51610f28785005.scope - libcontainer container 8508c4537c191b059217d81bfed15e1c6ed069aaceb5d7852f51610f28785005. May 13 04:21:20.404604 containerd[1461]: time="2025-05-13T04:21:20.404116143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-3bdfb8ea63.novalocal,Uid:5d48212dffb5d1360e953f9cdc052ee6,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e56bdc6ea229d05cfde8fe4de99334a977f67d8292be645eb5384f60d61dda8\"" May 13 04:21:20.410058 containerd[1461]: time="2025-05-13T04:21:20.409261152Z" level=info msg="CreateContainer within sandbox \"6e56bdc6ea229d05cfde8fe4de99334a977f67d8292be645eb5384f60d61dda8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 04:21:20.419219 containerd[1461]: time="2025-05-13T04:21:20.419117107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-3bdfb8ea63.novalocal,Uid:6ec3addbe550a4054e2f57d990a17e7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"8508c4537c191b059217d81bfed15e1c6ed069aaceb5d7852f51610f28785005\"" May 13 04:21:20.425814 containerd[1461]: time="2025-05-13T04:21:20.425775134Z" level=info msg="CreateContainer within sandbox \"8508c4537c191b059217d81bfed15e1c6ed069aaceb5d7852f51610f28785005\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 04:21:20.430648 containerd[1461]: time="2025-05-13T04:21:20.430600955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-3bdfb8ea63.novalocal,Uid:24a0dcbf983b8c26ed5b80d35f3fae6e,Namespace:kube-system,Attempt:0,} returns sandbox id \"37cd7c813485345a15bb114d0db76a7f55450baa94d182a37b9c73c976c05d08\"" May 13 04:21:20.433123 containerd[1461]: time="2025-05-13T04:21:20.432941615Z" level=info msg="CreateContainer within sandbox \"37cd7c813485345a15bb114d0db76a7f55450baa94d182a37b9c73c976c05d08\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 04:21:20.460897 containerd[1461]: time="2025-05-13T04:21:20.460849556Z" level=info msg="CreateContainer within sandbox \"6e56bdc6ea229d05cfde8fe4de99334a977f67d8292be645eb5384f60d61dda8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d372cac5a2c89fa962f301f4dc254ddddb08b8456d8b89bf980be6b210e9261d\"" May 13 04:21:20.461670 containerd[1461]: time="2025-05-13T04:21:20.461638095Z" level=info msg="StartContainer for \"d372cac5a2c89fa962f301f4dc254ddddb08b8456d8b89bf980be6b210e9261d\"" May 13 04:21:20.477234 containerd[1461]: time="2025-05-13T04:21:20.477133968Z" level=info msg="CreateContainer within sandbox \"8508c4537c191b059217d81bfed15e1c6ed069aaceb5d7852f51610f28785005\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8b95245697a733d3d5a68ff0e76fb891021a354bd8b146381f01c78e981fb0b9\"" May 13 04:21:20.478176 containerd[1461]: time="2025-05-13T04:21:20.478027193Z" level=info msg="StartContainer for 
\"8b95245697a733d3d5a68ff0e76fb891021a354bd8b146381f01c78e981fb0b9\"" May 13 04:21:20.483148 containerd[1461]: time="2025-05-13T04:21:20.483096090Z" level=info msg="CreateContainer within sandbox \"37cd7c813485345a15bb114d0db76a7f55450baa94d182a37b9c73c976c05d08\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"57eacd651eaa5e2c1f7c788bb34b1c98cba74fb83a8535710afc1c12520bf012\"" May 13 04:21:20.484423 containerd[1461]: time="2025-05-13T04:21:20.484248441Z" level=info msg="StartContainer for \"57eacd651eaa5e2c1f7c788bb34b1c98cba74fb83a8535710afc1c12520bf012\"" May 13 04:21:20.492127 systemd[1]: Started cri-containerd-d372cac5a2c89fa962f301f4dc254ddddb08b8456d8b89bf980be6b210e9261d.scope - libcontainer container d372cac5a2c89fa962f301f4dc254ddddb08b8456d8b89bf980be6b210e9261d. May 13 04:21:20.523096 systemd[1]: Started cri-containerd-8b95245697a733d3d5a68ff0e76fb891021a354bd8b146381f01c78e981fb0b9.scope - libcontainer container 8b95245697a733d3d5a68ff0e76fb891021a354bd8b146381f01c78e981fb0b9. May 13 04:21:20.540116 systemd[1]: Started cri-containerd-57eacd651eaa5e2c1f7c788bb34b1c98cba74fb83a8535710afc1c12520bf012.scope - libcontainer container 57eacd651eaa5e2c1f7c788bb34b1c98cba74fb83a8535710afc1c12520bf012. May 13 04:21:20.569698 containerd[1461]: time="2025-05-13T04:21:20.569666643Z" level=info msg="StartContainer for \"d372cac5a2c89fa962f301f4dc254ddddb08b8456d8b89bf980be6b210e9261d\" returns successfully" May 13 04:21:20.608946 containerd[1461]: time="2025-05-13T04:21:20.607994867Z" level=info msg="StartContainer for \"8b95245697a733d3d5a68ff0e76fb891021a354bd8b146381f01c78e981fb0b9\" returns successfully" May 13 04:21:20.622984 containerd[1461]: time="2025-05-13T04:21:20.622900162Z" level=info msg="StartContainer for \"57eacd651eaa5e2c1f7c788bb34b1c98cba74fb83a8535710afc1c12520bf012\" returns successfully" May 13 04:21:20.645140 kubelet[2220]: E0513 04:21:20.645095 2220 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.57:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.57:6443: connect: connection refused" logger="UnhandledError" May 13 04:21:21.884975 kubelet[2220]: I0513 04:21:21.884929 2220 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:22.238119 kubelet[2220]: E0513 04:21:22.237886 2220 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-3-n-3bdfb8ea63.novalocal\" not found" node="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:22.321365 kubelet[2220]: I0513 04:21:22.321326 2220 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:22.643138 kubelet[2220]: I0513 04:21:22.642939 2220 apiserver.go:52] "Watching apiserver" May 13 04:21:22.665021 kubelet[2220]: I0513 04:21:22.664932 2220 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 04:21:23.812821 kubelet[2220]: W0513 04:21:23.811949 2220 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 04:21:24.888065 systemd[1]: Reloading requested from client PID 2484 ('systemctl') (unit session-11.scope)... 
May 13 04:21:24.888100 systemd[1]: Reloading... May 13 04:21:24.990991 zram_generator::config[2523]: No configuration found. May 13 04:21:25.137535 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 04:21:25.237405 systemd[1]: Reloading finished in 348 ms. May 13 04:21:25.280762 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 04:21:25.296917 systemd[1]: kubelet.service: Deactivated successfully. May 13 04:21:25.297147 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 04:21:25.297189 systemd[1]: kubelet.service: Consumed 1.342s CPU time, 117.1M memory peak, 0B memory swap peak. May 13 04:21:25.303808 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 04:21:25.514296 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 04:21:25.527236 (kubelet)[2587]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 04:21:25.597583 kubelet[2587]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 04:21:25.597583 kubelet[2587]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 04:21:25.597583 kubelet[2587]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 04:21:25.598771 kubelet[2587]: I0513 04:21:25.597625 2587 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 04:21:25.604131 kubelet[2587]: I0513 04:21:25.604081 2587 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 13 04:21:25.604131 kubelet[2587]: I0513 04:21:25.604108 2587 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 04:21:25.604395 kubelet[2587]: I0513 04:21:25.604350 2587 server.go:929] "Client rotation is on, will bootstrap in background" May 13 04:21:25.605719 kubelet[2587]: I0513 04:21:25.605677 2587 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 04:21:25.614683 kubelet[2587]: I0513 04:21:25.614390 2587 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 04:21:25.620839 kubelet[2587]: E0513 04:21:25.620793 2587 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 13 04:21:25.620839 kubelet[2587]: I0513 04:21:25.620828 2587 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 13 04:21:25.625186 kubelet[2587]: I0513 04:21:25.625122 2587 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 04:21:25.625423 kubelet[2587]: I0513 04:21:25.625231 2587 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 13 04:21:25.625423 kubelet[2587]: I0513 04:21:25.625333 2587 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 04:21:25.625566 kubelet[2587]: I0513 04:21:25.625360 2587 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-3bdfb8ea63.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 04:21:25.625566 kubelet[2587]: I0513 04:21:25.625546 2587 topology_manager.go:138] "Creating topology manager with none policy" May 13 04:21:25.625566 kubelet[2587]: I0513 04:21:25.625556 2587 container_manager_linux.go:300] "Creating device plugin manager" May 13 04:21:25.638300 kubelet[2587]: I0513 04:21:25.625586 2587 state_mem.go:36] "Initialized new in-memory state store" May 13 04:21:25.638300 kubelet[2587]: I0513 04:21:25.625680 2587 kubelet.go:408] "Attempting to sync node with API server" May 13 04:21:25.638300 kubelet[2587]: I0513 04:21:25.625694 2587 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 04:21:25.638300 kubelet[2587]: I0513 04:21:25.625722 2587 kubelet.go:314] "Adding apiserver pod source" May 13 04:21:25.638300 kubelet[2587]: I0513 04:21:25.625735 2587 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 04:21:25.638300 kubelet[2587]: I0513 04:21:25.627827 2587 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 13 04:21:25.638300 kubelet[2587]: I0513 04:21:25.628818 2587 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 04:21:25.638300 kubelet[2587]: I0513 04:21:25.631658 2587 server.go:1269] "Started kubelet" May 13 04:21:25.646040 kubelet[2587]: I0513 04:21:25.645838 2587 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 04:21:25.657670 
kubelet[2587]: I0513 04:21:25.657633 2587 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 04:21:25.659044 kubelet[2587]: I0513 04:21:25.658701 2587 server.go:460] "Adding debug handlers to kubelet server" May 13 04:21:25.663750 kubelet[2587]: I0513 04:21:25.663640 2587 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 04:21:25.666897 kubelet[2587]: I0513 04:21:25.666881 2587 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 04:21:25.667012 kubelet[2587]: I0513 04:21:25.664155 2587 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 04:21:25.669389 kubelet[2587]: I0513 04:21:25.669363 2587 volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 04:21:25.669798 kubelet[2587]: E0513 04:21:25.669773 2587 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-3bdfb8ea63.novalocal\" not found" May 13 04:21:25.680730 kubelet[2587]: I0513 04:21:25.680695 2587 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 04:21:25.684748 kubelet[2587]: I0513 04:21:25.684625 2587 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 04:21:25.684882 kubelet[2587]: I0513 04:21:25.681669 2587 reconciler.go:26] "Reconciler: start to sync state" May 13 04:21:25.685529 kubelet[2587]: I0513 04:21:25.685494 2587 factory.go:221] Registration of the systemd container factory successfully May 13 04:21:25.685603 kubelet[2587]: I0513 04:21:25.685579 2587 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 04:21:25.686194 kubelet[2587]: I0513 04:21:25.686072 2587 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 04:21:25.686194 kubelet[2587]: I0513 04:21:25.686121 2587 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 04:21:25.686194 kubelet[2587]: I0513 04:21:25.686140 2587 kubelet.go:2321] "Starting kubelet main sync loop" May 13 04:21:25.686572 kubelet[2587]: E0513 04:21:25.686379 2587 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 04:21:25.690837 kubelet[2587]: I0513 04:21:25.690818 2587 factory.go:221] Registration of the containerd container factory successfully May 13 04:21:25.695562 kubelet[2587]: E0513 04:21:25.695534 2587 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 04:21:25.738373 kubelet[2587]: I0513 04:21:25.738344 2587 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 04:21:25.738373 kubelet[2587]: I0513 04:21:25.738362 2587 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 04:21:25.738373 kubelet[2587]: I0513 04:21:25.738379 2587 state_mem.go:36] "Initialized new in-memory state store" May 13 04:21:25.738571 kubelet[2587]: I0513 04:21:25.738527 2587 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 04:21:25.738571 kubelet[2587]: I0513 04:21:25.738539 2587 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 04:21:25.738571 kubelet[2587]: I0513 04:21:25.738558 2587 policy_none.go:49] "None policy: Start" May 13 04:21:25.739434 kubelet[2587]: I0513 04:21:25.739392 2587 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 04:21:25.739434 kubelet[2587]: I0513 04:21:25.739414 2587 state_mem.go:35] "Initializing new in-memory state store" May 13 04:21:25.739566 kubelet[2587]: I0513 04:21:25.739551 2587 state_mem.go:75] "Updated machine memory state" May 13 04:21:25.746018 kubelet[2587]: I0513 04:21:25.745886 2587 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 04:21:25.746209 kubelet[2587]: I0513 04:21:25.746192 2587 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 04:21:25.746246 kubelet[2587]: I0513 04:21:25.746207 2587 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 04:21:25.748982 kubelet[2587]: I0513 04:21:25.748763 2587 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 04:21:25.795491 kubelet[2587]: W0513 04:21:25.794052 2587 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 04:21:25.796296 kubelet[2587]: W0513 04:21:25.796256 2587 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 04:21:25.797343 kubelet[2587]: W0513 04:21:25.797303 2587 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 04:21:25.797397 kubelet[2587]: E0513 04:21:25.797359 2587 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-3-n-3bdfb8ea63.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:25.859613 kubelet[2587]: I0513 04:21:25.857852 2587 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:25.879610 kubelet[2587]: I0513 04:21:25.879568 2587 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:25.879897 kubelet[2587]: I0513 04:21:25.879876 2587 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:25.889484 kubelet[2587]: I0513 04:21:25.889381 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ec3addbe550a4054e2f57d990a17e7f-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-3bdfb8ea63.novalocal\" 
(UID: \"6ec3addbe550a4054e2f57d990a17e7f\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:25.890299 kubelet[2587]: I0513 04:21:25.889814 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/24a0dcbf983b8c26ed5b80d35f3fae6e-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-3bdfb8ea63.novalocal\" (UID: \"24a0dcbf983b8c26ed5b80d35f3fae6e\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:25.890299 kubelet[2587]: I0513 04:21:25.889915 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/24a0dcbf983b8c26ed5b80d35f3fae6e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-3bdfb8ea63.novalocal\" (UID: \"24a0dcbf983b8c26ed5b80d35f3fae6e\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:25.890299 kubelet[2587]: I0513 04:21:25.890018 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5d48212dffb5d1360e953f9cdc052ee6-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-3bdfb8ea63.novalocal\" (UID: \"5d48212dffb5d1360e953f9cdc052ee6\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:25.890299 kubelet[2587]: I0513 04:21:25.890059 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ec3addbe550a4054e2f57d990a17e7f-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-3bdfb8ea63.novalocal\" (UID: \"6ec3addbe550a4054e2f57d990a17e7f\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:25.890519 kubelet[2587]: I0513 04:21:25.890103 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ec3addbe550a4054e2f57d990a17e7f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-3bdfb8ea63.novalocal\" (UID: \"6ec3addbe550a4054e2f57d990a17e7f\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:25.890519 kubelet[2587]: I0513 04:21:25.890140 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/24a0dcbf983b8c26ed5b80d35f3fae6e-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-3bdfb8ea63.novalocal\" (UID: \"24a0dcbf983b8c26ed5b80d35f3fae6e\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:25.890519 kubelet[2587]: I0513 04:21:25.890176 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/24a0dcbf983b8c26ed5b80d35f3fae6e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-3bdfb8ea63.novalocal\" (UID: \"24a0dcbf983b8c26ed5b80d35f3fae6e\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:25.890519 kubelet[2587]: I0513 04:21:25.890209 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/24a0dcbf983b8c26ed5b80d35f3fae6e-k8s-certs\") pod 
\"kube-controller-manager-ci-4081-3-3-n-3bdfb8ea63.novalocal\" (UID: \"24a0dcbf983b8c26ed5b80d35f3fae6e\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:26.628976 kubelet[2587]: I0513 04:21:26.627537 2587 apiserver.go:52] "Watching apiserver" May 13 04:21:26.685401 kubelet[2587]: I0513 04:21:26.685347 2587 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 04:21:26.734740 kubelet[2587]: W0513 04:21:26.734489 2587 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 04:21:26.734740 kubelet[2587]: E0513 04:21:26.734550 2587 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-3-n-3bdfb8ea63.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:26.740827 kubelet[2587]: W0513 04:21:26.740807 2587 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 04:21:26.741073 kubelet[2587]: E0513 04:21:26.741007 2587 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-3-n-3bdfb8ea63.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:21:26.748994 kubelet[2587]: I0513 04:21:26.748711 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-3bdfb8ea63.novalocal" podStartSLOduration=3.748696422 podStartE2EDuration="3.748696422s" podCreationTimestamp="2025-05-13 04:21:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 04:21:26.747923953 +0000 UTC m=+1.212099333" watchObservedRunningTime="2025-05-13 04:21:26.748696422 +0000 UTC m=+1.212871802" May 13 04:21:26.773872 kubelet[2587]: I0513 04:21:26.773811 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-3-n-3bdfb8ea63.novalocal" podStartSLOduration=1.7737952940000001 podStartE2EDuration="1.773795294s" podCreationTimestamp="2025-05-13 04:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 04:21:26.76437696 +0000 UTC m=+1.228552350" watchObservedRunningTime="2025-05-13 04:21:26.773795294 +0000 UTC m=+1.237970674" May 13 04:21:26.786414 kubelet[2587]: I0513 04:21:26.786316 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-n-3bdfb8ea63.novalocal" podStartSLOduration=1.7863018400000001 podStartE2EDuration="1.78630184s" podCreationTimestamp="2025-05-13 04:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 04:21:26.774156131 +0000 UTC m=+1.238331511" watchObservedRunningTime="2025-05-13 04:21:26.78630184 +0000 UTC m=+1.250477231" May 13 04:21:28.946688 kubelet[2587]: I0513 04:21:28.946026 2587 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 04:21:28.946688 kubelet[2587]: I0513 04:21:28.946505 2587 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 
04:21:28.947104 containerd[1461]: time="2025-05-13T04:21:28.946321817Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 13 04:21:29.494983 systemd[1]: Created slice kubepods-besteffort-pod669e0d32_a0a3_4725_9dd8_1438037bd9ce.slice - libcontainer container kubepods-besteffort-pod669e0d32_a0a3_4725_9dd8_1438037bd9ce.slice.
May 13 04:21:29.512528 kubelet[2587]: W0513 04:21:29.510043 2587 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4081-3-3-n-3bdfb8ea63.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-3-n-3bdfb8ea63.novalocal' and this object
May 13 04:21:29.512528 kubelet[2587]: E0513 04:21:29.512092 2587 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4081-3-3-n-3bdfb8ea63.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-3-n-3bdfb8ea63.novalocal' and this object" logger="UnhandledError"
May 13 04:21:29.512528 kubelet[2587]: W0513 04:21:29.512028 2587 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-3-n-3bdfb8ea63.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-3-n-3bdfb8ea63.novalocal' and this object
May 13 04:21:29.512528 kubelet[2587]: E0513 04:21:29.512143 2587 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081-3-3-n-3bdfb8ea63.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-3-n-3bdfb8ea63.novalocal' and this object" logger="UnhandledError"
May 13 04:21:29.518983 kubelet[2587]: I0513 04:21:29.517416 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/669e0d32-a0a3-4725-9dd8-1438037bd9ce-kube-proxy\") pod \"kube-proxy-rhmr8\" (UID: \"669e0d32-a0a3-4725-9dd8-1438037bd9ce\") " pod="kube-system/kube-proxy-rhmr8"
May 13 04:21:29.518983 kubelet[2587]: I0513 04:21:29.517453 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/669e0d32-a0a3-4725-9dd8-1438037bd9ce-xtables-lock\") pod \"kube-proxy-rhmr8\" (UID: \"669e0d32-a0a3-4725-9dd8-1438037bd9ce\") " pod="kube-system/kube-proxy-rhmr8"
May 13 04:21:29.518983 kubelet[2587]: I0513 04:21:29.517475 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/669e0d32-a0a3-4725-9dd8-1438037bd9ce-lib-modules\") pod \"kube-proxy-rhmr8\" (UID: \"669e0d32-a0a3-4725-9dd8-1438037bd9ce\") " pod="kube-system/kube-proxy-rhmr8"
May 13 04:21:29.518983 kubelet[2587]: I0513 04:21:29.517511 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88j95\" (UniqueName: \"kubernetes.io/projected/669e0d32-a0a3-4725-9dd8-1438037bd9ce-kube-api-access-88j95\") pod \"kube-proxy-rhmr8\" (UID: \"669e0d32-a0a3-4725-9dd8-1438037bd9ce\") " pod="kube-system/kube-proxy-rhmr8"
May 13 04:21:30.043765 systemd[1]: Created slice kubepods-besteffort-pod4a76264a_d16a_4684_868c_fa2cd61ba504.slice - libcontainer container kubepods-besteffort-pod4a76264a_d16a_4684_868c_fa2cd61ba504.slice.
May 13 04:21:30.121393 kubelet[2587]: I0513 04:21:30.121353 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4a76264a-d16a-4684-868c-fa2cd61ba504-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-2cdds\" (UID: \"4a76264a-d16a-4684-868c-fa2cd61ba504\") " pod="tigera-operator/tigera-operator-6f6897fdc5-2cdds"
May 13 04:21:30.121725 kubelet[2587]: I0513 04:21:30.121404 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86w2q\" (UniqueName: \"kubernetes.io/projected/4a76264a-d16a-4684-868c-fa2cd61ba504-kube-api-access-86w2q\") pod \"tigera-operator-6f6897fdc5-2cdds\" (UID: \"4a76264a-d16a-4684-868c-fa2cd61ba504\") " pod="tigera-operator/tigera-operator-6f6897fdc5-2cdds"
May 13 04:21:30.348132 containerd[1461]: time="2025-05-13T04:21:30.348091043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-2cdds,Uid:4a76264a-d16a-4684-868c-fa2cd61ba504,Namespace:tigera-operator,Attempt:0,}"
May 13 04:21:30.380582 containerd[1461]: time="2025-05-13T04:21:30.380481640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 04:21:30.380924 containerd[1461]: time="2025-05-13T04:21:30.380576699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 04:21:30.380924 containerd[1461]: time="2025-05-13T04:21:30.380602719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 04:21:30.380924 containerd[1461]: time="2025-05-13T04:21:30.380741426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 04:21:30.402103 systemd[1]: Started cri-containerd-c72d7ea4455a5d3b5fdc938179f84a61726db17d9c7337a255c4fc534b97f003.scope - libcontainer container c72d7ea4455a5d3b5fdc938179f84a61726db17d9c7337a255c4fc534b97f003.
May 13 04:21:30.442357 containerd[1461]: time="2025-05-13T04:21:30.442286532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-2cdds,Uid:4a76264a-d16a-4684-868c-fa2cd61ba504,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c72d7ea4455a5d3b5fdc938179f84a61726db17d9c7337a255c4fc534b97f003\""
May 13 04:21:30.447194 containerd[1461]: time="2025-05-13T04:21:30.447138532Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
May 13 04:21:30.619737 kubelet[2587]: E0513 04:21:30.619130 2587 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
May 13 04:21:30.619737 kubelet[2587]: E0513 04:21:30.619292 2587 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/669e0d32-a0a3-4725-9dd8-1438037bd9ce-kube-proxy podName:669e0d32-a0a3-4725-9dd8-1438037bd9ce nodeName:}" failed. No retries permitted until 2025-05-13 04:21:31.11923685 +0000 UTC m=+5.583412280 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/669e0d32-a0a3-4725-9dd8-1438037bd9ce-kube-proxy") pod "kube-proxy-rhmr8" (UID: "669e0d32-a0a3-4725-9dd8-1438037bd9ce") : failed to sync configmap cache: timed out waiting for the condition
May 13 04:21:30.663153 kubelet[2587]: E0513 04:21:30.662990 2587 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
May 13 04:21:30.663153 kubelet[2587]: E0513 04:21:30.663044 2587 projected.go:194] Error preparing data for projected volume kube-api-access-88j95 for pod kube-system/kube-proxy-rhmr8: failed to sync configmap cache: timed out waiting for the condition
May 13 04:21:30.663153 kubelet[2587]: E0513 04:21:30.663138 2587 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/669e0d32-a0a3-4725-9dd8-1438037bd9ce-kube-api-access-88j95 podName:669e0d32-a0a3-4725-9dd8-1438037bd9ce nodeName:}" failed. No retries permitted until 2025-05-13 04:21:31.163106469 +0000 UTC m=+5.627281899 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-88j95" (UniqueName: "kubernetes.io/projected/669e0d32-a0a3-4725-9dd8-1438037bd9ce-kube-api-access-88j95") pod "kube-proxy-rhmr8" (UID: "669e0d32-a0a3-4725-9dd8-1438037bd9ce") : failed to sync configmap cache: timed out waiting for the condition
May 13 04:21:31.304079 containerd[1461]: time="2025-05-13T04:21:31.303951792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rhmr8,Uid:669e0d32-a0a3-4725-9dd8-1438037bd9ce,Namespace:kube-system,Attempt:0,}"
May 13 04:21:31.361466 containerd[1461]: time="2025-05-13T04:21:31.361133638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 04:21:31.362187 containerd[1461]: time="2025-05-13T04:21:31.361376902Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 04:21:31.362187 containerd[1461]: time="2025-05-13T04:21:31.362120547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 04:21:31.362453 containerd[1461]: time="2025-05-13T04:21:31.362347075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 04:21:31.407247 systemd[1]: Started cri-containerd-a4a52b337efafc1cd95ae1e1378445cb8592836200bd9e9870556659857cabbb.scope - libcontainer container a4a52b337efafc1cd95ae1e1378445cb8592836200bd9e9870556659857cabbb.
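
The mount failures above are downstream of the node-authorizer denials at 04:21:29: until the API server's node authorizer sees a pod bound to this node that references the kube-proxy and kube-root-ca.crt ConfigMaps, the kubelet's watches are forbidden, its configmap cache cannot sync, and MountVolume.SetUp times out. The kubelet then re-queues each mount with exponential backoff, which is what "No retries permitted until ... (durationBeforeRetry 500ms)" records. A minimal Go sketch of that backoff pattern follows (illustrative only, not kubelet's actual nestedpendingoperations code; the function names are invented):

package main

import (
	"errors"
	"fmt"
	"time"
)

// mountWithBackoff retries a mount-style operation, doubling the wait
// after each failure, the way the durationBeforeRetry in the entries
// above grows on later attempts.
func mountWithBackoff(setUp func() error, initial, max time.Duration) error {
	delay := initial
	for attempt := 1; ; attempt++ {
		if err := setUp(); err == nil {
			return nil
		} else if delay > max {
			return fmt.Errorf("giving up after %d attempts: %w", attempt, err)
		} else {
			fmt.Printf("attempt %d failed (%v); no retries permitted for %s\n", attempt, err, delay)
			time.Sleep(delay)
			delay *= 2 // 500ms, 1s, 2s, ...
		}
	}
}

func main() {
	calls := 0
	_ = mountWithBackoff(func() error {
		calls++
		if calls < 3 { // fails until the configmap cache finally syncs
			return errors.New("failed to sync configmap cache: timed out waiting for the condition")
		}
		return nil
	}, 500*time.Millisecond, 8*time.Second)
}

In the log the second attempt (after the 500ms window) succeeds, because by then the pod-to-node relationship is registered and the watches are permitted.
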
May 13 04:21:31.436950 containerd[1461]: time="2025-05-13T04:21:31.436906176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rhmr8,Uid:669e0d32-a0a3-4725-9dd8-1438037bd9ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4a52b337efafc1cd95ae1e1378445cb8592836200bd9e9870556659857cabbb\""
May 13 04:21:31.440706 containerd[1461]: time="2025-05-13T04:21:31.440518023Z" level=info msg="CreateContainer within sandbox \"a4a52b337efafc1cd95ae1e1378445cb8592836200bd9e9870556659857cabbb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 13 04:21:31.471811 containerd[1461]: time="2025-05-13T04:21:31.471704587Z" level=info msg="CreateContainer within sandbox \"a4a52b337efafc1cd95ae1e1378445cb8592836200bd9e9870556659857cabbb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a3e2403ce213a9fba99d9ed3ca0814f72b4bfefd8a36d93365f77ca6bd77e34f\""
May 13 04:21:31.473432 containerd[1461]: time="2025-05-13T04:21:31.472409550Z" level=info msg="StartContainer for \"a3e2403ce213a9fba99d9ed3ca0814f72b4bfefd8a36d93365f77ca6bd77e34f\""
May 13 04:21:31.498098 systemd[1]: Started cri-containerd-a3e2403ce213a9fba99d9ed3ca0814f72b4bfefd8a36d93365f77ca6bd77e34f.scope - libcontainer container a3e2403ce213a9fba99d9ed3ca0814f72b4bfefd8a36d93365f77ca6bd77e34f.
May 13 04:21:31.530221 containerd[1461]: time="2025-05-13T04:21:31.530175478Z" level=info msg="StartContainer for \"a3e2403ce213a9fba99d9ed3ca0814f72b4bfefd8a36d93365f77ca6bd77e34f\" returns successfully"
May 13 04:21:31.759074 kubelet[2587]: I0513 04:21:31.758567 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rhmr8" podStartSLOduration=2.7585333370000003 podStartE2EDuration="2.758533337s" podCreationTimestamp="2025-05-13 04:21:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 04:21:31.758273717 +0000 UTC m=+6.222449107" watchObservedRunningTime="2025-05-13 04:21:31.758533337 +0000 UTC m=+6.222708767"
May 13 04:21:32.558919 sudo[1713]: pam_unix(sudo:session): session closed for user root
May 13 04:21:32.829727 sshd[1710]: pam_unix(sshd:session): session closed for user core
May 13 04:21:32.834825 systemd[1]: sshd@8-172.24.4.57:22-172.24.4.1:56816.service: Deactivated successfully.
May 13 04:21:32.838425 systemd[1]: session-11.scope: Deactivated successfully.
May 13 04:21:32.838682 systemd[1]: session-11.scope: Consumed 6.759s CPU time, 159.2M memory peak, 0B memory swap peak.
May 13 04:21:32.839848 systemd-logind[1439]: Session 11 logged out. Waiting for processes to exit.
May 13 04:21:32.841496 systemd-logind[1439]: Removed session 11.
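
For kube-proxy the startup-latency tracker reports a podStartSLOduration equal (modulo float rounding) to the podStartE2EDuration of 2.758533337s: the image was already on disk, so firstStartedPulling and lastFinishedPulling stay at the zero time and the SLO window is simply observedRunningTime minus podCreationTimestamp. A quick check of the arithmetic, with values copied from the entry above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// podCreationTimestamp and observedRunningTime from the tracker entry.
	created, _ := time.Parse(time.RFC3339Nano, "2025-05-13T04:21:29Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-05-13T04:21:31.758533337Z")
	fmt.Println(running.Sub(created)) // 2.758533337s, matching podStartE2EDuration
}
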
May 13 04:21:32.980909 containerd[1461]: time="2025-05-13T04:21:32.980709278Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 04:21:32.981986 containerd[1461]: time="2025-05-13T04:21:32.981902736Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662"
May 13 04:21:32.983262 containerd[1461]: time="2025-05-13T04:21:32.983218900Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 04:21:32.986039 containerd[1461]: time="2025-05-13T04:21:32.985952289Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 04:21:32.987279 containerd[1461]: time="2025-05-13T04:21:32.986692171Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.539519134s"
May 13 04:21:32.987279 containerd[1461]: time="2025-05-13T04:21:32.986723852Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\""
May 13 04:21:32.989604 containerd[1461]: time="2025-05-13T04:21:32.989580348Z" level=info msg="CreateContainer within sandbox \"c72d7ea4455a5d3b5fdc938179f84a61726db17d9c7337a255c4fc534b97f003\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 13 04:21:33.007471 containerd[1461]: time="2025-05-13T04:21:33.007420877Z" level=info msg="CreateContainer within sandbox \"c72d7ea4455a5d3b5fdc938179f84a61726db17d9c7337a255c4fc534b97f003\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"43fa797d4ad3e59030a49ecd4875c0831fcfb6906ed46cdb8ee93ab192a79d14\""
May 13 04:21:33.008612 containerd[1461]: time="2025-05-13T04:21:33.008275577Z" level=info msg="StartContainer for \"43fa797d4ad3e59030a49ecd4875c0831fcfb6906ed46cdb8ee93ab192a79d14\""
May 13 04:21:33.039770 systemd[1]: run-containerd-runc-k8s.io-43fa797d4ad3e59030a49ecd4875c0831fcfb6906ed46cdb8ee93ab192a79d14-runc.Bbm9HW.mount: Deactivated successfully.
May 13 04:21:33.050102 systemd[1]: Started cri-containerd-43fa797d4ad3e59030a49ecd4875c0831fcfb6906ed46cdb8ee93ab192a79d14.scope - libcontainer container 43fa797d4ad3e59030a49ecd4875c0831fcfb6906ed46cdb8ee93ab192a79d14.
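
The operator pull above moved 22002662 bytes in 2.539519134s, i.e. a little over 8 MiB/s from quay.io; a back-of-the-envelope check of the two numbers reported by containerd:

package main

import "fmt"

func main() {
	bytesRead := 22002662.0 // "active requests=0, bytes read=22002662"
	seconds := 2.539519134  // "in 2.539519134s"
	fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1024*1024)) // ≈ 8.3 MiB/s
}
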
May 13 04:21:33.078490 containerd[1461]: time="2025-05-13T04:21:33.078435560Z" level=info msg="StartContainer for \"43fa797d4ad3e59030a49ecd4875c0831fcfb6906ed46cdb8ee93ab192a79d14\" returns successfully"
May 13 04:21:33.767560 kubelet[2587]: I0513 04:21:33.766664 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-2cdds" podStartSLOduration=2.223518584 podStartE2EDuration="4.766631672s" podCreationTimestamp="2025-05-13 04:21:29 +0000 UTC" firstStartedPulling="2025-05-13 04:21:30.44426537 +0000 UTC m=+4.908440800" lastFinishedPulling="2025-05-13 04:21:32.987378508 +0000 UTC m=+7.451553888" observedRunningTime="2025-05-13 04:21:33.766403032 +0000 UTC m=+8.230578472" watchObservedRunningTime="2025-05-13 04:21:33.766631672 +0000 UTC m=+8.230807112"
May 13 04:21:36.402075 systemd[1]: Created slice kubepods-besteffort-pod8e4207ac_2b75_4179_aa7a_aa47b753694f.slice - libcontainer container kubepods-besteffort-pod8e4207ac_2b75_4179_aa7a_aa47b753694f.slice.
May 13 04:21:36.464839 kubelet[2587]: I0513 04:21:36.464791 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8e4207ac-2b75-4179-aa7a-aa47b753694f-typha-certs\") pod \"calico-typha-7ffb795697-j9tvv\" (UID: \"8e4207ac-2b75-4179-aa7a-aa47b753694f\") " pod="calico-system/calico-typha-7ffb795697-j9tvv"
May 13 04:21:36.464839 kubelet[2587]: I0513 04:21:36.464840 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdhsd\" (UniqueName: \"kubernetes.io/projected/8e4207ac-2b75-4179-aa7a-aa47b753694f-kube-api-access-rdhsd\") pod \"calico-typha-7ffb795697-j9tvv\" (UID: \"8e4207ac-2b75-4179-aa7a-aa47b753694f\") " pod="calico-system/calico-typha-7ffb795697-j9tvv"
May 13 04:21:36.465273 kubelet[2587]: I0513 04:21:36.464867 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e4207ac-2b75-4179-aa7a-aa47b753694f-tigera-ca-bundle\") pod \"calico-typha-7ffb795697-j9tvv\" (UID: \"8e4207ac-2b75-4179-aa7a-aa47b753694f\") " pod="calico-system/calico-typha-7ffb795697-j9tvv"
May 13 04:21:36.509986 systemd[1]: Created slice kubepods-besteffort-pod3389558a_9f7c_4c71_b132_240644eb318b.slice - libcontainer container kubepods-besteffort-pod3389558a_9f7c_4c71_b132_240644eb318b.slice.
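
The "Created slice" entries follow the naming the kubelet's systemd cgroup driver uses for BestEffort pods, visible throughout this log: the pod UID with dashes mapped to underscores, wrapped as kubepods-besteffort-pod<uid>.slice. A small sketch of the rule as inferred from these entries (the helper name is invented):

package main

import (
	"fmt"
	"strings"
)

// besteffortSlice reproduces the slice names in the "Created slice"
// entries: pod UID with '-' mapped to '_', wrapped in the
// kubepods-besteffort-pod<uid>.slice pattern.
func besteffortSlice(podUID string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	fmt.Println(besteffortSlice("8e4207ac-2b75-4179-aa7a-aa47b753694f"))
	// kubepods-besteffort-pod8e4207ac_2b75_4179_aa7a_aa47b753694f.slice
}
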
May 13 04:21:36.565703 kubelet[2587]: I0513 04:21:36.565662 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3389558a-9f7c-4c71-b132-240644eb318b-cni-log-dir\") pod \"calico-node-wfkck\" (UID: \"3389558a-9f7c-4c71-b132-240644eb318b\") " pod="calico-system/calico-node-wfkck"
May 13 04:21:36.565703 kubelet[2587]: I0513 04:21:36.565726 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3389558a-9f7c-4c71-b132-240644eb318b-lib-modules\") pod \"calico-node-wfkck\" (UID: \"3389558a-9f7c-4c71-b132-240644eb318b\") " pod="calico-system/calico-node-wfkck"
May 13 04:21:36.565894 kubelet[2587]: I0513 04:21:36.565751 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3389558a-9f7c-4c71-b132-240644eb318b-tigera-ca-bundle\") pod \"calico-node-wfkck\" (UID: \"3389558a-9f7c-4c71-b132-240644eb318b\") " pod="calico-system/calico-node-wfkck"
May 13 04:21:36.565894 kubelet[2587]: I0513 04:21:36.565772 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3389558a-9f7c-4c71-b132-240644eb318b-node-certs\") pod \"calico-node-wfkck\" (UID: \"3389558a-9f7c-4c71-b132-240644eb318b\") " pod="calico-system/calico-node-wfkck"
May 13 04:21:36.565894 kubelet[2587]: I0513 04:21:36.565794 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3389558a-9f7c-4c71-b132-240644eb318b-var-run-calico\") pod \"calico-node-wfkck\" (UID: \"3389558a-9f7c-4c71-b132-240644eb318b\") " pod="calico-system/calico-node-wfkck"
May 13 04:21:36.565894 kubelet[2587]: I0513 04:21:36.565815 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3389558a-9f7c-4c71-b132-240644eb318b-flexvol-driver-host\") pod \"calico-node-wfkck\" (UID: \"3389558a-9f7c-4c71-b132-240644eb318b\") " pod="calico-system/calico-node-wfkck"
May 13 04:21:36.565894 kubelet[2587]: I0513 04:21:36.565869 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3389558a-9f7c-4c71-b132-240644eb318b-cni-bin-dir\") pod \"calico-node-wfkck\" (UID: \"3389558a-9f7c-4c71-b132-240644eb318b\") " pod="calico-system/calico-node-wfkck"
May 13 04:21:36.566471 kubelet[2587]: I0513 04:21:36.565889 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3389558a-9f7c-4c71-b132-240644eb318b-policysync\") pod \"calico-node-wfkck\" (UID: \"3389558a-9f7c-4c71-b132-240644eb318b\") " pod="calico-system/calico-node-wfkck"
May 13 04:21:36.566471 kubelet[2587]: I0513 04:21:36.565910 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3389558a-9f7c-4c71-b132-240644eb318b-xtables-lock\") pod \"calico-node-wfkck\" (UID: \"3389558a-9f7c-4c71-b132-240644eb318b\") " pod="calico-system/calico-node-wfkck"
May 13 04:21:36.566471 kubelet[2587]: I0513 04:21:36.565931 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3389558a-9f7c-4c71-b132-240644eb318b-var-lib-calico\") pod \"calico-node-wfkck\" (UID: \"3389558a-9f7c-4c71-b132-240644eb318b\") " pod="calico-system/calico-node-wfkck"
May 13 04:21:36.566471 kubelet[2587]: I0513 04:21:36.565952 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3389558a-9f7c-4c71-b132-240644eb318b-cni-net-dir\") pod \"calico-node-wfkck\" (UID: \"3389558a-9f7c-4c71-b132-240644eb318b\") " pod="calico-system/calico-node-wfkck"
May 13 04:21:36.566471 kubelet[2587]: I0513 04:21:36.566000 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp6ww\" (UniqueName: \"kubernetes.io/projected/3389558a-9f7c-4c71-b132-240644eb318b-kube-api-access-fp6ww\") pod \"calico-node-wfkck\" (UID: \"3389558a-9f7c-4c71-b132-240644eb318b\") " pod="calico-system/calico-node-wfkck"
May 13 04:21:36.624666 kubelet[2587]: E0513 04:21:36.624357 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zcwgb" podUID="6b475fce-0503-4d3c-9f11-a776dc4b6dcc"
May 13 04:21:36.668658 kubelet[2587]: I0513 04:21:36.666532 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6b475fce-0503-4d3c-9f11-a776dc4b6dcc-socket-dir\") pod \"csi-node-driver-zcwgb\" (UID: \"6b475fce-0503-4d3c-9f11-a776dc4b6dcc\") " pod="calico-system/csi-node-driver-zcwgb"
May 13 04:21:36.668658 kubelet[2587]: I0513 04:21:36.666585 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2lzq\" (UniqueName: \"kubernetes.io/projected/6b475fce-0503-4d3c-9f11-a776dc4b6dcc-kube-api-access-p2lzq\") pod \"csi-node-driver-zcwgb\" (UID: \"6b475fce-0503-4d3c-9f11-a776dc4b6dcc\") " pod="calico-system/csi-node-driver-zcwgb"
May 13 04:21:36.668658 kubelet[2587]: I0513 04:21:36.666665 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6b475fce-0503-4d3c-9f11-a776dc4b6dcc-registration-dir\") pod \"csi-node-driver-zcwgb\" (UID: \"6b475fce-0503-4d3c-9f11-a776dc4b6dcc\") " pod="calico-system/csi-node-driver-zcwgb"
May 13 04:21:36.668658 kubelet[2587]: I0513 04:21:36.666686 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b475fce-0503-4d3c-9f11-a776dc4b6dcc-kubelet-dir\") pod \"csi-node-driver-zcwgb\" (UID: \"6b475fce-0503-4d3c-9f11-a776dc4b6dcc\") " pod="calico-system/csi-node-driver-zcwgb"
May 13 04:21:36.668658 kubelet[2587]: I0513 04:21:36.666725 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6b475fce-0503-4d3c-9f11-a776dc4b6dcc-varrun\") pod \"csi-node-driver-zcwgb\" (UID: \"6b475fce-0503-4d3c-9f11-a776dc4b6dcc\") " pod="calico-system/csi-node-driver-zcwgb"
May 13 04:21:36.669208 kubelet[2587]: E0513 04:21:36.669082 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 04:21:36.669298 kubelet[2587]: W0513 04:21:36.669270 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 04:21:36.671032 kubelet[2587]: E0513 04:21:36.671015 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the identical driver-call.go:262 / driver-call.go:149 / plugins.go:691 error triple for nodeagent~uds repeats from 04:21:36.673 through 04:21:36.699; duplicate entries omitted ...]
May 13 04:21:36.711238 containerd[1461]: time="2025-05-13T04:21:36.710360170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7ffb795697-j9tvv,Uid:8e4207ac-2b75-4179-aa7a-aa47b753694f,Namespace:calico-system,Attempt:0,}"
May 13 04:21:36.757634 containerd[1461]: time="2025-05-13T04:21:36.756736373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 04:21:36.757634 containerd[1461]: time="2025-05-13T04:21:36.756808183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 04:21:36.757634 containerd[1461]: time="2025-05-13T04:21:36.756826634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 04:21:36.757634 containerd[1461]: time="2025-05-13T04:21:36.756902129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 04:21:36.767922 kubelet[2587]: E0513 04:21:36.767695 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 04:21:36.767922 kubelet[2587]: W0513 04:21:36.767717 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 04:21:36.767922 kubelet[2587]: E0513 04:21:36.767737 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same error triple repeats continuously from 04:21:36.768 through 04:21:36.779; duplicate entries omitted ...]
May 13 04:21:36.792156 systemd[1]: Started cri-containerd-62bf6f010bff3a31a7f05c1bc24ee7204b5f08b993dbf45534c4a75b88328559.scope - libcontainer container 62bf6f010bff3a31a7f05c1bc24ee7204b5f08b993dbf45534c4a75b88328559.
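
The three recurring kubelet errors share one root cause: kubelet probes its FlexVolume plugin directory, the nodeagent~uds/uds binary is not installed yet, the exec therefore produces no output, and unmarshalling an empty byte slice is exactly what yields the "unexpected end of JSON input" text in driver-call.go's entries. A one-liner reproduces the message (standard-library behavior, not kubelet code):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// A missing driver binary produces no stdout; decoding that empty
	// output is what yields the error text in the entries above.
	var status map[string]interface{}
	err := json.Unmarshal([]byte(""), &status)
	fmt.Println(err) // unexpected end of JSON input
}
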
May 13 04:21:36.803846 kubelet[2587]: E0513 04:21:36.803545 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 04:21:36.803846 kubelet[2587]: W0513 04:21:36.803639 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 04:21:36.804714 kubelet[2587]: E0513 04:21:36.803663 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 04:21:36.816297 containerd[1461]: time="2025-05-13T04:21:36.816257848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wfkck,Uid:3389558a-9f7c-4c71-b132-240644eb318b,Namespace:calico-system,Attempt:0,}"
May 13 04:21:36.856706 containerd[1461]: time="2025-05-13T04:21:36.855865857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 04:21:36.856706 containerd[1461]: time="2025-05-13T04:21:36.855942104Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 04:21:36.856706 containerd[1461]: time="2025-05-13T04:21:36.855974317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 04:21:36.856706 containerd[1461]: time="2025-05-13T04:21:36.856060139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 04:21:36.859988 containerd[1461]: time="2025-05-13T04:21:36.859815439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7ffb795697-j9tvv,Uid:8e4207ac-2b75-4179-aa7a-aa47b753694f,Namespace:calico-system,Attempt:0,} returns sandbox id \"62bf6f010bff3a31a7f05c1bc24ee7204b5f08b993dbf45534c4a75b88328559\""
May 13 04:21:36.865710 containerd[1461]: time="2025-05-13T04:21:36.865635148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\""
May 13 04:21:36.880270 systemd[1]: Started cri-containerd-d3286b72d317708f508712c298e1eb719e608382da7e7b569cd8657a68466ecb.scope - libcontainer container d3286b72d317708f508712c298e1eb719e608382da7e7b569cd8657a68466ecb.
May 13 04:21:36.918055 containerd[1461]: time="2025-05-13T04:21:36.918003377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wfkck,Uid:3389558a-9f7c-4c71-b132-240644eb318b,Namespace:calico-system,Attempt:0,} returns sandbox id \"d3286b72d317708f508712c298e1eb719e608382da7e7b569cd8657a68466ecb\""
May 13 04:21:37.459574 kubelet[2587]: E0513 04:21:37.459499 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 04:21:37.459574 kubelet[2587]: W0513 04:21:37.459521 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 04:21:37.459921 kubelet[2587]: E0513 04:21:37.459658 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the driver-call.go:262 / driver-call.go:149 / plugins.go:691 error triple for nodeagent~uds repeats from 04:21:37.460 through 04:21:37.467; duplicate entries omitted ...]
May 13 04:21:37.772091 kubelet[2587]: E0513 04:21:37.771859 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 04:21:37.772091 kubelet[2587]: W0513 04:21:37.771899 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 04:21:37.772091 kubelet[2587]: E0513 04:21:37.771932 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same error triple repeats from 04:21:37.774 through 04:21:37.784; duplicate entries omitted ...]
Error: unexpected end of JSON input" May 13 04:21:37.785717 kubelet[2587]: E0513 04:21:37.785654 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:21:37.786251 kubelet[2587]: W0513 04:21:37.785723 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:21:37.786251 kubelet[2587]: E0513 04:21:37.785751 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:21:37.786251 kubelet[2587]: E0513 04:21:37.786179 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:21:37.786251 kubelet[2587]: W0513 04:21:37.786199 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:21:37.786251 kubelet[2587]: E0513 04:21:37.786221 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:21:37.787665 kubelet[2587]: E0513 04:21:37.787008 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:21:37.787665 kubelet[2587]: W0513 04:21:37.787035 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:21:37.787665 kubelet[2587]: E0513 04:21:37.787059 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:21:37.788284 kubelet[2587]: E0513 04:21:37.788228 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:21:37.788284 kubelet[2587]: W0513 04:21:37.788258 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:21:37.788859 kubelet[2587]: E0513 04:21:37.788286 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 04:21:38.687480 kubelet[2587]: E0513 04:21:38.686671 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zcwgb" podUID="6b475fce-0503-4d3c-9f11-a776dc4b6dcc" May 13 04:21:40.087721 containerd[1461]: time="2025-05-13T04:21:40.087492755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:21:40.088814 containerd[1461]: time="2025-05-13T04:21:40.088767434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 13 04:21:40.090044 containerd[1461]: time="2025-05-13T04:21:40.090000523Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:21:40.093158 containerd[1461]: time="2025-05-13T04:21:40.093114508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:21:40.093786 containerd[1461]: time="2025-05-13T04:21:40.093745808Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 3.228071615s" May 13 04:21:40.093828 containerd[1461]: time="2025-05-13T04:21:40.093785376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 13 04:21:40.095576 containerd[1461]: time="2025-05-13T04:21:40.095556535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 13 04:21:40.110465 containerd[1461]: time="2025-05-13T04:21:40.110390163Z" level=info msg="CreateContainer within sandbox \"62bf6f010bff3a31a7f05c1bc24ee7204b5f08b993dbf45534c4a75b88328559\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 13 04:21:40.137935 containerd[1461]: time="2025-05-13T04:21:40.137898367Z" level=info msg="CreateContainer within sandbox \"62bf6f010bff3a31a7f05c1bc24ee7204b5f08b993dbf45534c4a75b88328559\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fa93dd90780fbc17762f80e376632f35164f4489df45870d888bedba8f6d69a8\"" May 13 04:21:40.139905 containerd[1461]: time="2025-05-13T04:21:40.138757617Z" level=info msg="StartContainer for \"fa93dd90780fbc17762f80e376632f35164f4489df45870d888bedba8f6d69a8\"" May 13 04:21:40.168103 systemd[1]: Started cri-containerd-fa93dd90780fbc17762f80e376632f35164f4489df45870d888bedba8f6d69a8.scope - libcontainer container fa93dd90780fbc17762f80e376632f35164f4489df45870d888bedba8f6d69a8. 
May 13 04:21:40.228739 containerd[1461]: time="2025-05-13T04:21:40.228697311Z" level=info msg="StartContainer for \"fa93dd90780fbc17762f80e376632f35164f4489df45870d888bedba8f6d69a8\" returns successfully" May 13 04:21:40.686503 kubelet[2587]: E0513 04:21:40.686428 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zcwgb" podUID="6b475fce-0503-4d3c-9f11-a776dc4b6dcc" May 13 04:21:40.808636 kubelet[2587]: E0513 04:21:40.808594 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:21:40.808731 kubelet[2587]: W0513 04:21:40.808638 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:21:40.808780 kubelet[2587]: E0513 04:21:40.808720 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:21:40.933159 kubelet[2587]: E0513 04:21:40.933060 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:21:40.933159 kubelet[2587]: W0513 04:21:40.933121 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:21:40.933810 kubelet[2587]: E0513 04:21:40.933163 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 04:21:41.776557 kubelet[2587]: I0513 04:21:41.776506 2587 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 04:21:41.827490 kubelet[2587]: E0513 04:21:41.826786 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:21:41.827490 kubelet[2587]: W0513 04:21:41.826823 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:21:41.827490 kubelet[2587]: E0513 04:21:41.826897 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:21:41.926734 kubelet[2587]: E0513 04:21:41.926352 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:21:41.926734 kubelet[2587]: W0513 04:21:41.926389 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:21:41.926734 kubelet[2587]: E0513 04:21:41.926468 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 04:21:41.947747 kubelet[2587]: E0513 04:21:41.947721 2587 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 04:21:41.947892 kubelet[2587]: W0513 04:21:41.947867 2587 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 04:21:41.948111 kubelet[2587]: E0513 04:21:41.948083 2587 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 04:21:42.178259 containerd[1461]: time="2025-05-13T04:21:42.178202825Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:21:42.179636 containerd[1461]: time="2025-05-13T04:21:42.179449182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 13 04:21:42.182129 containerd[1461]: time="2025-05-13T04:21:42.180930565Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:21:42.184163 containerd[1461]: time="2025-05-13T04:21:42.183375275Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:21:42.184163 containerd[1461]: time="2025-05-13T04:21:42.184044563Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 2.08837835s" May 13 04:21:42.184163 containerd[1461]: time="2025-05-13T04:21:42.184081927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 13 04:21:42.186152 containerd[1461]: time="2025-05-13T04:21:42.186118992Z" level=info msg="CreateContainer within sandbox \"d3286b72d317708f508712c298e1eb719e608382da7e7b569cd8657a68466ecb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 04:21:42.209041 containerd[1461]: time="2025-05-13T04:21:42.209006332Z" level=info msg="CreateContainer within sandbox \"d3286b72d317708f508712c298e1eb719e608382da7e7b569cd8657a68466ecb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d3a9e523486aac22944d1150af8311d6ede5ed1c23179bea0bd243a89f9a4b8d\"" May 13 04:21:42.212356 containerd[1461]: time="2025-05-13T04:21:42.212332527Z" level=info msg="StartContainer for \"d3a9e523486aac22944d1150af8311d6ede5ed1c23179bea0bd243a89f9a4b8d\"" May 13 04:21:42.250536 systemd[1]: run-containerd-runc-k8s.io-d3a9e523486aac22944d1150af8311d6ede5ed1c23179bea0bd243a89f9a4b8d-runc.1WFd5r.mount: Deactivated successfully. May 13 04:21:42.258106 systemd[1]: Started cri-containerd-d3a9e523486aac22944d1150af8311d6ede5ed1c23179bea0bd243a89f9a4b8d.scope - libcontainer container d3a9e523486aac22944d1150af8311d6ede5ed1c23179bea0bd243a89f9a4b8d. May 13 04:21:42.292688 containerd[1461]: time="2025-05-13T04:21:42.292636781Z" level=info msg="StartContainer for \"d3a9e523486aac22944d1150af8311d6ede5ed1c23179bea0bd243a89f9a4b8d\" returns successfully" May 13 04:21:42.301550 systemd[1]: cri-containerd-d3a9e523486aac22944d1150af8311d6ede5ed1c23179bea0bd243a89f9a4b8d.scope: Deactivated successfully. 
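Annotation: the wall of kubelet errors above is one failure repeated by the FlexVolume plugin prober. driver-call.go execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the binary does not exist yet, stdout comes back empty, and unmarshalling "" produces "unexpected end of JSON input". The flexvol-driver container started just above is what installs that binary. For orientation, a minimal sketch (illustrative only, not the real nodeagent~uds driver) of the init handshake the prober expects, per the FlexVolume convention of a JSON DriverStatus object on stdout:

    // uds_stub.go - hypothetical FlexVolume driver stub, NOT the actual
    // nodeagent~uds binary. The kubelet invokes the driver as "<driver> init"
    // and parses a JSON DriverStatus from stdout; empty stdout is exactly
    // what yields "unexpected end of JSON input" in the entries above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type driverStatus struct {
        Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"` // reported on init
    }

    func main() {
        if len(os.Args) < 2 {
            os.Exit(1)
        }
        switch os.Args[1] {
        case "init":
            out, _ := json.Marshal(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            fmt.Println(string(out))
        default:
            out, _ := json.Marshal(driverStatus{Status: "Not supported"})
            fmt.Println(string(out))
        }
    }

Once a driver answers init with status Success, the prober has a plugin to register and the probe errors seen here stop.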
May 13 04:21:42.686753 kubelet[2587]: E0513 04:21:42.686608 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zcwgb" podUID="6b475fce-0503-4d3c-9f11-a776dc4b6dcc" May 13 04:21:42.936245 kubelet[2587]: I0513 04:21:42.935753 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7ffb795697-j9tvv" podStartSLOduration=3.7048953190000002 podStartE2EDuration="6.935723859s" podCreationTimestamp="2025-05-13 04:21:36 +0000 UTC" firstStartedPulling="2025-05-13 04:21:36.864048202 +0000 UTC m=+11.328223582" lastFinishedPulling="2025-05-13 04:21:40.094876742 +0000 UTC m=+14.559052122" observedRunningTime="2025-05-13 04:21:40.790852546 +0000 UTC m=+15.255027926" watchObservedRunningTime="2025-05-13 04:21:42.935723859 +0000 UTC m=+17.399899289" May 13 04:21:42.978100 containerd[1461]: time="2025-05-13T04:21:42.977567448Z" level=info msg="shim disconnected" id=d3a9e523486aac22944d1150af8311d6ede5ed1c23179bea0bd243a89f9a4b8d namespace=k8s.io May 13 04:21:42.978100 containerd[1461]: time="2025-05-13T04:21:42.977680152Z" level=warning msg="cleaning up after shim disconnected" id=d3a9e523486aac22944d1150af8311d6ede5ed1c23179bea0bd243a89f9a4b8d namespace=k8s.io May 13 04:21:42.978100 containerd[1461]: time="2025-05-13T04:21:42.977702301Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 04:21:43.008119 containerd[1461]: time="2025-05-13T04:21:43.006803375Z" level=warning msg="cleanup warnings time=\"2025-05-13T04:21:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 13 04:21:43.202902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3a9e523486aac22944d1150af8311d6ede5ed1c23179bea0bd243a89f9a4b8d-rootfs.mount: Deactivated successfully. 
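Annotation: the pod startup-latency entry above is internally consistent; the SLO figure excludes image-pull time, so

    podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling)
                        = 6.935723859 s - (m=+14.559052122 s - m=+11.328223582 s)
                        = 6.935723859 s - 3.230828540 s
                        = 3.704895319 s

which matches the logged podStartSLOduration=3.7048953190000002 up to binary-float rounding.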
May 13 04:21:43.797433 containerd[1461]: time="2025-05-13T04:21:43.797340413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 13 04:21:44.686771 kubelet[2587]: E0513 04:21:44.686601 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zcwgb" podUID="6b475fce-0503-4d3c-9f11-a776dc4b6dcc" May 13 04:21:46.688598 kubelet[2587]: E0513 04:21:46.686752 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zcwgb" podUID="6b475fce-0503-4d3c-9f11-a776dc4b6dcc" May 13 04:21:48.687466 kubelet[2587]: E0513 04:21:48.687112 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zcwgb" podUID="6b475fce-0503-4d3c-9f11-a776dc4b6dcc" May 13 04:21:49.924753 containerd[1461]: time="2025-05-13T04:21:49.923813315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:21:49.925290 containerd[1461]: time="2025-05-13T04:21:49.925253443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 13 04:21:49.926537 containerd[1461]: time="2025-05-13T04:21:49.926489617Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:21:49.929788 containerd[1461]: time="2025-05-13T04:21:49.929759036Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:21:49.931529 containerd[1461]: time="2025-05-13T04:21:49.931486636Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 6.134063158s" May 13 04:21:49.931529 containerd[1461]: time="2025-05-13T04:21:49.931521198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 13 04:21:49.935620 containerd[1461]: time="2025-05-13T04:21:49.935199136Z" level=info msg="CreateContainer within sandbox \"d3286b72d317708f508712c298e1eb719e608382da7e7b569cd8657a68466ecb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 04:21:49.954715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount679890304.mount: Deactivated successfully. 
May 13 04:21:49.965784 containerd[1461]: time="2025-05-13T04:21:49.965705850Z" level=info msg="CreateContainer within sandbox \"d3286b72d317708f508712c298e1eb719e608382da7e7b569cd8657a68466ecb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fdd58d0809ae9d667a50b1055cc86d5de2fe22dc2813278d8a6f5dfedaed80a1\"" May 13 04:21:49.967779 containerd[1461]: time="2025-05-13T04:21:49.967720462Z" level=info msg="StartContainer for \"fdd58d0809ae9d667a50b1055cc86d5de2fe22dc2813278d8a6f5dfedaed80a1\"" May 13 04:21:50.021106 systemd[1]: Started cri-containerd-fdd58d0809ae9d667a50b1055cc86d5de2fe22dc2813278d8a6f5dfedaed80a1.scope - libcontainer container fdd58d0809ae9d667a50b1055cc86d5de2fe22dc2813278d8a6f5dfedaed80a1. May 13 04:21:50.052527 containerd[1461]: time="2025-05-13T04:21:50.052451218Z" level=info msg="StartContainer for \"fdd58d0809ae9d667a50b1055cc86d5de2fe22dc2813278d8a6f5dfedaed80a1\" returns successfully" May 13 04:21:50.687168 kubelet[2587]: E0513 04:21:50.687098 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zcwgb" podUID="6b475fce-0503-4d3c-9f11-a776dc4b6dcc" May 13 04:21:51.225265 containerd[1461]: time="2025-05-13T04:21:51.225217328Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 04:21:51.229483 systemd[1]: cri-containerd-fdd58d0809ae9d667a50b1055cc86d5de2fe22dc2813278d8a6f5dfedaed80a1.scope: Deactivated successfully. May 13 04:21:51.265045 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fdd58d0809ae9d667a50b1055cc86d5de2fe22dc2813278d8a6f5dfedaed80a1-rootfs.mount: Deactivated successfully. May 13 04:21:51.275051 kubelet[2587]: I0513 04:21:51.274591 2587 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 13 04:21:51.620195 kubelet[2587]: W0513 04:21:51.620123 2587 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081-3-3-n-3bdfb8ea63.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-3-n-3bdfb8ea63.novalocal' and this object May 13 04:21:51.620679 kubelet[2587]: E0513 04:21:51.620242 2587 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4081-3-3-n-3bdfb8ea63.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-3-n-3bdfb8ea63.novalocal' and this object" logger="UnhandledError" May 13 04:21:51.624672 systemd[1]: Created slice kubepods-burstable-pod5ec8c59e_3a9a_4a96_81ba_639baf60f4aa.slice - libcontainer container kubepods-burstable-pod5ec8c59e_3a9a_4a96_81ba_639baf60f4aa.slice. May 13 04:21:51.643749 systemd[1]: Created slice kubepods-burstable-pod180cf766_638d_4296_97c0_3a6bafc8c21a.slice - libcontainer container kubepods-burstable-pod180cf766_638d_4296_97c0_3a6bafc8c21a.slice. 
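Annotation: the reload failure above is containerd's CNI watcher firing on a write to /etc/cni/net.d/calico-kubeconfig, which is a credentials file rather than a network config, and still finding no loadable config in the directory, so the plugin stays uninitialized and kubelet keeps reporting NetworkReady=false. A stdlib-only sketch of that discovery step (the extension set follows the usual CNI convention and is an assumption here; the directory path comes from the log):

    // cniconf_check.go - sketch of the config-discovery step behind the
    // "no network config found in /etc/cni/net.d" error above; the real
    // containerd code path uses the CNI library's equivalent routine.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        const dir = "/etc/cni/net.d" // directory named in the log
        var found []string
        for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
            m, err := filepath.Glob(filepath.Join(dir, pat))
            if err != nil {
                fmt.Fprintln(os.Stderr, err)
                os.Exit(1)
            }
            found = append(found, m...)
        }
        if len(found) == 0 {
            // The state the watcher saw: calico-kubeconfig exists, but it
            // carries no network config, so nothing here is loadable yet.
            fmt.Printf("no network config found in %s\n", dir)
            return
        }
        fmt.Println("loadable CNI config candidates:", found)
    }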
May 13 04:21:51.649052 systemd[1]: Created slice kubepods-besteffort-pod0559de8b_b04b_4f89_a57d_6943cdbc6076.slice - libcontainer container kubepods-besteffort-pod0559de8b_b04b_4f89_a57d_6943cdbc6076.slice. May 13 04:21:51.654752 systemd[1]: Created slice kubepods-besteffort-pod119e7f55_f555_420b_8c77_74d321630fd9.slice - libcontainer container kubepods-besteffort-pod119e7f55_f555_420b_8c77_74d321630fd9.slice. May 13 04:21:51.660302 systemd[1]: Created slice kubepods-besteffort-podca42f9a3_6ff2_4160_851b_5223b0b75593.slice - libcontainer container kubepods-besteffort-podca42f9a3_6ff2_4160_851b_5223b0b75593.slice. May 13 04:21:51.702405 kubelet[2587]: I0513 04:21:51.702375 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh7tz\" (UniqueName: \"kubernetes.io/projected/119e7f55-f555-420b-8c77-74d321630fd9-kube-api-access-rh7tz\") pod \"calico-kube-controllers-69fbd7ff65-7dt7j\" (UID: \"119e7f55-f555-420b-8c77-74d321630fd9\") " pod="calico-system/calico-kube-controllers-69fbd7ff65-7dt7j" May 13 04:21:51.702978 kubelet[2587]: I0513 04:21:51.702785 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvnnd\" (UniqueName: \"kubernetes.io/projected/180cf766-638d-4296-97c0-3a6bafc8c21a-kube-api-access-tvnnd\") pod \"coredns-6f6b679f8f-wtbsq\" (UID: \"180cf766-638d-4296-97c0-3a6bafc8c21a\") " pod="kube-system/coredns-6f6b679f8f-wtbsq" May 13 04:21:51.702978 kubelet[2587]: I0513 04:21:51.702840 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ec8c59e-3a9a-4a96-81ba-639baf60f4aa-config-volume\") pod \"coredns-6f6b679f8f-69vql\" (UID: \"5ec8c59e-3a9a-4a96-81ba-639baf60f4aa\") " pod="kube-system/coredns-6f6b679f8f-69vql" May 13 04:21:51.702978 kubelet[2587]: I0513 04:21:51.702864 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/119e7f55-f555-420b-8c77-74d321630fd9-tigera-ca-bundle\") pod \"calico-kube-controllers-69fbd7ff65-7dt7j\" (UID: \"119e7f55-f555-420b-8c77-74d321630fd9\") " pod="calico-system/calico-kube-controllers-69fbd7ff65-7dt7j" May 13 04:21:51.702978 kubelet[2587]: I0513 04:21:51.702887 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whwbk\" (UniqueName: \"kubernetes.io/projected/0559de8b-b04b-4f89-a57d-6943cdbc6076-kube-api-access-whwbk\") pod \"calico-apiserver-66cb4b4698-4s2rd\" (UID: \"0559de8b-b04b-4f89-a57d-6943cdbc6076\") " pod="calico-apiserver/calico-apiserver-66cb4b4698-4s2rd" May 13 04:21:51.702978 kubelet[2587]: I0513 04:21:51.702942 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/180cf766-638d-4296-97c0-3a6bafc8c21a-config-volume\") pod \"coredns-6f6b679f8f-wtbsq\" (UID: \"180cf766-638d-4296-97c0-3a6bafc8c21a\") " pod="kube-system/coredns-6f6b679f8f-wtbsq" May 13 04:21:51.703294 kubelet[2587]: I0513 04:21:51.703213 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjjjf\" (UniqueName: \"kubernetes.io/projected/5ec8c59e-3a9a-4a96-81ba-639baf60f4aa-kube-api-access-fjjjf\") pod \"coredns-6f6b679f8f-69vql\" (UID: \"5ec8c59e-3a9a-4a96-81ba-639baf60f4aa\") " 
pod="kube-system/coredns-6f6b679f8f-69vql" May 13 04:21:51.703294 kubelet[2587]: I0513 04:21:51.703361 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ca42f9a3-6ff2-4160-851b-5223b0b75593-calico-apiserver-certs\") pod \"calico-apiserver-66cb4b4698-8cvfj\" (UID: \"ca42f9a3-6ff2-4160-851b-5223b0b75593\") " pod="calico-apiserver/calico-apiserver-66cb4b4698-8cvfj" May 13 04:21:51.703294 kubelet[2587]: I0513 04:21:51.703396 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0559de8b-b04b-4f89-a57d-6943cdbc6076-calico-apiserver-certs\") pod \"calico-apiserver-66cb4b4698-4s2rd\" (UID: \"0559de8b-b04b-4f89-a57d-6943cdbc6076\") " pod="calico-apiserver/calico-apiserver-66cb4b4698-4s2rd" May 13 04:21:51.703622 kubelet[2587]: I0513 04:21:51.703501 2587 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsqs9\" (UniqueName: \"kubernetes.io/projected/ca42f9a3-6ff2-4160-851b-5223b0b75593-kube-api-access-gsqs9\") pod \"calico-apiserver-66cb4b4698-8cvfj\" (UID: \"ca42f9a3-6ff2-4160-851b-5223b0b75593\") " pod="calico-apiserver/calico-apiserver-66cb4b4698-8cvfj" May 13 04:21:52.056502 containerd[1461]: time="2025-05-13T04:21:52.056097061Z" level=info msg="shim disconnected" id=fdd58d0809ae9d667a50b1055cc86d5de2fe22dc2813278d8a6f5dfedaed80a1 namespace=k8s.io May 13 04:21:52.056502 containerd[1461]: time="2025-05-13T04:21:52.056330461Z" level=warning msg="cleaning up after shim disconnected" id=fdd58d0809ae9d667a50b1055cc86d5de2fe22dc2813278d8a6f5dfedaed80a1 namespace=k8s.io May 13 04:21:52.057610 containerd[1461]: time="2025-05-13T04:21:52.056800547Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 04:21:52.252708 containerd[1461]: time="2025-05-13T04:21:52.252326278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66cb4b4698-4s2rd,Uid:0559de8b-b04b-4f89-a57d-6943cdbc6076,Namespace:calico-apiserver,Attempt:0,}" May 13 04:21:52.261861 containerd[1461]: time="2025-05-13T04:21:52.261791368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69fbd7ff65-7dt7j,Uid:119e7f55-f555-420b-8c77-74d321630fd9,Namespace:calico-system,Attempt:0,}" May 13 04:21:52.263310 containerd[1461]: time="2025-05-13T04:21:52.263256454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66cb4b4698-8cvfj,Uid:ca42f9a3-6ff2-4160-851b-5223b0b75593,Namespace:calico-apiserver,Attempt:0,}" May 13 04:21:52.406225 containerd[1461]: time="2025-05-13T04:21:52.406170431Z" level=error msg="Failed to destroy network for sandbox \"c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:21:52.408459 containerd[1461]: time="2025-05-13T04:21:52.408305903Z" level=error msg="encountered an error cleaning up failed sandbox \"c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:21:52.408459 containerd[1461]: 
time="2025-05-13T04:21:52.408363637Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66cb4b4698-4s2rd,Uid:0559de8b-b04b-4f89-a57d-6943cdbc6076,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:21:52.409191 kubelet[2587]: E0513 04:21:52.408704 2587 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:21:52.409191 kubelet[2587]: E0513 04:21:52.408779 2587 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66cb4b4698-4s2rd" May 13 04:21:52.409191 kubelet[2587]: E0513 04:21:52.408807 2587 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66cb4b4698-4s2rd" May 13 04:21:52.409315 kubelet[2587]: E0513 04:21:52.408858 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66cb4b4698-4s2rd_calico-apiserver(0559de8b-b04b-4f89-a57d-6943cdbc6076)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66cb4b4698-4s2rd_calico-apiserver(0559de8b-b04b-4f89-a57d-6943cdbc6076)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66cb4b4698-4s2rd" podUID="0559de8b-b04b-4f89-a57d-6943cdbc6076" May 13 04:21:52.421746 containerd[1461]: time="2025-05-13T04:21:52.421678274Z" level=error msg="Failed to destroy network for sandbox \"da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:21:52.422180 containerd[1461]: time="2025-05-13T04:21:52.422124216Z" level=error msg="encountered an error cleaning up failed sandbox \"da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:21:52.422245 containerd[1461]: time="2025-05-13T04:21:52.422202046Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66cb4b4698-8cvfj,Uid:ca42f9a3-6ff2-4160-851b-5223b0b75593,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:21:52.422758 kubelet[2587]: E0513 04:21:52.422384 2587 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:21:52.422758 kubelet[2587]: E0513 04:21:52.422443 2587 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66cb4b4698-8cvfj" May 13 04:21:52.422758 kubelet[2587]: E0513 04:21:52.422464 2587 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66cb4b4698-8cvfj" May 13 04:21:52.422938 kubelet[2587]: E0513 04:21:52.422502 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66cb4b4698-8cvfj_calico-apiserver(ca42f9a3-6ff2-4160-851b-5223b0b75593)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66cb4b4698-8cvfj_calico-apiserver(ca42f9a3-6ff2-4160-851b-5223b0b75593)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66cb4b4698-8cvfj" podUID="ca42f9a3-6ff2-4160-851b-5223b0b75593" May 13 04:21:52.426277 containerd[1461]: time="2025-05-13T04:21:52.426234141Z" level=error msg="Failed to destroy network for sandbox \"169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:21:52.426602 containerd[1461]: time="2025-05-13T04:21:52.426563855Z" level=error msg="encountered an error cleaning up failed sandbox \"169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d\", marking 
sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:21:52.426655 containerd[1461]: time="2025-05-13T04:21:52.426611922Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69fbd7ff65-7dt7j,Uid:119e7f55-f555-420b-8c77-74d321630fd9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:21:52.426926 kubelet[2587]: E0513 04:21:52.426802 2587 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:21:52.426926 kubelet[2587]: E0513 04:21:52.426837 2587 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69fbd7ff65-7dt7j" May 13 04:21:52.426926 kubelet[2587]: E0513 04:21:52.426880 2587 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69fbd7ff65-7dt7j" May 13 04:21:52.427170 kubelet[2587]: E0513 04:21:52.427081 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-69fbd7ff65-7dt7j_calico-system(119e7f55-f555-420b-8c77-74d321630fd9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-69fbd7ff65-7dt7j_calico-system(119e7f55-f555-420b-8c77-74d321630fd9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69fbd7ff65-7dt7j" podUID="119e7f55-f555-420b-8c77-74d321630fd9" May 13 04:21:52.699619 systemd[1]: Created slice kubepods-besteffort-pod6b475fce_0503_4d3c_9f11_a776dc4b6dcc.slice - libcontainer container kubepods-besteffort-pod6b475fce_0503_4d3c_9f11_a776dc4b6dcc.slice. 
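Annotation: every sandbox failure in this stretch bottoms out in the same stat. The Calico CNI plugin reads /var/lib/calico/nodename to learn which Calico node object it is acting for, and, as the error text itself says, that file appears only once the calico/node container is running with /var/lib/calico/ mounted; the node image finishes pulling at 04:22:01 further below, so until then every sandbox add/delete fails and kubelet resyncs the affected pods. A sketch of that readiness dependency (hypothetical helper, not Calico code):

    // nodename_wait.go - sketch of the dependency the CNI plugin enforces in
    // the errors above: poll for /var/lib/calico/nodename, the file written
    // by calico/node at startup.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func main() {
        const path = "/var/lib/calico/nodename"
        for {
            if b, err := os.ReadFile(path); err == nil {
                fmt.Printf("calico/node is up; nodename=%q\n", string(b))
                return
            } else if !os.IsNotExist(err) {
                fmt.Fprintln(os.Stderr, err)
                os.Exit(1)
            }
            // Same condition the sandbox errors report; retry, as kubelet does.
            time.Sleep(2 * time.Second)
        }
    }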
May 13 04:21:52.707105 containerd[1461]: time="2025-05-13T04:21:52.707016005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zcwgb,Uid:6b475fce-0503-4d3c-9f11-a776dc4b6dcc,Namespace:calico-system,Attempt:0,}" May 13 04:21:52.810047 kubelet[2587]: E0513 04:21:52.809938 2587 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 13 04:21:52.810047 kubelet[2587]: E0513 04:21:52.810043 2587 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5ec8c59e-3a9a-4a96-81ba-639baf60f4aa-config-volume podName:5ec8c59e-3a9a-4a96-81ba-639baf60f4aa nodeName:}" failed. No retries permitted until 2025-05-13 04:21:53.31002506 +0000 UTC m=+27.774200440 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5ec8c59e-3a9a-4a96-81ba-639baf60f4aa-config-volume") pod "coredns-6f6b679f8f-69vql" (UID: "5ec8c59e-3a9a-4a96-81ba-639baf60f4aa") : failed to sync configmap cache: timed out waiting for the condition May 13 04:21:52.817030 kubelet[2587]: E0513 04:21:52.815535 2587 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 13 04:21:52.818095 kubelet[2587]: E0513 04:21:52.817067 2587 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/180cf766-638d-4296-97c0-3a6bafc8c21a-config-volume podName:180cf766-638d-4296-97c0-3a6bafc8c21a nodeName:}" failed. No retries permitted until 2025-05-13 04:21:53.317046734 +0000 UTC m=+27.781222114 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/180cf766-638d-4296-97c0-3a6bafc8c21a-config-volume") pod "coredns-6f6b679f8f-wtbsq" (UID: "180cf766-638d-4296-97c0-3a6bafc8c21a") : failed to sync configmap cache: timed out waiting for the condition May 13 04:21:52.828162 containerd[1461]: time="2025-05-13T04:21:52.828078484Z" level=error msg="Failed to destroy network for sandbox \"cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:21:52.828564 containerd[1461]: time="2025-05-13T04:21:52.828482811Z" level=error msg="encountered an error cleaning up failed sandbox \"cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:21:52.828654 containerd[1461]: time="2025-05-13T04:21:52.828561883Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zcwgb,Uid:6b475fce-0503-4d3c-9f11-a776dc4b6dcc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:21:52.829009 kubelet[2587]: E0513 04:21:52.828767 2587 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:21:52.829009 kubelet[2587]: E0513 04:21:52.828827 2587 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zcwgb" May 13 04:21:52.829009 kubelet[2587]: E0513 04:21:52.828849 2587 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zcwgb" May 13 04:21:52.829554 kubelet[2587]: E0513 04:21:52.828896 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zcwgb_calico-system(6b475fce-0503-4d3c-9f11-a776dc4b6dcc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zcwgb_calico-system(6b475fce-0503-4d3c-9f11-a776dc4b6dcc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zcwgb" podUID="6b475fce-0503-4d3c-9f11-a776dc4b6dcc" May 13 04:21:52.833898 kubelet[2587]: I0513 04:21:52.833560 2587 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" May 13 04:21:52.834810 containerd[1461]: time="2025-05-13T04:21:52.834358143Z" level=info msg="StopPodSandbox for \"cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c\"" May 13 04:21:52.834810 containerd[1461]: time="2025-05-13T04:21:52.834548986Z" level=info msg="Ensure that sandbox cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c in task-service has been cleanup successfully" May 13 04:21:52.837378 kubelet[2587]: I0513 04:21:52.836569 2587 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" May 13 04:21:52.838237 containerd[1461]: time="2025-05-13T04:21:52.837425773Z" level=info msg="StopPodSandbox for \"da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30\"" May 13 04:21:52.838237 containerd[1461]: time="2025-05-13T04:21:52.837763029Z" level=info msg="Ensure that sandbox da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30 in task-service has been cleanup successfully" May 13 04:21:52.851817 containerd[1461]: time="2025-05-13T04:21:52.851454374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 13 04:21:52.852610 kubelet[2587]: I0513 04:21:52.852571 2587 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" May 13 04:21:52.853136 containerd[1461]: time="2025-05-13T04:21:52.853115052Z" level=info msg="StopPodSandbox for \"169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d\"" May 13 04:21:52.853550 containerd[1461]: time="2025-05-13T04:21:52.853302660Z" level=info msg="Ensure that sandbox 169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d in task-service has been cleanup successfully" May 13 04:21:52.858825 kubelet[2587]: I0513 04:21:52.858764 2587 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" May 13 04:21:52.860053 containerd[1461]: time="2025-05-13T04:21:52.860024345Z" level=info msg="StopPodSandbox for \"c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225\"" May 13 04:21:52.863573 containerd[1461]: time="2025-05-13T04:21:52.863534801Z" level=info msg="Ensure that sandbox c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225 in task-service has been cleanup successfully" May 13 04:21:52.929426 containerd[1461]: time="2025-05-13T04:21:52.929371421Z" level=error msg="StopPodSandbox for \"da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30\" failed" error="failed to destroy network for sandbox \"da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:21:52.929889 kubelet[2587]: E0513 04:21:52.929828 2587 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" May 13 04:21:52.929953 kubelet[2587]: E0513 04:21:52.929903 2587 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30"} May 13 04:21:52.930152 kubelet[2587]: E0513 04:21:52.930000 2587 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ca42f9a3-6ff2-4160-851b-5223b0b75593\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 04:21:52.930152 kubelet[2587]: E0513 04:21:52.930030 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ca42f9a3-6ff2-4160-851b-5223b0b75593\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66cb4b4698-8cvfj" 
podUID="ca42f9a3-6ff2-4160-851b-5223b0b75593" May 13 04:21:52.935799 containerd[1461]: time="2025-05-13T04:21:52.935756200Z" level=error msg="StopPodSandbox for \"cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c\" failed" error="failed to destroy network for sandbox \"cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:21:52.936415 kubelet[2587]: E0513 04:21:52.936342 2587 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" May 13 04:21:52.936604 kubelet[2587]: E0513 04:21:52.936526 2587 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c"} May 13 04:21:52.937076 kubelet[2587]: E0513 04:21:52.937002 2587 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6b475fce-0503-4d3c-9f11-a776dc4b6dcc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 04:21:52.937076 kubelet[2587]: E0513 04:21:52.937042 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6b475fce-0503-4d3c-9f11-a776dc4b6dcc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zcwgb" podUID="6b475fce-0503-4d3c-9f11-a776dc4b6dcc" May 13 04:21:52.940433 containerd[1461]: time="2025-05-13T04:21:52.940400095Z" level=error msg="StopPodSandbox for \"169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d\" failed" error="failed to destroy network for sandbox \"169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:21:52.940858 kubelet[2587]: E0513 04:21:52.940712 2587 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" May 13 04:21:52.940858 kubelet[2587]: E0513 
04:21:52.940769 2587 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d"} May 13 04:21:52.940858 kubelet[2587]: E0513 04:21:52.940801 2587 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"119e7f55-f555-420b-8c77-74d321630fd9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 04:21:52.940858 kubelet[2587]: E0513 04:21:52.940824 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"119e7f55-f555-420b-8c77-74d321630fd9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69fbd7ff65-7dt7j" podUID="119e7f55-f555-420b-8c77-74d321630fd9" May 13 04:21:52.949495 containerd[1461]: time="2025-05-13T04:21:52.949466138Z" level=error msg="StopPodSandbox for \"c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225\" failed" error="failed to destroy network for sandbox \"c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:21:52.949861 kubelet[2587]: E0513 04:21:52.949792 2587 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" May 13 04:21:52.949917 kubelet[2587]: E0513 04:21:52.949860 2587 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225"} May 13 04:21:52.949917 kubelet[2587]: E0513 04:21:52.949903 2587 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0559de8b-b04b-4f89-a57d-6943cdbc6076\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 04:21:52.950026 kubelet[2587]: E0513 04:21:52.949931 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0559de8b-b04b-4f89-a57d-6943cdbc6076\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66cb4b4698-4s2rd" podUID="0559de8b-b04b-4f89-a57d-6943cdbc6076" May 13 04:21:53.267351 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d-shm.mount: Deactivated successfully. May 13 04:21:53.267591 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30-shm.mount: Deactivated successfully. May 13 04:21:53.267752 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225-shm.mount: Deactivated successfully. May 13 04:21:53.439490 containerd[1461]: time="2025-05-13T04:21:53.439365776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-69vql,Uid:5ec8c59e-3a9a-4a96-81ba-639baf60f4aa,Namespace:kube-system,Attempt:0,}" May 13 04:21:53.447730 containerd[1461]: time="2025-05-13T04:21:53.447614226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wtbsq,Uid:180cf766-638d-4296-97c0-3a6bafc8c21a,Namespace:kube-system,Attempt:0,}" May 13 04:21:53.582923 containerd[1461]: time="2025-05-13T04:21:53.582792088Z" level=error msg="Failed to destroy network for sandbox \"e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:21:53.583353 containerd[1461]: time="2025-05-13T04:21:53.583219098Z" level=error msg="encountered an error cleaning up failed sandbox \"e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:21:53.583353 containerd[1461]: time="2025-05-13T04:21:53.583267985Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-69vql,Uid:5ec8c59e-3a9a-4a96-81ba-639baf60f4aa,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:21:53.584133 kubelet[2587]: E0513 04:21:53.583576 2587 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:21:53.584133 kubelet[2587]: E0513 04:21:53.583641 2587 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-69vql" May 13 04:21:53.584133 kubelet[2587]: E0513 04:21:53.583662 2587 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-69vql" May 13 04:21:53.584263 kubelet[2587]: E0513 04:21:53.583712 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-69vql_kube-system(5ec8c59e-3a9a-4a96-81ba-639baf60f4aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-69vql_kube-system(5ec8c59e-3a9a-4a96-81ba-639baf60f4aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-69vql" podUID="5ec8c59e-3a9a-4a96-81ba-639baf60f4aa" May 13 04:21:53.599467 containerd[1461]: time="2025-05-13T04:21:53.599345791Z" level=error msg="Failed to destroy network for sandbox \"4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:21:53.600797 containerd[1461]: time="2025-05-13T04:21:53.599763435Z" level=error msg="encountered an error cleaning up failed sandbox \"4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:21:53.600797 containerd[1461]: time="2025-05-13T04:21:53.599822271Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wtbsq,Uid:180cf766-638d-4296-97c0-3a6bafc8c21a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:21:53.600888 kubelet[2587]: E0513 04:21:53.600021 2587 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:21:53.600888 kubelet[2587]: E0513 04:21:53.600076 2587 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wtbsq" May 13 04:21:53.600888 kubelet[2587]: E0513 04:21:53.600098 2587 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wtbsq" May 13 04:21:53.601022 kubelet[2587]: E0513 04:21:53.600143 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-wtbsq_kube-system(180cf766-638d-4296-97c0-3a6bafc8c21a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-wtbsq_kube-system(180cf766-638d-4296-97c0-3a6bafc8c21a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-wtbsq" podUID="180cf766-638d-4296-97c0-3a6bafc8c21a" May 13 04:21:53.865384 kubelet[2587]: I0513 04:21:53.865198 2587 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" May 13 04:21:53.867927 containerd[1461]: time="2025-05-13T04:21:53.867632229Z" level=info msg="StopPodSandbox for \"e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371\"" May 13 04:21:53.870246 containerd[1461]: time="2025-05-13T04:21:53.870145523Z" level=info msg="Ensure that sandbox e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371 in task-service has been cleanup successfully" May 13 04:21:53.872396 kubelet[2587]: I0513 04:21:53.872227 2587 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" May 13 04:21:53.875001 containerd[1461]: time="2025-05-13T04:21:53.874712181Z" level=info msg="StopPodSandbox for \"4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784\"" May 13 04:21:53.875885 containerd[1461]: time="2025-05-13T04:21:53.875681089Z" level=info msg="Ensure that sandbox 4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784 in task-service has been cleanup successfully" May 13 04:21:53.939973 containerd[1461]: time="2025-05-13T04:21:53.939820642Z" level=error msg="StopPodSandbox for \"e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371\" failed" error="failed to destroy network for sandbox \"e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:21:53.940411 kubelet[2587]: E0513 04:21:53.940257 2587 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" May 13 04:21:53.940411 kubelet[2587]: E0513 04:21:53.940313 2587 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371"} May 13 04:21:53.940411 kubelet[2587]: E0513 04:21:53.940354 2587 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5ec8c59e-3a9a-4a96-81ba-639baf60f4aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 04:21:53.940411 kubelet[2587]: E0513 04:21:53.940381 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5ec8c59e-3a9a-4a96-81ba-639baf60f4aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-69vql" podUID="5ec8c59e-3a9a-4a96-81ba-639baf60f4aa" May 13 04:21:53.945820 containerd[1461]: time="2025-05-13T04:21:53.945768592Z" level=error msg="StopPodSandbox for \"4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784\" failed" error="failed to destroy network for sandbox \"4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 04:21:53.946127 kubelet[2587]: E0513 04:21:53.946080 2587 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" May 13 04:21:53.946185 kubelet[2587]: E0513 04:21:53.946136 2587 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784"} May 13 04:21:53.946185 kubelet[2587]: E0513 04:21:53.946174 2587 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"180cf766-638d-4296-97c0-3a6bafc8c21a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" May 13 04:21:53.946262 kubelet[2587]: E0513 04:21:53.946201 2587 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"180cf766-638d-4296-97c0-3a6bafc8c21a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-wtbsq" podUID="180cf766-638d-4296-97c0-3a6bafc8c21a" May 13 04:21:54.267750 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784-shm.mount: Deactivated successfully. May 13 04:21:54.268019 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371-shm.mount: Deactivated successfully. May 13 04:21:58.800223 kubelet[2587]: I0513 04:21:58.799443 2587 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 04:22:01.550928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount536082647.mount: Deactivated successfully. May 13 04:22:01.608878 containerd[1461]: time="2025-05-13T04:22:01.607870233Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:22:01.609336 containerd[1461]: time="2025-05-13T04:22:01.609302078Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 13 04:22:01.610588 containerd[1461]: time="2025-05-13T04:22:01.610554954Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:22:01.615210 containerd[1461]: time="2025-05-13T04:22:01.615108119Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:22:01.616589 containerd[1461]: time="2025-05-13T04:22:01.616450941Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 8.764950853s" May 13 04:22:01.616589 containerd[1461]: time="2025-05-13T04:22:01.616516090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 13 04:22:01.631770 containerd[1461]: time="2025-05-13T04:22:01.631684261Z" level=info msg="CreateContainer within sandbox \"d3286b72d317708f508712c298e1eb719e608382da7e7b569cd8657a68466ecb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 04:22:01.661971 containerd[1461]: time="2025-05-13T04:22:01.661883934Z" level=info msg="CreateContainer within sandbox \"d3286b72d317708f508712c298e1eb719e608382da7e7b569cd8657a68466ecb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cf46c4791df89e8855357ebd5cab5e4ffa52c8efc6b1836c53588ef7c92a6cf2\"" May 13 
04:22:01.663919 containerd[1461]: time="2025-05-13T04:22:01.662807276Z" level=info msg="StartContainer for \"cf46c4791df89e8855357ebd5cab5e4ffa52c8efc6b1836c53588ef7c92a6cf2\"" May 13 04:22:01.707533 systemd[1]: Started cri-containerd-cf46c4791df89e8855357ebd5cab5e4ffa52c8efc6b1836c53588ef7c92a6cf2.scope - libcontainer container cf46c4791df89e8855357ebd5cab5e4ffa52c8efc6b1836c53588ef7c92a6cf2. May 13 04:22:01.742548 containerd[1461]: time="2025-05-13T04:22:01.742426077Z" level=info msg="StartContainer for \"cf46c4791df89e8855357ebd5cab5e4ffa52c8efc6b1836c53588ef7c92a6cf2\" returns successfully" May 13 04:22:01.816674 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 13 04:22:01.816829 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 13 04:22:03.490008 kernel: bpftool[3898]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 13 04:22:03.832106 systemd-networkd[1374]: vxlan.calico: Link UP May 13 04:22:03.832112 systemd-networkd[1374]: vxlan.calico: Gained carrier May 13 04:22:04.970443 systemd-networkd[1374]: vxlan.calico: Gained IPv6LL May 13 04:22:05.691642 containerd[1461]: time="2025-05-13T04:22:05.691553670Z" level=info msg="StopPodSandbox for \"cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c\"" May 13 04:22:05.692553 containerd[1461]: time="2025-05-13T04:22:05.692404969Z" level=info msg="StopPodSandbox for \"e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371\"" May 13 04:22:05.697921 containerd[1461]: time="2025-05-13T04:22:05.696920669Z" level=info msg="StopPodSandbox for \"169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d\"" May 13 04:22:05.701557 containerd[1461]: time="2025-05-13T04:22:05.697809738Z" level=info msg="StopPodSandbox for \"da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30\"" May 13 04:22:06.691566 containerd[1461]: time="2025-05-13T04:22:06.691419529Z" level=info msg="StopPodSandbox for \"4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784\"" May 13 04:22:06.772121 kubelet[2587]: I0513 04:22:06.770825 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wfkck" podStartSLOduration=6.07284634 podStartE2EDuration="30.770807431s" podCreationTimestamp="2025-05-13 04:21:36 +0000 UTC" firstStartedPulling="2025-05-13 04:21:36.920047734 +0000 UTC m=+11.384223115" lastFinishedPulling="2025-05-13 04:22:01.618008826 +0000 UTC m=+36.082184206" observedRunningTime="2025-05-13 04:22:01.946088376 +0000 UTC m=+36.410263756" watchObservedRunningTime="2025-05-13 04:22:06.770807431 +0000 UTC m=+41.234982811" May 13 04:22:06.922912 containerd[1461]: 2025-05-13 04:22:06.791 [INFO][4049] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" May 13 04:22:06.922912 containerd[1461]: 2025-05-13 04:22:06.793 [INFO][4049] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" iface="eth0" netns="/var/run/netns/cni-6da0c8c6-8df6-ff86-19e6-4ed02405cecf" May 13 04:22:06.922912 containerd[1461]: 2025-05-13 04:22:06.794 [INFO][4049] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" iface="eth0" netns="/var/run/netns/cni-6da0c8c6-8df6-ff86-19e6-4ed02405cecf" May 13 04:22:06.922912 containerd[1461]: 2025-05-13 04:22:06.795 [INFO][4049] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" iface="eth0" netns="/var/run/netns/cni-6da0c8c6-8df6-ff86-19e6-4ed02405cecf" May 13 04:22:06.922912 containerd[1461]: 2025-05-13 04:22:06.796 [INFO][4049] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" May 13 04:22:06.922912 containerd[1461]: 2025-05-13 04:22:06.796 [INFO][4049] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" May 13 04:22:06.922912 containerd[1461]: 2025-05-13 04:22:06.887 [INFO][4102] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" HandleID="k8s-pod-network.e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--69vql-eth0" May 13 04:22:06.922912 containerd[1461]: 2025-05-13 04:22:06.891 [INFO][4102] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:22:06.922912 containerd[1461]: 2025-05-13 04:22:06.892 [INFO][4102] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:22:06.922912 containerd[1461]: 2025-05-13 04:22:06.909 [WARNING][4102] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" HandleID="k8s-pod-network.e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--69vql-eth0" May 13 04:22:06.922912 containerd[1461]: 2025-05-13 04:22:06.909 [INFO][4102] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" HandleID="k8s-pod-network.e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--69vql-eth0" May 13 04:22:06.922912 containerd[1461]: 2025-05-13 04:22:06.911 [INFO][4102] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:22:06.922912 containerd[1461]: 2025-05-13 04:22:06.920 [INFO][4049] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" May 13 04:22:06.925100 containerd[1461]: time="2025-05-13T04:22:06.924219610Z" level=info msg="TearDown network for sandbox \"e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371\" successfully" May 13 04:22:06.925100 containerd[1461]: time="2025-05-13T04:22:06.924249986Z" level=info msg="StopPodSandbox for \"e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371\" returns successfully" May 13 04:22:06.927478 containerd[1461]: time="2025-05-13T04:22:06.927140028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-69vql,Uid:5ec8c59e-3a9a-4a96-81ba-639baf60f4aa,Namespace:kube-system,Attempt:1,}" May 13 04:22:06.929014 systemd[1]: run-netns-cni\x2d6da0c8c6\x2d8df6\x2dff86\x2d19e6\x2d4ed02405cecf.mount: Deactivated successfully. 
May 13 04:22:06.952231 containerd[1461]: 2025-05-13 04:22:06.786 [INFO][4048] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" May 13 04:22:06.952231 containerd[1461]: 2025-05-13 04:22:06.786 [INFO][4048] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" iface="eth0" netns="/var/run/netns/cni-09b0b56f-945e-9e36-7905-8f741adb6805" May 13 04:22:06.952231 containerd[1461]: 2025-05-13 04:22:06.786 [INFO][4048] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" iface="eth0" netns="/var/run/netns/cni-09b0b56f-945e-9e36-7905-8f741adb6805" May 13 04:22:06.952231 containerd[1461]: 2025-05-13 04:22:06.789 [INFO][4048] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" iface="eth0" netns="/var/run/netns/cni-09b0b56f-945e-9e36-7905-8f741adb6805" May 13 04:22:06.952231 containerd[1461]: 2025-05-13 04:22:06.789 [INFO][4048] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" May 13 04:22:06.952231 containerd[1461]: 2025-05-13 04:22:06.790 [INFO][4048] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" May 13 04:22:06.952231 containerd[1461]: 2025-05-13 04:22:06.884 [INFO][4100] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" HandleID="k8s-pod-network.169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--kube--controllers--69fbd7ff65--7dt7j-eth0" May 13 04:22:06.952231 containerd[1461]: 2025-05-13 04:22:06.894 [INFO][4100] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:22:06.952231 containerd[1461]: 2025-05-13 04:22:06.912 [INFO][4100] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:22:06.952231 containerd[1461]: 2025-05-13 04:22:06.940 [WARNING][4100] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" HandleID="k8s-pod-network.169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--kube--controllers--69fbd7ff65--7dt7j-eth0" May 13 04:22:06.952231 containerd[1461]: 2025-05-13 04:22:06.942 [INFO][4100] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" HandleID="k8s-pod-network.169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--kube--controllers--69fbd7ff65--7dt7j-eth0" May 13 04:22:06.952231 containerd[1461]: 2025-05-13 04:22:06.944 [INFO][4100] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:22:06.952231 containerd[1461]: 2025-05-13 04:22:06.949 [INFO][4048] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" May 13 04:22:06.955496 containerd[1461]: time="2025-05-13T04:22:06.953150517Z" level=info msg="TearDown network for sandbox \"169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d\" successfully" May 13 04:22:06.955496 containerd[1461]: time="2025-05-13T04:22:06.953184750Z" level=info msg="StopPodSandbox for \"169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d\" returns successfully" May 13 04:22:06.956613 containerd[1461]: time="2025-05-13T04:22:06.956319933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69fbd7ff65-7dt7j,Uid:119e7f55-f555-420b-8c77-74d321630fd9,Namespace:calico-system,Attempt:1,}" May 13 04:22:06.957494 systemd[1]: run-netns-cni\x2d09b0b56f\x2d945e\x2d9e36\x2d7905\x2d8f741adb6805.mount: Deactivated successfully. May 13 04:22:06.987905 containerd[1461]: 2025-05-13 04:22:06.771 [INFO][4047] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" May 13 04:22:06.987905 containerd[1461]: 2025-05-13 04:22:06.779 [INFO][4047] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" iface="eth0" netns="/var/run/netns/cni-063c1f79-9760-514b-7659-2bb1837260b7" May 13 04:22:06.987905 containerd[1461]: 2025-05-13 04:22:06.781 [INFO][4047] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" iface="eth0" netns="/var/run/netns/cni-063c1f79-9760-514b-7659-2bb1837260b7" May 13 04:22:06.987905 containerd[1461]: 2025-05-13 04:22:06.783 [INFO][4047] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" iface="eth0" netns="/var/run/netns/cni-063c1f79-9760-514b-7659-2bb1837260b7" May 13 04:22:06.987905 containerd[1461]: 2025-05-13 04:22:06.783 [INFO][4047] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" May 13 04:22:06.987905 containerd[1461]: 2025-05-13 04:22:06.783 [INFO][4047] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" May 13 04:22:06.987905 containerd[1461]: 2025-05-13 04:22:06.907 [INFO][4097] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" HandleID="k8s-pod-network.cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-csi--node--driver--zcwgb-eth0" May 13 04:22:06.987905 containerd[1461]: 2025-05-13 04:22:06.908 [INFO][4097] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:22:06.987905 containerd[1461]: 2025-05-13 04:22:06.945 [INFO][4097] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:22:06.987905 containerd[1461]: 2025-05-13 04:22:06.981 [WARNING][4097] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" HandleID="k8s-pod-network.cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-csi--node--driver--zcwgb-eth0" May 13 04:22:06.987905 containerd[1461]: 2025-05-13 04:22:06.981 [INFO][4097] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" HandleID="k8s-pod-network.cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-csi--node--driver--zcwgb-eth0" May 13 04:22:06.987905 containerd[1461]: 2025-05-13 04:22:06.983 [INFO][4097] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:22:06.987905 containerd[1461]: 2025-05-13 04:22:06.986 [INFO][4047] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" May 13 04:22:06.992105 containerd[1461]: time="2025-05-13T04:22:06.991617515Z" level=info msg="TearDown network for sandbox \"cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c\" successfully" May 13 04:22:06.992105 containerd[1461]: time="2025-05-13T04:22:06.991649905Z" level=info msg="StopPodSandbox for \"cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c\" returns successfully" May 13 04:22:06.993508 systemd[1]: run-netns-cni\x2d063c1f79\x2d9760\x2d514b\x2d7659\x2d2bb1837260b7.mount: Deactivated successfully. May 13 04:22:06.995399 containerd[1461]: time="2025-05-13T04:22:06.995373444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zcwgb,Uid:6b475fce-0503-4d3c-9f11-a776dc4b6dcc,Namespace:calico-system,Attempt:1,}" May 13 04:22:07.020345 containerd[1461]: 2025-05-13 04:22:06.776 [INFO][4059] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" May 13 04:22:07.020345 containerd[1461]: 2025-05-13 04:22:06.779 [INFO][4059] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" iface="eth0" netns="/var/run/netns/cni-193860a0-b2b5-6cf5-67e2-4b8752cef4b8" May 13 04:22:07.020345 containerd[1461]: 2025-05-13 04:22:06.779 [INFO][4059] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" iface="eth0" netns="/var/run/netns/cni-193860a0-b2b5-6cf5-67e2-4b8752cef4b8" May 13 04:22:07.020345 containerd[1461]: 2025-05-13 04:22:06.783 [INFO][4059] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" iface="eth0" netns="/var/run/netns/cni-193860a0-b2b5-6cf5-67e2-4b8752cef4b8" May 13 04:22:07.020345 containerd[1461]: 2025-05-13 04:22:06.783 [INFO][4059] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" May 13 04:22:07.020345 containerd[1461]: 2025-05-13 04:22:06.783 [INFO][4059] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" May 13 04:22:07.020345 containerd[1461]: 2025-05-13 04:22:06.917 [INFO][4098] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" HandleID="k8s-pod-network.da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--8cvfj-eth0" May 13 04:22:07.020345 containerd[1461]: 2025-05-13 04:22:06.917 [INFO][4098] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:22:07.020345 containerd[1461]: 2025-05-13 04:22:06.984 [INFO][4098] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:22:07.020345 containerd[1461]: 2025-05-13 04:22:07.008 [WARNING][4098] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" HandleID="k8s-pod-network.da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--8cvfj-eth0" May 13 04:22:07.020345 containerd[1461]: 2025-05-13 04:22:07.008 [INFO][4098] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" HandleID="k8s-pod-network.da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--8cvfj-eth0" May 13 04:22:07.020345 containerd[1461]: 2025-05-13 04:22:07.011 [INFO][4098] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:22:07.020345 containerd[1461]: 2025-05-13 04:22:07.015 [INFO][4059] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" May 13 04:22:07.021398 containerd[1461]: time="2025-05-13T04:22:07.021354909Z" level=info msg="TearDown network for sandbox \"da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30\" successfully" May 13 04:22:07.021516 containerd[1461]: time="2025-05-13T04:22:07.021497934Z" level=info msg="StopPodSandbox for \"da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30\" returns successfully" May 13 04:22:07.023570 containerd[1461]: time="2025-05-13T04:22:07.022791422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66cb4b4698-8cvfj,Uid:ca42f9a3-6ff2-4160-851b-5223b0b75593,Namespace:calico-apiserver,Attempt:1,}" May 13 04:22:07.055319 containerd[1461]: 2025-05-13 04:22:06.815 [INFO][4089] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" May 13 04:22:07.055319 containerd[1461]: 2025-05-13 04:22:06.815 [INFO][4089] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" iface="eth0" netns="/var/run/netns/cni-4ca514e2-e6c3-5436-2695-201a8608bc0e" May 13 04:22:07.055319 containerd[1461]: 2025-05-13 04:22:06.815 [INFO][4089] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" iface="eth0" netns="/var/run/netns/cni-4ca514e2-e6c3-5436-2695-201a8608bc0e" May 13 04:22:07.055319 containerd[1461]: 2025-05-13 04:22:06.816 [INFO][4089] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" iface="eth0" netns="/var/run/netns/cni-4ca514e2-e6c3-5436-2695-201a8608bc0e" May 13 04:22:07.055319 containerd[1461]: 2025-05-13 04:22:06.816 [INFO][4089] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" May 13 04:22:07.055319 containerd[1461]: 2025-05-13 04:22:06.816 [INFO][4089] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" May 13 04:22:07.055319 containerd[1461]: 2025-05-13 04:22:06.935 [INFO][4116] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" HandleID="k8s-pod-network.4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--wtbsq-eth0" May 13 04:22:07.055319 containerd[1461]: 2025-05-13 04:22:06.936 [INFO][4116] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:22:07.055319 containerd[1461]: 2025-05-13 04:22:07.012 [INFO][4116] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:22:07.055319 containerd[1461]: 2025-05-13 04:22:07.038 [WARNING][4116] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" HandleID="k8s-pod-network.4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--wtbsq-eth0" May 13 04:22:07.055319 containerd[1461]: 2025-05-13 04:22:07.039 [INFO][4116] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" HandleID="k8s-pod-network.4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--wtbsq-eth0" May 13 04:22:07.055319 containerd[1461]: 2025-05-13 04:22:07.045 [INFO][4116] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:22:07.055319 containerd[1461]: 2025-05-13 04:22:07.049 [INFO][4089] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" May 13 04:22:07.055878 containerd[1461]: time="2025-05-13T04:22:07.055448605Z" level=info msg="TearDown network for sandbox \"4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784\" successfully" May 13 04:22:07.055878 containerd[1461]: time="2025-05-13T04:22:07.055525427Z" level=info msg="StopPodSandbox for \"4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784\" returns successfully" May 13 04:22:07.058292 containerd[1461]: time="2025-05-13T04:22:07.058063013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wtbsq,Uid:180cf766-638d-4296-97c0-3a6bafc8c21a,Namespace:kube-system,Attempt:1,}" May 13 04:22:07.382752 systemd-networkd[1374]: calibb119b8c4ea: Link UP May 13 04:22:07.383784 systemd-networkd[1374]: calibb119b8c4ea: Gained carrier May 13 04:22:07.413084 containerd[1461]: 2025-05-13 04:22:07.083 [INFO][4130] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--69vql-eth0 coredns-6f6b679f8f- kube-system 5ec8c59e-3a9a-4a96-81ba-639baf60f4aa 756 0 2025-05-13 04:21:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-n-3bdfb8ea63.novalocal coredns-6f6b679f8f-69vql eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibb119b8c4ea [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6779e1150d58487985f0aa6af9b536078213a8ddafc7ede113502dacf6d30dc6" Namespace="kube-system" Pod="coredns-6f6b679f8f-69vql" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--69vql-" May 13 04:22:07.413084 containerd[1461]: 2025-05-13 04:22:07.087 [INFO][4130] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6779e1150d58487985f0aa6af9b536078213a8ddafc7ede113502dacf6d30dc6" Namespace="kube-system" Pod="coredns-6f6b679f8f-69vql" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--69vql-eth0" May 13 04:22:07.413084 containerd[1461]: 2025-05-13 04:22:07.205 [INFO][4189] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6779e1150d58487985f0aa6af9b536078213a8ddafc7ede113502dacf6d30dc6" HandleID="k8s-pod-network.6779e1150d58487985f0aa6af9b536078213a8ddafc7ede113502dacf6d30dc6" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--69vql-eth0" May 13 04:22:07.413084 containerd[1461]: 2025-05-13 04:22:07.258 [INFO][4189] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6779e1150d58487985f0aa6af9b536078213a8ddafc7ede113502dacf6d30dc6" HandleID="k8s-pod-network.6779e1150d58487985f0aa6af9b536078213a8ddafc7ede113502dacf6d30dc6" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--69vql-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004091b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-n-3bdfb8ea63.novalocal", "pod":"coredns-6f6b679f8f-69vql", "timestamp":"2025-05-13 04:22:07.205921593 +0000 UTC"}, Hostname:"ci-4081-3-3-n-3bdfb8ea63.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 04:22:07.413084 
containerd[1461]: 2025-05-13 04:22:07.258 [INFO][4189] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:22:07.413084 containerd[1461]: 2025-05-13 04:22:07.258 [INFO][4189] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:22:07.413084 containerd[1461]: 2025-05-13 04:22:07.258 [INFO][4189] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-3bdfb8ea63.novalocal' May 13 04:22:07.413084 containerd[1461]: 2025-05-13 04:22:07.267 [INFO][4189] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6779e1150d58487985f0aa6af9b536078213a8ddafc7ede113502dacf6d30dc6" host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.413084 containerd[1461]: 2025-05-13 04:22:07.304 [INFO][4189] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.413084 containerd[1461]: 2025-05-13 04:22:07.320 [INFO][4189] ipam/ipam.go 489: Trying affinity for 192.168.11.128/26 host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.413084 containerd[1461]: 2025-05-13 04:22:07.324 [INFO][4189] ipam/ipam.go 155: Attempting to load block cidr=192.168.11.128/26 host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.413084 containerd[1461]: 2025-05-13 04:22:07.332 [INFO][4189] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.11.128/26 host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.413084 containerd[1461]: 2025-05-13 04:22:07.332 [INFO][4189] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.11.128/26 handle="k8s-pod-network.6779e1150d58487985f0aa6af9b536078213a8ddafc7ede113502dacf6d30dc6" host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.413084 containerd[1461]: 2025-05-13 04:22:07.335 [INFO][4189] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6779e1150d58487985f0aa6af9b536078213a8ddafc7ede113502dacf6d30dc6 May 13 04:22:07.413084 containerd[1461]: 2025-05-13 04:22:07.348 [INFO][4189] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.11.128/26 handle="k8s-pod-network.6779e1150d58487985f0aa6af9b536078213a8ddafc7ede113502dacf6d30dc6" host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.413084 containerd[1461]: 2025-05-13 04:22:07.368 [INFO][4189] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.11.129/26] block=192.168.11.128/26 handle="k8s-pod-network.6779e1150d58487985f0aa6af9b536078213a8ddafc7ede113502dacf6d30dc6" host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.413084 containerd[1461]: 2025-05-13 04:22:07.368 [INFO][4189] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.11.129/26] handle="k8s-pod-network.6779e1150d58487985f0aa6af9b536078213a8ddafc7ede113502dacf6d30dc6" host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.413084 containerd[1461]: 2025-05-13 04:22:07.368 [INFO][4189] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
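The repeated "About to acquire / Acquired / Released host-wide IPAM lock" triplets show the concurrent CNI ADDs (plugin invocations [4189], [4199], [4208]) being serialized, so each allocation sees the claims made before it. A sketch of that serialization, assuming a simple in-process mutex (the real lock coordinates separate plugin processes on the host, and the allocator here is a stand-in, not Calico's IPAM):

```go
package main

import (
	"fmt"
	"sync"
)

// allocator serializes claims the way the log's host-wide IPAM lock does:
// concurrent CNI ADDs queue up, and exactly one mutates the block at a time.
type allocator struct {
	mu   sync.Mutex
	next int // next host offset inside the 192.168.11.128/26 block
}

func (a *allocator) claim(pod string) string {
	a.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."
	ip := fmt.Sprintf("192.168.11.%d/26", 128+a.next)
	a.next++
	return pod + " -> " + ip
}

func main() {
	a := &allocator{next: 1} // first claim lands on .129, as in the log
	var wg sync.WaitGroup
	for _, pod := range []string{
		"coredns-6f6b679f8f-69vql",
		"calico-kube-controllers-69fbd7ff65-7dt7j",
		"csi-node-driver-zcwgb",
	} {
		wg.Add(1)
		go func(p string) { defer wg.Done(); fmt.Println(a.claim(p)) }(pod)
	}
	wg.Wait() // which pod gets which address depends on lock order, as in the log
}
```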
May 13 04:22:07.413084 containerd[1461]: 2025-05-13 04:22:07.368 [INFO][4189] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.11.129/26] IPv6=[] ContainerID="6779e1150d58487985f0aa6af9b536078213a8ddafc7ede113502dacf6d30dc6" HandleID="k8s-pod-network.6779e1150d58487985f0aa6af9b536078213a8ddafc7ede113502dacf6d30dc6" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--69vql-eth0" May 13 04:22:07.414284 containerd[1461]: 2025-05-13 04:22:07.373 [INFO][4130] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6779e1150d58487985f0aa6af9b536078213a8ddafc7ede113502dacf6d30dc6" Namespace="kube-system" Pod="coredns-6f6b679f8f-69vql" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--69vql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--69vql-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5ec8c59e-3a9a-4a96-81ba-639baf60f4aa", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 21, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3bdfb8ea63.novalocal", ContainerID:"", Pod:"coredns-6f6b679f8f-69vql", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb119b8c4ea", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:22:07.414284 containerd[1461]: 2025-05-13 04:22:07.373 [INFO][4130] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.11.129/32] ContainerID="6779e1150d58487985f0aa6af9b536078213a8ddafc7ede113502dacf6d30dc6" Namespace="kube-system" Pod="coredns-6f6b679f8f-69vql" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--69vql-eth0" May 13 04:22:07.414284 containerd[1461]: 2025-05-13 04:22:07.374 [INFO][4130] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb119b8c4ea ContainerID="6779e1150d58487985f0aa6af9b536078213a8ddafc7ede113502dacf6d30dc6" Namespace="kube-system" Pod="coredns-6f6b679f8f-69vql" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--69vql-eth0" May 13 04:22:07.414284 containerd[1461]: 2025-05-13 04:22:07.386 [INFO][4130] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6779e1150d58487985f0aa6af9b536078213a8ddafc7ede113502dacf6d30dc6" 
Namespace="kube-system" Pod="coredns-6f6b679f8f-69vql" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--69vql-eth0" May 13 04:22:07.414284 containerd[1461]: 2025-05-13 04:22:07.386 [INFO][4130] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6779e1150d58487985f0aa6af9b536078213a8ddafc7ede113502dacf6d30dc6" Namespace="kube-system" Pod="coredns-6f6b679f8f-69vql" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--69vql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--69vql-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5ec8c59e-3a9a-4a96-81ba-639baf60f4aa", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 21, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3bdfb8ea63.novalocal", ContainerID:"6779e1150d58487985f0aa6af9b536078213a8ddafc7ede113502dacf6d30dc6", Pod:"coredns-6f6b679f8f-69vql", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb119b8c4ea", MAC:"a6:2d:1f:99:cd:d6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:22:07.414284 containerd[1461]: 2025-05-13 04:22:07.408 [INFO][4130] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6779e1150d58487985f0aa6af9b536078213a8ddafc7ede113502dacf6d30dc6" Namespace="kube-system" Pod="coredns-6f6b679f8f-69vql" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--69vql-eth0" May 13 04:22:07.470734 containerd[1461]: time="2025-05-13T04:22:07.469992627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 04:22:07.470734 containerd[1461]: time="2025-05-13T04:22:07.470115874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 04:22:07.470734 containerd[1461]: time="2025-05-13T04:22:07.470155287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:22:07.470734 containerd[1461]: time="2025-05-13T04:22:07.470282903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:22:07.484692 systemd-networkd[1374]: cali3c6fa05ff04: Link UP May 13 04:22:07.488315 systemd-networkd[1374]: cali3c6fa05ff04: Gained carrier May 13 04:22:07.500789 systemd[1]: Started cri-containerd-6779e1150d58487985f0aa6af9b536078213a8ddafc7ede113502dacf6d30dc6.scope - libcontainer container 6779e1150d58487985f0aa6af9b536078213a8ddafc7ede113502dacf6d30dc6. May 13 04:22:07.532315 containerd[1461]: 2025-05-13 04:22:07.141 [INFO][4141] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--kube--controllers--69fbd7ff65--7dt7j-eth0 calico-kube-controllers-69fbd7ff65- calico-system 119e7f55-f555-420b-8c77-74d321630fd9 755 0 2025-05-13 04:21:36 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:69fbd7ff65 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-3-n-3bdfb8ea63.novalocal calico-kube-controllers-69fbd7ff65-7dt7j eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3c6fa05ff04 [] []}} ContainerID="8e2f6f9019cb19e056e64de5f7fec56b97af72560fbf9bc22ef2fb9114a49245" Namespace="calico-system" Pod="calico-kube-controllers-69fbd7ff65-7dt7j" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--kube--controllers--69fbd7ff65--7dt7j-" May 13 04:22:07.532315 containerd[1461]: 2025-05-13 04:22:07.142 [INFO][4141] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8e2f6f9019cb19e056e64de5f7fec56b97af72560fbf9bc22ef2fb9114a49245" Namespace="calico-system" Pod="calico-kube-controllers-69fbd7ff65-7dt7j" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--kube--controllers--69fbd7ff65--7dt7j-eth0" May 13 04:22:07.532315 containerd[1461]: 2025-05-13 04:22:07.251 [INFO][4199] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e2f6f9019cb19e056e64de5f7fec56b97af72560fbf9bc22ef2fb9114a49245" HandleID="k8s-pod-network.8e2f6f9019cb19e056e64de5f7fec56b97af72560fbf9bc22ef2fb9114a49245" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--kube--controllers--69fbd7ff65--7dt7j-eth0" May 13 04:22:07.532315 containerd[1461]: 2025-05-13 04:22:07.273 [INFO][4199] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8e2f6f9019cb19e056e64de5f7fec56b97af72560fbf9bc22ef2fb9114a49245" HandleID="k8s-pod-network.8e2f6f9019cb19e056e64de5f7fec56b97af72560fbf9bc22ef2fb9114a49245" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--kube--controllers--69fbd7ff65--7dt7j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050230), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-3bdfb8ea63.novalocal", "pod":"calico-kube-controllers-69fbd7ff65-7dt7j", "timestamp":"2025-05-13 04:22:07.251579353 +0000 UTC"}, Hostname:"ci-4081-3-3-n-3bdfb8ea63.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 04:22:07.532315 containerd[1461]: 2025-05-13 04:22:07.273 [INFO][4199] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 13 04:22:07.532315 containerd[1461]: 2025-05-13 04:22:07.368 [INFO][4199] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:22:07.532315 containerd[1461]: 2025-05-13 04:22:07.369 [INFO][4199] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-3bdfb8ea63.novalocal' May 13 04:22:07.532315 containerd[1461]: 2025-05-13 04:22:07.375 [INFO][4199] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8e2f6f9019cb19e056e64de5f7fec56b97af72560fbf9bc22ef2fb9114a49245" host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.532315 containerd[1461]: 2025-05-13 04:22:07.396 [INFO][4199] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.532315 containerd[1461]: 2025-05-13 04:22:07.421 [INFO][4199] ipam/ipam.go 489: Trying affinity for 192.168.11.128/26 host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.532315 containerd[1461]: 2025-05-13 04:22:07.427 [INFO][4199] ipam/ipam.go 155: Attempting to load block cidr=192.168.11.128/26 host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.532315 containerd[1461]: 2025-05-13 04:22:07.434 [INFO][4199] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.11.128/26 host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.532315 containerd[1461]: 2025-05-13 04:22:07.435 [INFO][4199] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.11.128/26 handle="k8s-pod-network.8e2f6f9019cb19e056e64de5f7fec56b97af72560fbf9bc22ef2fb9114a49245" host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.532315 containerd[1461]: 2025-05-13 04:22:07.440 [INFO][4199] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8e2f6f9019cb19e056e64de5f7fec56b97af72560fbf9bc22ef2fb9114a49245 May 13 04:22:07.532315 containerd[1461]: 2025-05-13 04:22:07.450 [INFO][4199] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.11.128/26 handle="k8s-pod-network.8e2f6f9019cb19e056e64de5f7fec56b97af72560fbf9bc22ef2fb9114a49245" host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.532315 containerd[1461]: 2025-05-13 04:22:07.464 [INFO][4199] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.11.130/26] block=192.168.11.128/26 handle="k8s-pod-network.8e2f6f9019cb19e056e64de5f7fec56b97af72560fbf9bc22ef2fb9114a49245" host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.532315 containerd[1461]: 2025-05-13 04:22:07.465 [INFO][4199] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.11.130/26] handle="k8s-pod-network.8e2f6f9019cb19e056e64de5f7fec56b97af72560fbf9bc22ef2fb9114a49245" host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.532315 containerd[1461]: 2025-05-13 04:22:07.465 [INFO][4199] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
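Each ADD walks the same IPAM sequence: look up the host's block affinity, load block 192.168.11.128/26, then claim the next free address, which is why the pods receive .129 and .130 in order. A sketch of that first-free scan using Go's net/netip (helper names are illustrative, not Calico's):

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree scans a block the way the log's "Attempting to assign 1
// addresses from block" step does: walk the block in order and return
// the first address not already claimed.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			used[a] = true
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.11.128/26") // 64 addresses: .128-.191
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.11.128"): true, // skip the block's first address
	}
	for _, pod := range []string{
		"coredns-6f6b679f8f-69vql",
		"calico-kube-controllers-69fbd7ff65-7dt7j",
	} {
		ip, _ := nextFree(block, used)
		fmt.Printf("%s -> %s/26\n", pod, ip) // .129, then .130, matching the log
	}
}
```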
May 13 04:22:07.532315 containerd[1461]: 2025-05-13 04:22:07.465 [INFO][4199] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.11.130/26] IPv6=[] ContainerID="8e2f6f9019cb19e056e64de5f7fec56b97af72560fbf9bc22ef2fb9114a49245" HandleID="k8s-pod-network.8e2f6f9019cb19e056e64de5f7fec56b97af72560fbf9bc22ef2fb9114a49245" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--kube--controllers--69fbd7ff65--7dt7j-eth0" May 13 04:22:07.533803 containerd[1461]: 2025-05-13 04:22:07.475 [INFO][4141] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8e2f6f9019cb19e056e64de5f7fec56b97af72560fbf9bc22ef2fb9114a49245" Namespace="calico-system" Pod="calico-kube-controllers-69fbd7ff65-7dt7j" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--kube--controllers--69fbd7ff65--7dt7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--kube--controllers--69fbd7ff65--7dt7j-eth0", GenerateName:"calico-kube-controllers-69fbd7ff65-", Namespace:"calico-system", SelfLink:"", UID:"119e7f55-f555-420b-8c77-74d321630fd9", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 21, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69fbd7ff65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3bdfb8ea63.novalocal", ContainerID:"", Pod:"calico-kube-controllers-69fbd7ff65-7dt7j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.11.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3c6fa05ff04", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:22:07.533803 containerd[1461]: 2025-05-13 04:22:07.475 [INFO][4141] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.11.130/32] ContainerID="8e2f6f9019cb19e056e64de5f7fec56b97af72560fbf9bc22ef2fb9114a49245" Namespace="calico-system" Pod="calico-kube-controllers-69fbd7ff65-7dt7j" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--kube--controllers--69fbd7ff65--7dt7j-eth0" May 13 04:22:07.533803 containerd[1461]: 2025-05-13 04:22:07.475 [INFO][4141] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c6fa05ff04 ContainerID="8e2f6f9019cb19e056e64de5f7fec56b97af72560fbf9bc22ef2fb9114a49245" Namespace="calico-system" Pod="calico-kube-controllers-69fbd7ff65-7dt7j" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--kube--controllers--69fbd7ff65--7dt7j-eth0" May 13 04:22:07.533803 containerd[1461]: 2025-05-13 04:22:07.491 [INFO][4141] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e2f6f9019cb19e056e64de5f7fec56b97af72560fbf9bc22ef2fb9114a49245" Namespace="calico-system" Pod="calico-kube-controllers-69fbd7ff65-7dt7j" 
WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--kube--controllers--69fbd7ff65--7dt7j-eth0" May 13 04:22:07.533803 containerd[1461]: 2025-05-13 04:22:07.493 [INFO][4141] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8e2f6f9019cb19e056e64de5f7fec56b97af72560fbf9bc22ef2fb9114a49245" Namespace="calico-system" Pod="calico-kube-controllers-69fbd7ff65-7dt7j" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--kube--controllers--69fbd7ff65--7dt7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--kube--controllers--69fbd7ff65--7dt7j-eth0", GenerateName:"calico-kube-controllers-69fbd7ff65-", Namespace:"calico-system", SelfLink:"", UID:"119e7f55-f555-420b-8c77-74d321630fd9", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 21, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69fbd7ff65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3bdfb8ea63.novalocal", ContainerID:"8e2f6f9019cb19e056e64de5f7fec56b97af72560fbf9bc22ef2fb9114a49245", Pod:"calico-kube-controllers-69fbd7ff65-7dt7j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.11.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3c6fa05ff04", MAC:"9e:a4:92:19:b0:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:22:07.533803 containerd[1461]: 2025-05-13 04:22:07.527 [INFO][4141] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8e2f6f9019cb19e056e64de5f7fec56b97af72560fbf9bc22ef2fb9114a49245" Namespace="calico-system" Pod="calico-kube-controllers-69fbd7ff65-7dt7j" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--kube--controllers--69fbd7ff65--7dt7j-eth0" May 13 04:22:07.586614 containerd[1461]: time="2025-05-13T04:22:07.586487382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 04:22:07.586614 containerd[1461]: time="2025-05-13T04:22:07.586565696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 04:22:07.586614 containerd[1461]: time="2025-05-13T04:22:07.586583359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:22:07.587334 containerd[1461]: time="2025-05-13T04:22:07.587233549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:22:07.601069 systemd-networkd[1374]: calic16d654aae2: Link UP May 13 04:22:07.603281 systemd-networkd[1374]: calic16d654aae2: Gained carrier May 13 04:22:07.642262 containerd[1461]: 2025-05-13 04:22:07.220 [INFO][4159] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-csi--node--driver--zcwgb-eth0 csi-node-driver- calico-system 6b475fce-0503-4d3c-9f11-a776dc4b6dcc 753 0 2025-05-13 04:21:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-3-n-3bdfb8ea63.novalocal csi-node-driver-zcwgb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic16d654aae2 [] []}} ContainerID="95c62c043ec3d0bbf3b0db5e6c45ca176a30ee1c0f701dd2275410f6c668c40f" Namespace="calico-system" Pod="csi-node-driver-zcwgb" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-csi--node--driver--zcwgb-" May 13 04:22:07.642262 containerd[1461]: 2025-05-13 04:22:07.220 [INFO][4159] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="95c62c043ec3d0bbf3b0db5e6c45ca176a30ee1c0f701dd2275410f6c668c40f" Namespace="calico-system" Pod="csi-node-driver-zcwgb" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-csi--node--driver--zcwgb-eth0" May 13 04:22:07.642262 containerd[1461]: 2025-05-13 04:22:07.299 [INFO][4208] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95c62c043ec3d0bbf3b0db5e6c45ca176a30ee1c0f701dd2275410f6c668c40f" HandleID="k8s-pod-network.95c62c043ec3d0bbf3b0db5e6c45ca176a30ee1c0f701dd2275410f6c668c40f" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-csi--node--driver--zcwgb-eth0" May 13 04:22:07.642262 containerd[1461]: 2025-05-13 04:22:07.323 [INFO][4208] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="95c62c043ec3d0bbf3b0db5e6c45ca176a30ee1c0f701dd2275410f6c668c40f" HandleID="k8s-pod-network.95c62c043ec3d0bbf3b0db5e6c45ca176a30ee1c0f701dd2275410f6c668c40f" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-csi--node--driver--zcwgb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b730), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-3bdfb8ea63.novalocal", "pod":"csi-node-driver-zcwgb", "timestamp":"2025-05-13 04:22:07.299229312 +0000 UTC"}, Hostname:"ci-4081-3-3-n-3bdfb8ea63.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 04:22:07.642262 containerd[1461]: 2025-05-13 04:22:07.323 [INFO][4208] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:22:07.642262 containerd[1461]: 2025-05-13 04:22:07.465 [INFO][4208] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 04:22:07.642262 containerd[1461]: 2025-05-13 04:22:07.466 [INFO][4208] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-3bdfb8ea63.novalocal' May 13 04:22:07.642262 containerd[1461]: 2025-05-13 04:22:07.480 [INFO][4208] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.95c62c043ec3d0bbf3b0db5e6c45ca176a30ee1c0f701dd2275410f6c668c40f" host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.642262 containerd[1461]: 2025-05-13 04:22:07.504 [INFO][4208] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.642262 containerd[1461]: 2025-05-13 04:22:07.531 [INFO][4208] ipam/ipam.go 489: Trying affinity for 192.168.11.128/26 host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.642262 containerd[1461]: 2025-05-13 04:22:07.539 [INFO][4208] ipam/ipam.go 155: Attempting to load block cidr=192.168.11.128/26 host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.642262 containerd[1461]: 2025-05-13 04:22:07.546 [INFO][4208] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.11.128/26 host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.642262 containerd[1461]: 2025-05-13 04:22:07.547 [INFO][4208] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.11.128/26 handle="k8s-pod-network.95c62c043ec3d0bbf3b0db5e6c45ca176a30ee1c0f701dd2275410f6c668c40f" host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.642262 containerd[1461]: 2025-05-13 04:22:07.550 [INFO][4208] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.95c62c043ec3d0bbf3b0db5e6c45ca176a30ee1c0f701dd2275410f6c668c40f May 13 04:22:07.642262 containerd[1461]: 2025-05-13 04:22:07.561 [INFO][4208] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.11.128/26 handle="k8s-pod-network.95c62c043ec3d0bbf3b0db5e6c45ca176a30ee1c0f701dd2275410f6c668c40f" host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.642262 containerd[1461]: 2025-05-13 04:22:07.583 [INFO][4208] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.11.131/26] block=192.168.11.128/26 handle="k8s-pod-network.95c62c043ec3d0bbf3b0db5e6c45ca176a30ee1c0f701dd2275410f6c668c40f" host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.642262 containerd[1461]: 2025-05-13 04:22:07.583 [INFO][4208] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.11.131/26] handle="k8s-pod-network.95c62c043ec3d0bbf3b0db5e6c45ca176a30ee1c0f701dd2275410f6c668c40f" host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.642262 containerd[1461]: 2025-05-13 04:22:07.583 [INFO][4208] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
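The [INFO][4208] sequence above is the whole block-affinity IPAM walk: confirm this host's affinity to 192.168.11.128/26, load the block, claim the next free address (here 192.168.11.131), and write the block back to record the claim. Below is a minimal in-memory sketch of that pattern, assuming a single already-affine /26; real Calico persists the block with a compare-and-swap against its datastore, which is why the claim step logs "Writing block in order to claim IPs".

// Sketch of block-affinity IPAM as traced above: the host owns a /26
// block and claims the next free address from it. In-memory only.
package main

import (
	"fmt"
	"net/netip"
)

type block struct {
	cidr      netip.Prefix          // e.g. 192.168.11.128/26
	allocated map[netip.Addr]string // address -> handle ID
}

func (b *block) assign(handle string) (netip.Addr, error) {
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.allocated[a]; !taken {
			b.allocated[a] = handle // "Writing block in order to claim IPs"
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s is full", b.cidr)
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.11.128/26"),
		allocated: map[netip.Addr]string{
			netip.MustParseAddr("192.168.11.128"): "reserved", // network address
			netip.MustParseAddr("192.168.11.129"): "reserved",
			netip.MustParseAddr("192.168.11.130"): "k8s-pod-network.8e2f6f90…",
		},
	}
	ip, _ := b.assign("k8s-pod-network.95c62c04…")
	fmt.Println("claimed", ip) // claimed 192.168.11.131
}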
May 13 04:22:07.642262 containerd[1461]: 2025-05-13 04:22:07.583 [INFO][4208] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.11.131/26] IPv6=[] ContainerID="95c62c043ec3d0bbf3b0db5e6c45ca176a30ee1c0f701dd2275410f6c668c40f" HandleID="k8s-pod-network.95c62c043ec3d0bbf3b0db5e6c45ca176a30ee1c0f701dd2275410f6c668c40f" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-csi--node--driver--zcwgb-eth0" May 13 04:22:07.645202 containerd[1461]: 2025-05-13 04:22:07.592 [INFO][4159] cni-plugin/k8s.go 386: Populated endpoint ContainerID="95c62c043ec3d0bbf3b0db5e6c45ca176a30ee1c0f701dd2275410f6c668c40f" Namespace="calico-system" Pod="csi-node-driver-zcwgb" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-csi--node--driver--zcwgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-csi--node--driver--zcwgb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6b475fce-0503-4d3c-9f11-a776dc4b6dcc", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 21, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3bdfb8ea63.novalocal", ContainerID:"", Pod:"csi-node-driver-zcwgb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.11.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic16d654aae2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:22:07.645202 containerd[1461]: 2025-05-13 04:22:07.594 [INFO][4159] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.11.131/32] ContainerID="95c62c043ec3d0bbf3b0db5e6c45ca176a30ee1c0f701dd2275410f6c668c40f" Namespace="calico-system" Pod="csi-node-driver-zcwgb" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-csi--node--driver--zcwgb-eth0" May 13 04:22:07.645202 containerd[1461]: 2025-05-13 04:22:07.594 [INFO][4159] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic16d654aae2 ContainerID="95c62c043ec3d0bbf3b0db5e6c45ca176a30ee1c0f701dd2275410f6c668c40f" Namespace="calico-system" Pod="csi-node-driver-zcwgb" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-csi--node--driver--zcwgb-eth0" May 13 04:22:07.645202 containerd[1461]: 2025-05-13 04:22:07.607 [INFO][4159] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95c62c043ec3d0bbf3b0db5e6c45ca176a30ee1c0f701dd2275410f6c668c40f" Namespace="calico-system" Pod="csi-node-driver-zcwgb" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-csi--node--driver--zcwgb-eth0" May 13 04:22:07.645202 containerd[1461]: 2025-05-13 04:22:07.611 [INFO][4159] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="95c62c043ec3d0bbf3b0db5e6c45ca176a30ee1c0f701dd2275410f6c668c40f" Namespace="calico-system" Pod="csi-node-driver-zcwgb" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-csi--node--driver--zcwgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-csi--node--driver--zcwgb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6b475fce-0503-4d3c-9f11-a776dc4b6dcc", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 21, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3bdfb8ea63.novalocal", ContainerID:"95c62c043ec3d0bbf3b0db5e6c45ca176a30ee1c0f701dd2275410f6c668c40f", Pod:"csi-node-driver-zcwgb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.11.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic16d654aae2", MAC:"1a:ff:29:c3:55:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:22:07.645202 containerd[1461]: 2025-05-13 04:22:07.633 [INFO][4159] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="95c62c043ec3d0bbf3b0db5e6c45ca176a30ee1c0f701dd2275410f6c668c40f" Namespace="calico-system" Pod="csi-node-driver-zcwgb" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-csi--node--driver--zcwgb-eth0" May 13 04:22:07.661238 containerd[1461]: time="2025-05-13T04:22:07.660800796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-69vql,Uid:5ec8c59e-3a9a-4a96-81ba-639baf60f4aa,Namespace:kube-system,Attempt:1,} returns sandbox id \"6779e1150d58487985f0aa6af9b536078213a8ddafc7ede113502dacf6d30dc6\"" May 13 04:22:07.670490 systemd[1]: Started cri-containerd-8e2f6f9019cb19e056e64de5f7fec56b97af72560fbf9bc22ef2fb9114a49245.scope - libcontainer container 8e2f6f9019cb19e056e64de5f7fec56b97af72560fbf9bc22ef2fb9114a49245. 
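RunPodSandbox hands back a sandbox ID (6779e115… above) that the CreateContainer and StartContainer calls just below reference. A hedged sketch of that CRI sequence over containerd's socket, with message and field names taken from k8s.io/cri-api runtime/v1; the image reference is a placeholder, since the log does not show one, and error handling is elided.

// Sketch of the CRI calls behind "RunPodSandbox ... returns sandbox id"
// and the CreateContainer/StartContainer pair that follows in the log.
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, _ := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "coredns-6f6b679f8f-69vql",
			Namespace: "kube-system",
			Uid:       "5ec8c59e-3a9a-4a96-81ba-639baf60f4aa",
			Attempt:   1, // matches Attempt:1 in the log
		},
	}
	// "RunPodSandbox ... returns sandbox id"
	sb, _ := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})

	// "CreateContainer within sandbox ... returns container id"
	ctr, _ := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "coredns"},
			Image:    &runtimeapi.ImageSpec{Image: "coredns-image:placeholder"}, // not shown in the log
		},
		SandboxConfig: sandboxCfg,
	})
	// "StartContainer for ... returns successfully"
	rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
}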
May 13 04:22:07.676481 containerd[1461]: time="2025-05-13T04:22:07.676339910Z" level=info msg="CreateContainer within sandbox \"6779e1150d58487985f0aa6af9b536078213a8ddafc7ede113502dacf6d30dc6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 04:22:07.690543 containerd[1461]: time="2025-05-13T04:22:07.690472587Z" level=info msg="StopPodSandbox for \"c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225\"" May 13 04:22:07.709345 containerd[1461]: time="2025-05-13T04:22:07.709073905Z" level=info msg="CreateContainer within sandbox \"6779e1150d58487985f0aa6af9b536078213a8ddafc7ede113502dacf6d30dc6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7d6b87cc42078f9d77054c87ffca311bb1edc0a4e95639395f1d0d3492345e0a\"" May 13 04:22:07.711939 containerd[1461]: time="2025-05-13T04:22:07.711130252Z" level=info msg="StartContainer for \"7d6b87cc42078f9d77054c87ffca311bb1edc0a4e95639395f1d0d3492345e0a\"" May 13 04:22:07.747679 containerd[1461]: time="2025-05-13T04:22:07.747584657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 04:22:07.747944 containerd[1461]: time="2025-05-13T04:22:07.747918414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 04:22:07.748089 containerd[1461]: time="2025-05-13T04:22:07.748063632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:22:07.748384 containerd[1461]: time="2025-05-13T04:22:07.748346264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:22:07.761201 systemd-networkd[1374]: cali051dd3f6b74: Link UP May 13 04:22:07.770814 systemd-networkd[1374]: cali051dd3f6b74: Gained carrier May 13 04:22:07.803151 containerd[1461]: 2025-05-13 04:22:07.227 [INFO][4154] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--8cvfj-eth0 calico-apiserver-66cb4b4698- calico-apiserver ca42f9a3-6ff2-4160-851b-5223b0b75593 754 0 2025-05-13 04:21:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66cb4b4698 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-n-3bdfb8ea63.novalocal calico-apiserver-66cb4b4698-8cvfj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali051dd3f6b74 [] []}} ContainerID="047298df11e705c125a16d66891d8af5d99bcf6579c7a82e25ba28f36f765e62" Namespace="calico-apiserver" Pod="calico-apiserver-66cb4b4698-8cvfj" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--8cvfj-" May 13 04:22:07.803151 containerd[1461]: 2025-05-13 04:22:07.227 [INFO][4154] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="047298df11e705c125a16d66891d8af5d99bcf6579c7a82e25ba28f36f765e62" Namespace="calico-apiserver" Pod="calico-apiserver-66cb4b4698-8cvfj" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--8cvfj-eth0" May 13 04:22:07.803151 containerd[1461]: 2025-05-13 04:22:07.373 [INFO][4210] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="047298df11e705c125a16d66891d8af5d99bcf6579c7a82e25ba28f36f765e62" HandleID="k8s-pod-network.047298df11e705c125a16d66891d8af5d99bcf6579c7a82e25ba28f36f765e62" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--8cvfj-eth0" May 13 04:22:07.803151 containerd[1461]: 2025-05-13 04:22:07.402 [INFO][4210] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="047298df11e705c125a16d66891d8af5d99bcf6579c7a82e25ba28f36f765e62" HandleID="k8s-pod-network.047298df11e705c125a16d66891d8af5d99bcf6579c7a82e25ba28f36f765e62" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--8cvfj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002edd30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-n-3bdfb8ea63.novalocal", "pod":"calico-apiserver-66cb4b4698-8cvfj", "timestamp":"2025-05-13 04:22:07.373675 +0000 UTC"}, Hostname:"ci-4081-3-3-n-3bdfb8ea63.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 04:22:07.803151 containerd[1461]: 2025-05-13 04:22:07.402 [INFO][4210] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:22:07.803151 containerd[1461]: 2025-05-13 04:22:07.584 [INFO][4210] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:22:07.803151 containerd[1461]: 2025-05-13 04:22:07.584 [INFO][4210] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-3bdfb8ea63.novalocal' May 13 04:22:07.803151 containerd[1461]: 2025-05-13 04:22:07.597 [INFO][4210] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.047298df11e705c125a16d66891d8af5d99bcf6579c7a82e25ba28f36f765e62" host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.803151 containerd[1461]: 2025-05-13 04:22:07.620 [INFO][4210] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.803151 containerd[1461]: 2025-05-13 04:22:07.653 [INFO][4210] ipam/ipam.go 489: Trying affinity for 192.168.11.128/26 host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.803151 containerd[1461]: 2025-05-13 04:22:07.660 [INFO][4210] ipam/ipam.go 155: Attempting to load block cidr=192.168.11.128/26 host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.803151 containerd[1461]: 2025-05-13 04:22:07.675 [INFO][4210] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.11.128/26 host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.803151 containerd[1461]: 2025-05-13 04:22:07.675 [INFO][4210] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.11.128/26 handle="k8s-pod-network.047298df11e705c125a16d66891d8af5d99bcf6579c7a82e25ba28f36f765e62" host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.803151 containerd[1461]: 2025-05-13 04:22:07.679 [INFO][4210] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.047298df11e705c125a16d66891d8af5d99bcf6579c7a82e25ba28f36f765e62 May 13 04:22:07.803151 containerd[1461]: 2025-05-13 04:22:07.695 [INFO][4210] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.11.128/26 handle="k8s-pod-network.047298df11e705c125a16d66891d8af5d99bcf6579c7a82e25ba28f36f765e62" host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.803151 containerd[1461]: 2025-05-13 04:22:07.715 [INFO][4210] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.11.132/26] block=192.168.11.128/26 handle="k8s-pod-network.047298df11e705c125a16d66891d8af5d99bcf6579c7a82e25ba28f36f765e62" host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.803151 containerd[1461]: 2025-05-13 04:22:07.716 [INFO][4210] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.11.132/26] handle="k8s-pod-network.047298df11e705c125a16d66891d8af5d99bcf6579c7a82e25ba28f36f765e62" host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.803151 containerd[1461]: 2025-05-13 04:22:07.716 [INFO][4210] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:22:07.803151 containerd[1461]: 2025-05-13 04:22:07.716 [INFO][4210] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.11.132/26] IPv6=[] ContainerID="047298df11e705c125a16d66891d8af5d99bcf6579c7a82e25ba28f36f765e62" HandleID="k8s-pod-network.047298df11e705c125a16d66891d8af5d99bcf6579c7a82e25ba28f36f765e62" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--8cvfj-eth0" May 13 04:22:07.804565 containerd[1461]: 2025-05-13 04:22:07.728 [INFO][4154] cni-plugin/k8s.go 386: Populated endpoint ContainerID="047298df11e705c125a16d66891d8af5d99bcf6579c7a82e25ba28f36f765e62" Namespace="calico-apiserver" Pod="calico-apiserver-66cb4b4698-8cvfj" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--8cvfj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--8cvfj-eth0", GenerateName:"calico-apiserver-66cb4b4698-", Namespace:"calico-apiserver", SelfLink:"", UID:"ca42f9a3-6ff2-4160-851b-5223b0b75593", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 21, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66cb4b4698", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3bdfb8ea63.novalocal", ContainerID:"", Pod:"calico-apiserver-66cb4b4698-8cvfj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali051dd3f6b74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:22:07.804565 containerd[1461]: 2025-05-13 04:22:07.731 [INFO][4154] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.11.132/32] ContainerID="047298df11e705c125a16d66891d8af5d99bcf6579c7a82e25ba28f36f765e62" Namespace="calico-apiserver" Pod="calico-apiserver-66cb4b4698-8cvfj" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--8cvfj-eth0" May 13 04:22:07.804565 containerd[1461]: 2025-05-13 04:22:07.731 [INFO][4154] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali051dd3f6b74 
ContainerID="047298df11e705c125a16d66891d8af5d99bcf6579c7a82e25ba28f36f765e62" Namespace="calico-apiserver" Pod="calico-apiserver-66cb4b4698-8cvfj" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--8cvfj-eth0" May 13 04:22:07.804565 containerd[1461]: 2025-05-13 04:22:07.774 [INFO][4154] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="047298df11e705c125a16d66891d8af5d99bcf6579c7a82e25ba28f36f765e62" Namespace="calico-apiserver" Pod="calico-apiserver-66cb4b4698-8cvfj" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--8cvfj-eth0" May 13 04:22:07.804565 containerd[1461]: 2025-05-13 04:22:07.778 [INFO][4154] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="047298df11e705c125a16d66891d8af5d99bcf6579c7a82e25ba28f36f765e62" Namespace="calico-apiserver" Pod="calico-apiserver-66cb4b4698-8cvfj" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--8cvfj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--8cvfj-eth0", GenerateName:"calico-apiserver-66cb4b4698-", Namespace:"calico-apiserver", SelfLink:"", UID:"ca42f9a3-6ff2-4160-851b-5223b0b75593", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 21, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66cb4b4698", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3bdfb8ea63.novalocal", ContainerID:"047298df11e705c125a16d66891d8af5d99bcf6579c7a82e25ba28f36f765e62", Pod:"calico-apiserver-66cb4b4698-8cvfj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali051dd3f6b74", MAC:"ee:8e:8d:c8:92:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:22:07.804565 containerd[1461]: 2025-05-13 04:22:07.798 [INFO][4154] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="047298df11e705c125a16d66891d8af5d99bcf6579c7a82e25ba28f36f765e62" Namespace="calico-apiserver" Pod="calico-apiserver-66cb4b4698-8cvfj" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--8cvfj-eth0" May 13 04:22:07.842838 systemd-networkd[1374]: calid97eb2b02ec: Link UP May 13 04:22:07.849114 systemd-networkd[1374]: calid97eb2b02ec: Gained carrier May 13 04:22:07.876525 systemd[1]: Started cri-containerd-7d6b87cc42078f9d77054c87ffca311bb1edc0a4e95639395f1d0d3492345e0a.scope - libcontainer container 7d6b87cc42078f9d77054c87ffca311bb1edc0a4e95639395f1d0d3492345e0a. 
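The dataplane_linux.go 69/508 entries describe the Linux wiring for each endpoint: a veth pair whose host end keeps the cali… name, whose peer becomes eth0 inside the pod's network namespace, and whose per-interface forwarding sysctl is switched off. A rough sketch shelling out to ip(8); Calico itself drives netlink directly, and the netns name and the choice of sysctl here are illustrative assumptions, not its exact steps.

// Rough sketch of the veth plumbing logged by dataplane_linux.go.
package main

import (
	"os"
	"os/exec"
)

func run(args ...string) {
	cmd := exec.Command(args[0], args[1:]...)
	cmd.Stderr = os.Stderr
	_ = cmd.Run() // a real dataplane would handle each error
}

func main() {
	hostIf := "calic16d654aae2"                          // host-side name from the log above
	nsName := "cni-ce7c1692-5d17-8ef4-d1b9-fcb56f652cd8" // a netns name seen in this log; pairing is illustrative

	// Create the pair, push the peer into the pod's namespace as eth0.
	run("ip", "link", "add", hostIf, "type", "veth", "peer", "name", "tmp0")
	run("ip", "link", "set", "tmp0", "netns", nsName)
	run("ip", "netns", "exec", nsName, "ip", "link", "set", "tmp0", "name", "eth0")
	run("ip", "netns", "exec", nsName, "ip", "addr", "add", "192.168.11.131/32", "dev", "eth0")
	run("ip", "link", "set", hostIf, "up")

	// "Disabling IPv4 forwarding": one plausible reading is the
	// per-interface sysctl on the host end.
	_ = os.WriteFile("/proc/sys/net/ipv4/conf/"+hostIf+"/forwarding", []byte("0"), 0o644)
}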
May 13 04:22:07.891163 systemd[1]: Started cri-containerd-95c62c043ec3d0bbf3b0db5e6c45ca176a30ee1c0f701dd2275410f6c668c40f.scope - libcontainer container 95c62c043ec3d0bbf3b0db5e6c45ca176a30ee1c0f701dd2275410f6c668c40f. May 13 04:22:07.907479 containerd[1461]: 2025-05-13 04:22:07.271 [INFO][4179] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--wtbsq-eth0 coredns-6f6b679f8f- kube-system 180cf766-638d-4296-97c0-3a6bafc8c21a 757 0 2025-05-13 04:21:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-n-3bdfb8ea63.novalocal coredns-6f6b679f8f-wtbsq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid97eb2b02ec [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e647a9c93fbc35344fdb62ad9c387bff39470f2e7c59d43cca96d10b55272d1a" Namespace="kube-system" Pod="coredns-6f6b679f8f-wtbsq" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--wtbsq-" May 13 04:22:07.907479 containerd[1461]: 2025-05-13 04:22:07.272 [INFO][4179] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e647a9c93fbc35344fdb62ad9c387bff39470f2e7c59d43cca96d10b55272d1a" Namespace="kube-system" Pod="coredns-6f6b679f8f-wtbsq" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--wtbsq-eth0" May 13 04:22:07.907479 containerd[1461]: 2025-05-13 04:22:07.428 [INFO][4220] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e647a9c93fbc35344fdb62ad9c387bff39470f2e7c59d43cca96d10b55272d1a" HandleID="k8s-pod-network.e647a9c93fbc35344fdb62ad9c387bff39470f2e7c59d43cca96d10b55272d1a" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--wtbsq-eth0" May 13 04:22:07.907479 containerd[1461]: 2025-05-13 04:22:07.513 [INFO][4220] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e647a9c93fbc35344fdb62ad9c387bff39470f2e7c59d43cca96d10b55272d1a" HandleID="k8s-pod-network.e647a9c93fbc35344fdb62ad9c387bff39470f2e7c59d43cca96d10b55272d1a" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--wtbsq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ea2e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-n-3bdfb8ea63.novalocal", "pod":"coredns-6f6b679f8f-wtbsq", "timestamp":"2025-05-13 04:22:07.428698154 +0000 UTC"}, Hostname:"ci-4081-3-3-n-3bdfb8ea63.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 04:22:07.907479 containerd[1461]: 2025-05-13 04:22:07.514 [INFO][4220] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:22:07.907479 containerd[1461]: 2025-05-13 04:22:07.717 [INFO][4220] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
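Note the gaps between "About to acquire host-wide IPAM lock" and "Acquired" (for request [4220] above, 04:22:07.514 to 04:22:07.717): concurrent CNI ADDs on a node serialize on a single per-host lock, which is why the four assignments in this log complete one after another. The same shape with flock(2), using an invented lock-file path purely for illustration:

// Minimal per-host mutual exclusion in the spirit of the "host-wide
// IPAM lock" above. The lock-file path is illustrative, not Calico's.
package main

import (
	"log"
	"os"
	"syscall"
)

func main() {
	f, err := os.OpenFile("/var/run/example-ipam.lock", os.O_CREATE|os.O_RDWR, 0600)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	log.Println("About to acquire host-wide IPAM lock.")
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil { // blocks until free
		log.Fatal(err)
	}
	log.Println("Acquired host-wide IPAM lock.")

	// ... read block, claim IP, write block back ...

	syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
	log.Println("Released host-wide IPAM lock.")
}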
May 13 04:22:07.907479 containerd[1461]: 2025-05-13 04:22:07.717 [INFO][4220] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-3bdfb8ea63.novalocal' May 13 04:22:07.907479 containerd[1461]: 2025-05-13 04:22:07.722 [INFO][4220] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e647a9c93fbc35344fdb62ad9c387bff39470f2e7c59d43cca96d10b55272d1a" host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.907479 containerd[1461]: 2025-05-13 04:22:07.736 [INFO][4220] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.907479 containerd[1461]: 2025-05-13 04:22:07.750 [INFO][4220] ipam/ipam.go 489: Trying affinity for 192.168.11.128/26 host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.907479 containerd[1461]: 2025-05-13 04:22:07.758 [INFO][4220] ipam/ipam.go 155: Attempting to load block cidr=192.168.11.128/26 host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.907479 containerd[1461]: 2025-05-13 04:22:07.774 [INFO][4220] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.11.128/26 host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.907479 containerd[1461]: 2025-05-13 04:22:07.774 [INFO][4220] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.11.128/26 handle="k8s-pod-network.e647a9c93fbc35344fdb62ad9c387bff39470f2e7c59d43cca96d10b55272d1a" host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.907479 containerd[1461]: 2025-05-13 04:22:07.778 [INFO][4220] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e647a9c93fbc35344fdb62ad9c387bff39470f2e7c59d43cca96d10b55272d1a May 13 04:22:07.907479 containerd[1461]: 2025-05-13 04:22:07.787 [INFO][4220] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.11.128/26 handle="k8s-pod-network.e647a9c93fbc35344fdb62ad9c387bff39470f2e7c59d43cca96d10b55272d1a" host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.907479 containerd[1461]: 2025-05-13 04:22:07.804 [INFO][4220] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.11.133/26] block=192.168.11.128/26 handle="k8s-pod-network.e647a9c93fbc35344fdb62ad9c387bff39470f2e7c59d43cca96d10b55272d1a" host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.907479 containerd[1461]: 2025-05-13 04:22:07.805 [INFO][4220] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.11.133/26] handle="k8s-pod-network.e647a9c93fbc35344fdb62ad9c387bff39470f2e7c59d43cca96d10b55272d1a" host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:07.907479 containerd[1461]: 2025-05-13 04:22:07.806 [INFO][4220] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
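The endpoint dumps just below print WorkloadEndpointPort values as Go hex literals: Port:0x35 is 53 (DNS over UDP and TCP) and 0x23c1 is 9153 (the coredns metrics port), since 0x23c1 = 2·4096 + 3·256 + 12·16 + 1 = 9153. A one-liner to confirm:

package main

import "fmt"

func main() {
	// Hex ports from the WorkloadEndpointPort dumps below.
	fmt.Println(0x35, 0x23c1) // 53 9153
}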
May 13 04:22:07.907479 containerd[1461]: 2025-05-13 04:22:07.806 [INFO][4220] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.11.133/26] IPv6=[] ContainerID="e647a9c93fbc35344fdb62ad9c387bff39470f2e7c59d43cca96d10b55272d1a" HandleID="k8s-pod-network.e647a9c93fbc35344fdb62ad9c387bff39470f2e7c59d43cca96d10b55272d1a" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--wtbsq-eth0" May 13 04:22:07.911482 containerd[1461]: 2025-05-13 04:22:07.823 [INFO][4179] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e647a9c93fbc35344fdb62ad9c387bff39470f2e7c59d43cca96d10b55272d1a" Namespace="kube-system" Pod="coredns-6f6b679f8f-wtbsq" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--wtbsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--wtbsq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"180cf766-638d-4296-97c0-3a6bafc8c21a", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 21, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3bdfb8ea63.novalocal", ContainerID:"", Pod:"coredns-6f6b679f8f-wtbsq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid97eb2b02ec", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:22:07.911482 containerd[1461]: 2025-05-13 04:22:07.823 [INFO][4179] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.11.133/32] ContainerID="e647a9c93fbc35344fdb62ad9c387bff39470f2e7c59d43cca96d10b55272d1a" Namespace="kube-system" Pod="coredns-6f6b679f8f-wtbsq" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--wtbsq-eth0" May 13 04:22:07.911482 containerd[1461]: 2025-05-13 04:22:07.823 [INFO][4179] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid97eb2b02ec ContainerID="e647a9c93fbc35344fdb62ad9c387bff39470f2e7c59d43cca96d10b55272d1a" Namespace="kube-system" Pod="coredns-6f6b679f8f-wtbsq" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--wtbsq-eth0" May 13 04:22:07.911482 containerd[1461]: 2025-05-13 04:22:07.851 [INFO][4179] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e647a9c93fbc35344fdb62ad9c387bff39470f2e7c59d43cca96d10b55272d1a" 
Namespace="kube-system" Pod="coredns-6f6b679f8f-wtbsq" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--wtbsq-eth0" May 13 04:22:07.911482 containerd[1461]: 2025-05-13 04:22:07.869 [INFO][4179] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e647a9c93fbc35344fdb62ad9c387bff39470f2e7c59d43cca96d10b55272d1a" Namespace="kube-system" Pod="coredns-6f6b679f8f-wtbsq" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--wtbsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--wtbsq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"180cf766-638d-4296-97c0-3a6bafc8c21a", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 21, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3bdfb8ea63.novalocal", ContainerID:"e647a9c93fbc35344fdb62ad9c387bff39470f2e7c59d43cca96d10b55272d1a", Pod:"coredns-6f6b679f8f-wtbsq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid97eb2b02ec", MAC:"92:60:8d:e7:2b:d2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:22:07.911482 containerd[1461]: 2025-05-13 04:22:07.903 [INFO][4179] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e647a9c93fbc35344fdb62ad9c387bff39470f2e7c59d43cca96d10b55272d1a" Namespace="kube-system" Pod="coredns-6f6b679f8f-wtbsq" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--wtbsq-eth0" May 13 04:22:07.946335 systemd[1]: run-netns-cni\x2d4ca514e2\x2de6c3\x2d5436\x2d2695\x2d201a8608bc0e.mount: Deactivated successfully. May 13 04:22:07.946426 systemd[1]: run-netns-cni\x2d193860a0\x2db2b5\x2d6cf5\x2d67e2\x2d4b8752cef4b8.mount: Deactivated successfully. May 13 04:22:07.968457 containerd[1461]: time="2025-05-13T04:22:07.968059618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 04:22:07.968457 containerd[1461]: time="2025-05-13T04:22:07.968145107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 04:22:07.968457 containerd[1461]: time="2025-05-13T04:22:07.968166265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:22:07.968457 containerd[1461]: time="2025-05-13T04:22:07.968263145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:22:07.993677 containerd[1461]: time="2025-05-13T04:22:07.993536924Z" level=info msg="StartContainer for \"7d6b87cc42078f9d77054c87ffca311bb1edc0a4e95639395f1d0d3492345e0a\" returns successfully" May 13 04:22:08.032414 systemd[1]: run-containerd-runc-k8s.io-047298df11e705c125a16d66891d8af5d99bcf6579c7a82e25ba28f36f765e62-runc.vP7Ssa.mount: Deactivated successfully. May 13 04:22:08.044139 systemd[1]: Started cri-containerd-047298df11e705c125a16d66891d8af5d99bcf6579c7a82e25ba28f36f765e62.scope - libcontainer container 047298df11e705c125a16d66891d8af5d99bcf6579c7a82e25ba28f36f765e62. May 13 04:22:08.064704 containerd[1461]: time="2025-05-13T04:22:08.064384804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 04:22:08.065202 containerd[1461]: time="2025-05-13T04:22:08.064651948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 04:22:08.065202 containerd[1461]: time="2025-05-13T04:22:08.065171287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:22:08.066322 containerd[1461]: time="2025-05-13T04:22:08.066265240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:22:08.129232 containerd[1461]: time="2025-05-13T04:22:08.128941549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69fbd7ff65-7dt7j,Uid:119e7f55-f555-420b-8c77-74d321630fd9,Namespace:calico-system,Attempt:1,} returns sandbox id \"8e2f6f9019cb19e056e64de5f7fec56b97af72560fbf9bc22ef2fb9114a49245\"" May 13 04:22:08.133108 containerd[1461]: time="2025-05-13T04:22:08.132874107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 13 04:22:08.164402 systemd[1]: Started cri-containerd-e647a9c93fbc35344fdb62ad9c387bff39470f2e7c59d43cca96d10b55272d1a.scope - libcontainer container e647a9c93fbc35344fdb62ad9c387bff39470f2e7c59d43cca96d10b55272d1a. May 13 04:22:08.181513 containerd[1461]: time="2025-05-13T04:22:08.181105937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zcwgb,Uid:6b475fce-0503-4d3c-9f11-a776dc4b6dcc,Namespace:calico-system,Attempt:1,} returns sandbox id \"95c62c043ec3d0bbf3b0db5e6c45ca176a30ee1c0f701dd2275410f6c668c40f\"" May 13 04:22:08.216051 containerd[1461]: 2025-05-13 04:22:08.035 [INFO][4371] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" May 13 04:22:08.216051 containerd[1461]: 2025-05-13 04:22:08.035 [INFO][4371] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" iface="eth0" netns="/var/run/netns/cni-ce7c1692-5d17-8ef4-d1b9-fcb56f652cd8" May 13 04:22:08.216051 containerd[1461]: 2025-05-13 04:22:08.036 [INFO][4371] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" iface="eth0" netns="/var/run/netns/cni-ce7c1692-5d17-8ef4-d1b9-fcb56f652cd8" May 13 04:22:08.216051 containerd[1461]: 2025-05-13 04:22:08.037 [INFO][4371] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" iface="eth0" netns="/var/run/netns/cni-ce7c1692-5d17-8ef4-d1b9-fcb56f652cd8" May 13 04:22:08.216051 containerd[1461]: 2025-05-13 04:22:08.037 [INFO][4371] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" May 13 04:22:08.216051 containerd[1461]: 2025-05-13 04:22:08.037 [INFO][4371] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" May 13 04:22:08.216051 containerd[1461]: 2025-05-13 04:22:08.187 [INFO][4489] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" HandleID="k8s-pod-network.c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--4s2rd-eth0" May 13 04:22:08.216051 containerd[1461]: 2025-05-13 04:22:08.188 [INFO][4489] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:22:08.216051 containerd[1461]: 2025-05-13 04:22:08.188 [INFO][4489] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:22:08.216051 containerd[1461]: 2025-05-13 04:22:08.205 [WARNING][4489] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" HandleID="k8s-pod-network.c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--4s2rd-eth0" May 13 04:22:08.216051 containerd[1461]: 2025-05-13 04:22:08.205 [INFO][4489] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" HandleID="k8s-pod-network.c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--4s2rd-eth0" May 13 04:22:08.216051 containerd[1461]: 2025-05-13 04:22:08.208 [INFO][4489] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:22:08.216051 containerd[1461]: 2025-05-13 04:22:08.212 [INFO][4371] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" May 13 04:22:08.216519 containerd[1461]: time="2025-05-13T04:22:08.216392310Z" level=info msg="TearDown network for sandbox \"c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225\" successfully" May 13 04:22:08.216519 containerd[1461]: time="2025-05-13T04:22:08.216441722Z" level=info msg="StopPodSandbox for \"c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225\" returns successfully" May 13 04:22:08.218454 containerd[1461]: time="2025-05-13T04:22:08.217796485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66cb4b4698-4s2rd,Uid:0559de8b-b04b-4f89-a57d-6943cdbc6076,Namespace:calico-apiserver,Attempt:1,}" May 13 04:22:08.283058 containerd[1461]: time="2025-05-13T04:22:08.282990328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wtbsq,Uid:180cf766-638d-4296-97c0-3a6bafc8c21a,Namespace:kube-system,Attempt:1,} returns sandbox id \"e647a9c93fbc35344fdb62ad9c387bff39470f2e7c59d43cca96d10b55272d1a\"" May 13 04:22:08.292658 containerd[1461]: time="2025-05-13T04:22:08.292129912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66cb4b4698-8cvfj,Uid:ca42f9a3-6ff2-4160-851b-5223b0b75593,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"047298df11e705c125a16d66891d8af5d99bcf6579c7a82e25ba28f36f765e62\"" May 13 04:22:08.294074 containerd[1461]: time="2025-05-13T04:22:08.293794819Z" level=info msg="CreateContainer within sandbox \"e647a9c93fbc35344fdb62ad9c387bff39470f2e7c59d43cca96d10b55272d1a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 04:22:08.332287 containerd[1461]: time="2025-05-13T04:22:08.332212148Z" level=info msg="CreateContainer within sandbox \"e647a9c93fbc35344fdb62ad9c387bff39470f2e7c59d43cca96d10b55272d1a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8c10bac9e02dd4536d0720977d194dd410070e1e6d3efd6a5478b3614531811d\"" May 13 04:22:08.333949 containerd[1461]: time="2025-05-13T04:22:08.333730204Z" level=info msg="StartContainer for \"8c10bac9e02dd4536d0720977d194dd410070e1e6d3efd6a5478b3614531811d\"" May 13 04:22:08.380267 systemd[1]: Started cri-containerd-8c10bac9e02dd4536d0720977d194dd410070e1e6d3efd6a5478b3614531811d.scope - libcontainer container 8c10bac9e02dd4536d0720977d194dd410070e1e6d3efd6a5478b3614531811d. 
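The c901889e… teardown above is deliberately idempotent: the veth "was already gone. Nothing to do", and when the IPAM handle holds no address the plugin logs a WARNING, retries by workload ID, and still completes the teardown. A sketch of that shape against an in-memory stand-in for the datastore; the map and IDs are illustrative.

// Sketch of the idempotent release seen in the c901889e… teardown:
// missing state is logged and skipped, never treated as a hard failure.
package main

import "log"

var byHandle = map[string][]string{} // handle ID -> addresses (in-memory stand-in)

func releaseByHandle(handle string) []string {
	addrs, ok := byHandle[handle]
	if !ok {
		log.Printf("WARNING: asked to release address but it doesn't exist. Ignoring handle=%s", handle)
		return nil
	}
	delete(byHandle, handle)
	return addrs
}

func release(handle, workloadID string) {
	if addrs := releaseByHandle(handle); addrs != nil {
		log.Printf("released %v via handle", addrs)
		return
	}
	// Fall back to the workload ID, as ipam_plugin.go 440 does.
	if addrs := releaseByHandle(workloadID); addrs != nil {
		log.Printf("released %v via workload ID", addrs)
	}
	// Either way, teardown proceeds: the operation is idempotent.
}

func main() {
	release("k8s-pod-network.c901889e…", "calico--apiserver--66cb4b4698--4s2rd-eth0")
}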
May 13 04:22:08.431110 containerd[1461]: time="2025-05-13T04:22:08.430986831Z" level=info msg="StartContainer for \"8c10bac9e02dd4536d0720977d194dd410070e1e6d3efd6a5478b3614531811d\" returns successfully" May 13 04:22:08.554119 systemd-networkd[1374]: calicf26282e020: Link UP May 13 04:22:08.554326 systemd-networkd[1374]: calicf26282e020: Gained carrier May 13 04:22:08.573922 containerd[1461]: 2025-05-13 04:22:08.327 [INFO][4550] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--4s2rd-eth0 calico-apiserver-66cb4b4698- calico-apiserver 0559de8b-b04b-4f89-a57d-6943cdbc6076 778 0 2025-05-13 04:21:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66cb4b4698 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-n-3bdfb8ea63.novalocal calico-apiserver-66cb4b4698-4s2rd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicf26282e020 [] []}} ContainerID="c93b272025b6135d93c91746ef889d41d9e09bc08a5897ca483cee1ca00ea26e" Namespace="calico-apiserver" Pod="calico-apiserver-66cb4b4698-4s2rd" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--4s2rd-" May 13 04:22:08.573922 containerd[1461]: 2025-05-13 04:22:08.327 [INFO][4550] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c93b272025b6135d93c91746ef889d41d9e09bc08a5897ca483cee1ca00ea26e" Namespace="calico-apiserver" Pod="calico-apiserver-66cb4b4698-4s2rd" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--4s2rd-eth0" May 13 04:22:08.573922 containerd[1461]: 2025-05-13 04:22:08.399 [INFO][4578] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c93b272025b6135d93c91746ef889d41d9e09bc08a5897ca483cee1ca00ea26e" HandleID="k8s-pod-network.c93b272025b6135d93c91746ef889d41d9e09bc08a5897ca483cee1ca00ea26e" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--4s2rd-eth0" May 13 04:22:08.573922 containerd[1461]: 2025-05-13 04:22:08.424 [INFO][4578] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c93b272025b6135d93c91746ef889d41d9e09bc08a5897ca483cee1ca00ea26e" HandleID="k8s-pod-network.c93b272025b6135d93c91746ef889d41d9e09bc08a5897ca483cee1ca00ea26e" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--4s2rd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011de80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-n-3bdfb8ea63.novalocal", "pod":"calico-apiserver-66cb4b4698-4s2rd", "timestamp":"2025-05-13 04:22:08.399584647 +0000 UTC"}, Hostname:"ci-4081-3-3-n-3bdfb8ea63.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 04:22:08.573922 containerd[1461]: 2025-05-13 04:22:08.425 [INFO][4578] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:22:08.573922 containerd[1461]: 2025-05-13 04:22:08.425 [INFO][4578] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
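systemd-networkd announces each new cali… interface with "Link UP" followed by "Gained carrier" once the veth peer comes up inside the pod. The underlying rtnetlink events can be watched with ip-monitor; a small wrapper, filtering for Calico's host-side veths:

// Stream the link events behind systemd-networkd's "Link UP" /
// "Gained carrier" lines by shelling out to `ip monitor link`.
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("ip", "monitor", "link")
	out, _ := cmd.StdoutPipe()
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(out)
	for sc.Scan() {
		line := sc.Text()
		if strings.Contains(line, "cali") { // only Calico host-side ends
			fmt.Println(line)
		}
	}
}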
May 13 04:22:08.573922 containerd[1461]: 2025-05-13 04:22:08.425 [INFO][4578] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-3bdfb8ea63.novalocal' May 13 04:22:08.573922 containerd[1461]: 2025-05-13 04:22:08.428 [INFO][4578] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c93b272025b6135d93c91746ef889d41d9e09bc08a5897ca483cee1ca00ea26e" host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:08.573922 containerd[1461]: 2025-05-13 04:22:08.520 [INFO][4578] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:08.573922 containerd[1461]: 2025-05-13 04:22:08.527 [INFO][4578] ipam/ipam.go 489: Trying affinity for 192.168.11.128/26 host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:08.573922 containerd[1461]: 2025-05-13 04:22:08.530 [INFO][4578] ipam/ipam.go 155: Attempting to load block cidr=192.168.11.128/26 host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:08.573922 containerd[1461]: 2025-05-13 04:22:08.532 [INFO][4578] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.11.128/26 host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:08.573922 containerd[1461]: 2025-05-13 04:22:08.533 [INFO][4578] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.11.128/26 handle="k8s-pod-network.c93b272025b6135d93c91746ef889d41d9e09bc08a5897ca483cee1ca00ea26e" host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:08.573922 containerd[1461]: 2025-05-13 04:22:08.534 [INFO][4578] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c93b272025b6135d93c91746ef889d41d9e09bc08a5897ca483cee1ca00ea26e May 13 04:22:08.573922 containerd[1461]: 2025-05-13 04:22:08.541 [INFO][4578] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.11.128/26 handle="k8s-pod-network.c93b272025b6135d93c91746ef889d41d9e09bc08a5897ca483cee1ca00ea26e" host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:08.573922 containerd[1461]: 2025-05-13 04:22:08.548 [INFO][4578] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.11.134/26] block=192.168.11.128/26 handle="k8s-pod-network.c93b272025b6135d93c91746ef889d41d9e09bc08a5897ca483cee1ca00ea26e" host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:08.573922 containerd[1461]: 2025-05-13 04:22:08.548 [INFO][4578] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.11.134/26] handle="k8s-pod-network.c93b272025b6135d93c91746ef889d41d9e09bc08a5897ca483cee1ca00ea26e" host="ci-4081-3-3-n-3bdfb8ea63.novalocal" May 13 04:22:08.573922 containerd[1461]: 2025-05-13 04:22:08.548 [INFO][4578] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 04:22:08.573922 containerd[1461]: 2025-05-13 04:22:08.548 [INFO][4578] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.11.134/26] IPv6=[] ContainerID="c93b272025b6135d93c91746ef889d41d9e09bc08a5897ca483cee1ca00ea26e" HandleID="k8s-pod-network.c93b272025b6135d93c91746ef889d41d9e09bc08a5897ca483cee1ca00ea26e" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--4s2rd-eth0" May 13 04:22:08.574859 containerd[1461]: 2025-05-13 04:22:08.550 [INFO][4550] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c93b272025b6135d93c91746ef889d41d9e09bc08a5897ca483cee1ca00ea26e" Namespace="calico-apiserver" Pod="calico-apiserver-66cb4b4698-4s2rd" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--4s2rd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--4s2rd-eth0", GenerateName:"calico-apiserver-66cb4b4698-", Namespace:"calico-apiserver", SelfLink:"", UID:"0559de8b-b04b-4f89-a57d-6943cdbc6076", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 21, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66cb4b4698", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3bdfb8ea63.novalocal", ContainerID:"", Pod:"calico-apiserver-66cb4b4698-4s2rd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicf26282e020", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:22:08.574859 containerd[1461]: 2025-05-13 04:22:08.550 [INFO][4550] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.11.134/32] ContainerID="c93b272025b6135d93c91746ef889d41d9e09bc08a5897ca483cee1ca00ea26e" Namespace="calico-apiserver" Pod="calico-apiserver-66cb4b4698-4s2rd" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--4s2rd-eth0" May 13 04:22:08.574859 containerd[1461]: 2025-05-13 04:22:08.550 [INFO][4550] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicf26282e020 ContainerID="c93b272025b6135d93c91746ef889d41d9e09bc08a5897ca483cee1ca00ea26e" Namespace="calico-apiserver" Pod="calico-apiserver-66cb4b4698-4s2rd" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--4s2rd-eth0" May 13 04:22:08.574859 containerd[1461]: 2025-05-13 04:22:08.553 [INFO][4550] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c93b272025b6135d93c91746ef889d41d9e09bc08a5897ca483cee1ca00ea26e" Namespace="calico-apiserver" Pod="calico-apiserver-66cb4b4698-4s2rd" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--4s2rd-eth0" May 13 04:22:08.574859 
containerd[1461]: 2025-05-13 04:22:08.554 [INFO][4550] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c93b272025b6135d93c91746ef889d41d9e09bc08a5897ca483cee1ca00ea26e" Namespace="calico-apiserver" Pod="calico-apiserver-66cb4b4698-4s2rd" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--4s2rd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--4s2rd-eth0", GenerateName:"calico-apiserver-66cb4b4698-", Namespace:"calico-apiserver", SelfLink:"", UID:"0559de8b-b04b-4f89-a57d-6943cdbc6076", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 21, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66cb4b4698", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3bdfb8ea63.novalocal", ContainerID:"c93b272025b6135d93c91746ef889d41d9e09bc08a5897ca483cee1ca00ea26e", Pod:"calico-apiserver-66cb4b4698-4s2rd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicf26282e020", MAC:"c2:ef:51:b6:d2:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:22:08.574859 containerd[1461]: 2025-05-13 04:22:08.571 [INFO][4550] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c93b272025b6135d93c91746ef889d41d9e09bc08a5897ca483cee1ca00ea26e" Namespace="calico-apiserver" Pod="calico-apiserver-66cb4b4698-4s2rd" WorkloadEndpoint="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--4s2rd-eth0" May 13 04:22:08.604940 containerd[1461]: time="2025-05-13T04:22:08.604736414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 04:22:08.604940 containerd[1461]: time="2025-05-13T04:22:08.604795053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 04:22:08.604940 containerd[1461]: time="2025-05-13T04:22:08.604815430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:22:08.605134 containerd[1461]: time="2025-05-13T04:22:08.605049413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 04:22:08.626116 systemd[1]: Started cri-containerd-c93b272025b6135d93c91746ef889d41d9e09bc08a5897ca483cee1ca00ea26e.scope - libcontainer container c93b272025b6135d93c91746ef889d41d9e09bc08a5897ca483cee1ca00ea26e. 
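The "Gained IPv6LL" records that follow mark each interface acquiring an IPv6 link-local address. Under the classic EUI-64 scheme the address is fe80:: plus the MAC with ff:fe spliced into the middle and the universal/local bit flipped, so calicf26282e020's MAC c2:ef:51:b6:d2:9c yields fe80::c0ef:51ff:feb6:d29c; networkd can be configured with other address generators, so treat this as the textbook derivation rather than a claim about this host.

// EUI-64 derivation of an IPv6 link-local address from a MAC.
package main

import (
	"fmt"
	"net"
)

func linkLocalFromMAC(mac net.HardwareAddr) net.IP {
	ip := make(net.IP, 16)
	ip[0], ip[1] = 0xfe, 0x80 // fe80::/64
	ip[8] = mac[0] ^ 0x02     // flip the universal/local bit
	ip[9] = mac[1]
	ip[10] = mac[2]
	ip[11], ip[12] = 0xff, 0xfe // EUI-64 filler bytes
	ip[13] = mac[3]
	ip[14] = mac[4]
	ip[15] = mac[5]
	return ip
}

func main() {
	mac, _ := net.ParseMAC("c2:ef:51:b6:d2:9c") // calicf26282e020, from the log
	fmt.Println(linkLocalFromMAC(mac))          // fe80::c0ef:51ff:feb6:d29c
}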
May 13 04:22:08.673109 containerd[1461]: time="2025-05-13T04:22:08.673073066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66cb4b4698-4s2rd,Uid:0559de8b-b04b-4f89-a57d-6943cdbc6076,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c93b272025b6135d93c91746ef889d41d9e09bc08a5897ca483cee1ca00ea26e\"" May 13 04:22:08.810239 systemd-networkd[1374]: calibb119b8c4ea: Gained IPv6LL May 13 04:22:08.874248 systemd-networkd[1374]: cali051dd3f6b74: Gained IPv6LL May 13 04:22:08.944278 systemd[1]: run-netns-cni\x2dce7c1692\x2d5d17\x2d8ef4\x2dd1b9\x2dfcb56f652cd8.mount: Deactivated successfully. May 13 04:22:09.021838 kubelet[2587]: I0513 04:22:09.021773 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-wtbsq" podStartSLOduration=40.021756086 podStartE2EDuration="40.021756086s" podCreationTimestamp="2025-05-13 04:21:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 04:22:09.005930298 +0000 UTC m=+43.470105678" watchObservedRunningTime="2025-05-13 04:22:09.021756086 +0000 UTC m=+43.485931466" May 13 04:22:09.050330 kubelet[2587]: I0513 04:22:09.050201 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-69vql" podStartSLOduration=40.050181536 podStartE2EDuration="40.050181536s" podCreationTimestamp="2025-05-13 04:21:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 04:22:09.023069487 +0000 UTC m=+43.487244867" watchObservedRunningTime="2025-05-13 04:22:09.050181536 +0000 UTC m=+43.514356916" May 13 04:22:09.066198 systemd-networkd[1374]: calid97eb2b02ec: Gained IPv6LL May 13 04:22:09.258571 systemd-networkd[1374]: cali3c6fa05ff04: Gained IPv6LL May 13 04:22:09.259689 systemd-networkd[1374]: calic16d654aae2: Gained IPv6LL May 13 04:22:10.538143 systemd-networkd[1374]: calicf26282e020: Gained IPv6LL May 13 04:22:12.408344 containerd[1461]: time="2025-05-13T04:22:12.408133733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:22:12.409989 containerd[1461]: time="2025-05-13T04:22:12.409769006Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 13 04:22:12.411524 containerd[1461]: time="2025-05-13T04:22:12.411209178Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:22:12.414082 containerd[1461]: time="2025-05-13T04:22:12.414047882Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:22:12.414852 containerd[1461]: time="2025-05-13T04:22:12.414795568Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 4.28188872s" May 13 04:22:12.414903 
containerd[1461]: time="2025-05-13T04:22:12.414853666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 13 04:22:12.416186 containerd[1461]: time="2025-05-13T04:22:12.416153517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 13 04:22:12.427634 containerd[1461]: time="2025-05-13T04:22:12.427598983Z" level=info msg="CreateContainer within sandbox \"8e2f6f9019cb19e056e64de5f7fec56b97af72560fbf9bc22ef2fb9114a49245\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 13 04:22:12.457698 containerd[1461]: time="2025-05-13T04:22:12.457651036Z" level=info msg="CreateContainer within sandbox \"8e2f6f9019cb19e056e64de5f7fec56b97af72560fbf9bc22ef2fb9114a49245\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b0b077c42b45775bc2d8e7597bce7da0f484a4fc9c3bfee0fc38455415399cc1\"" May 13 04:22:12.459539 containerd[1461]: time="2025-05-13T04:22:12.458455107Z" level=info msg="StartContainer for \"b0b077c42b45775bc2d8e7597bce7da0f484a4fc9c3bfee0fc38455415399cc1\"" May 13 04:22:12.489117 systemd[1]: Started cri-containerd-b0b077c42b45775bc2d8e7597bce7da0f484a4fc9c3bfee0fc38455415399cc1.scope - libcontainer container b0b077c42b45775bc2d8e7597bce7da0f484a4fc9c3bfee0fc38455415399cc1. May 13 04:22:12.539346 containerd[1461]: time="2025-05-13T04:22:12.539303684Z" level=info msg="StartContainer for \"b0b077c42b45775bc2d8e7597bce7da0f484a4fc9c3bfee0fc38455415399cc1\" returns successfully" May 13 04:22:13.036027 kubelet[2587]: I0513 04:22:13.035244 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-69fbd7ff65-7dt7j" podStartSLOduration=32.751595573 podStartE2EDuration="37.035206994s" podCreationTimestamp="2025-05-13 04:21:36 +0000 UTC" firstStartedPulling="2025-05-13 04:22:08.132376538 +0000 UTC m=+42.596551928" lastFinishedPulling="2025-05-13 04:22:12.415987949 +0000 UTC m=+46.880163349" observedRunningTime="2025-05-13 04:22:13.032981904 +0000 UTC m=+47.497157294" watchObservedRunningTime="2025-05-13 04:22:13.035206994 +0000 UTC m=+47.499382424" May 13 04:22:14.015072 kubelet[2587]: I0513 04:22:14.014858 2587 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 04:22:14.609949 containerd[1461]: time="2025-05-13T04:22:14.609671446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:22:14.611551 containerd[1461]: time="2025-05-13T04:22:14.611344533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 13 04:22:14.613056 containerd[1461]: time="2025-05-13T04:22:14.613021749Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:22:14.615911 containerd[1461]: time="2025-05-13T04:22:14.615852285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:22:14.616918 containerd[1461]: time="2025-05-13T04:22:14.616543349Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id 
\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 2.200358644s" May 13 04:22:14.616918 containerd[1461]: time="2025-05-13T04:22:14.616577552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 13 04:22:14.618183 containerd[1461]: time="2025-05-13T04:22:14.618152827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 04:22:14.620157 containerd[1461]: time="2025-05-13T04:22:14.619933664Z" level=info msg="CreateContainer within sandbox \"95c62c043ec3d0bbf3b0db5e6c45ca176a30ee1c0f701dd2275410f6c668c40f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 13 04:22:14.646973 containerd[1461]: time="2025-05-13T04:22:14.646908088Z" level=info msg="CreateContainer within sandbox \"95c62c043ec3d0bbf3b0db5e6c45ca176a30ee1c0f701dd2275410f6c668c40f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"90c98a74592cf429c539490b5cd3e5c319e1e335da20db64ad0e0be8afd67c70\"" May 13 04:22:14.648913 containerd[1461]: time="2025-05-13T04:22:14.647574987Z" level=info msg="StartContainer for \"90c98a74592cf429c539490b5cd3e5c319e1e335da20db64ad0e0be8afd67c70\"" May 13 04:22:14.685130 systemd[1]: Started cri-containerd-90c98a74592cf429c539490b5cd3e5c319e1e335da20db64ad0e0be8afd67c70.scope - libcontainer container 90c98a74592cf429c539490b5cd3e5c319e1e335da20db64ad0e0be8afd67c70. May 13 04:22:14.719538 containerd[1461]: time="2025-05-13T04:22:14.719417333Z" level=info msg="StartContainer for \"90c98a74592cf429c539490b5cd3e5c319e1e335da20db64ad0e0be8afd67c70\" returns successfully" May 13 04:22:16.952055 kubelet[2587]: I0513 04:22:16.950655 2587 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 04:22:19.996062 containerd[1461]: time="2025-05-13T04:22:19.995954932Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:22:19.997600 containerd[1461]: time="2025-05-13T04:22:19.997534534Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 13 04:22:20.000409 containerd[1461]: time="2025-05-13T04:22:19.999039325Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:22:20.002201 containerd[1461]: time="2025-05-13T04:22:20.002078927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:22:20.002939 containerd[1461]: time="2025-05-13T04:22:20.002907179Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 5.384719187s" May 13 04:22:20.003112 containerd[1461]: time="2025-05-13T04:22:20.003030089Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 13 04:22:20.006560 containerd[1461]: time="2025-05-13T04:22:20.006196377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 04:22:20.007301 containerd[1461]: time="2025-05-13T04:22:20.007280077Z" level=info msg="CreateContainer within sandbox \"047298df11e705c125a16d66891d8af5d99bcf6579c7a82e25ba28f36f765e62\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 04:22:20.030665 containerd[1461]: time="2025-05-13T04:22:20.030609468Z" level=info msg="CreateContainer within sandbox \"047298df11e705c125a16d66891d8af5d99bcf6579c7a82e25ba28f36f765e62\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4b1b99c3d3737a6bf8b359529c79927a171c81bb138e9c171977f238dafe80b5\"" May 13 04:22:20.033031 containerd[1461]: time="2025-05-13T04:22:20.032937095Z" level=info msg="StartContainer for \"4b1b99c3d3737a6bf8b359529c79927a171c81bb138e9c171977f238dafe80b5\"" May 13 04:22:20.091117 systemd[1]: Started cri-containerd-4b1b99c3d3737a6bf8b359529c79927a171c81bb138e9c171977f238dafe80b5.scope - libcontainer container 4b1b99c3d3737a6bf8b359529c79927a171c81bb138e9c171977f238dafe80b5. May 13 04:22:20.131889 containerd[1461]: time="2025-05-13T04:22:20.131657641Z" level=info msg="StartContainer for \"4b1b99c3d3737a6bf8b359529c79927a171c81bb138e9c171977f238dafe80b5\" returns successfully" May 13 04:22:20.517208 containerd[1461]: time="2025-05-13T04:22:20.517119905Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:22:20.518220 containerd[1461]: time="2025-05-13T04:22:20.518166565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 13 04:22:20.523923 containerd[1461]: time="2025-05-13T04:22:20.523859892Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 517.583285ms" May 13 04:22:20.524541 containerd[1461]: time="2025-05-13T04:22:20.524181101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 13 04:22:20.529422 containerd[1461]: time="2025-05-13T04:22:20.527590782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 13 04:22:20.544849 containerd[1461]: time="2025-05-13T04:22:20.544449831Z" level=info msg="CreateContainer within sandbox \"c93b272025b6135d93c91746ef889d41d9e09bc08a5897ca483cee1ca00ea26e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 04:22:20.608711 containerd[1461]: time="2025-05-13T04:22:20.608203934Z" level=info msg="CreateContainer within sandbox \"c93b272025b6135d93c91746ef889d41d9e09bc08a5897ca483cee1ca00ea26e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"821ada18682725167a7d582b5a5b09d95ee963715de14dabce3b19012cd872a0\"" May 13 04:22:20.611171 containerd[1461]: time="2025-05-13T04:22:20.610797416Z" level=info msg="StartContainer for 
\"821ada18682725167a7d582b5a5b09d95ee963715de14dabce3b19012cd872a0\"" May 13 04:22:20.655110 systemd[1]: Started cri-containerd-821ada18682725167a7d582b5a5b09d95ee963715de14dabce3b19012cd872a0.scope - libcontainer container 821ada18682725167a7d582b5a5b09d95ee963715de14dabce3b19012cd872a0. May 13 04:22:20.706389 containerd[1461]: time="2025-05-13T04:22:20.706136372Z" level=info msg="StartContainer for \"821ada18682725167a7d582b5a5b09d95ee963715de14dabce3b19012cd872a0\" returns successfully" May 13 04:22:21.027502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3023074077.mount: Deactivated successfully. May 13 04:22:21.064749 kubelet[2587]: I0513 04:22:21.064159 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-66cb4b4698-8cvfj" podStartSLOduration=33.356196544 podStartE2EDuration="45.064142535s" podCreationTimestamp="2025-05-13 04:21:36 +0000 UTC" firstStartedPulling="2025-05-13 04:22:08.296805835 +0000 UTC m=+42.760981215" lastFinishedPulling="2025-05-13 04:22:20.004751826 +0000 UTC m=+54.468927206" observedRunningTime="2025-05-13 04:22:21.062568802 +0000 UTC m=+55.526744192" watchObservedRunningTime="2025-05-13 04:22:21.064142535 +0000 UTC m=+55.528317915" May 13 04:22:21.083144 kubelet[2587]: I0513 04:22:21.083054 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-66cb4b4698-4s2rd" podStartSLOduration=33.231552559 podStartE2EDuration="45.083033234s" podCreationTimestamp="2025-05-13 04:21:36 +0000 UTC" firstStartedPulling="2025-05-13 04:22:08.674836475 +0000 UTC m=+43.139011865" lastFinishedPulling="2025-05-13 04:22:20.52631715 +0000 UTC m=+54.990492540" observedRunningTime="2025-05-13 04:22:21.082620684 +0000 UTC m=+55.546796064" watchObservedRunningTime="2025-05-13 04:22:21.083033234 +0000 UTC m=+55.547208614" May 13 04:22:22.051858 kubelet[2587]: I0513 04:22:22.051453 2587 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 04:22:23.027980 containerd[1461]: time="2025-05-13T04:22:23.027903508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:22:23.030083 containerd[1461]: time="2025-05-13T04:22:23.029693586Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 13 04:22:23.031691 containerd[1461]: time="2025-05-13T04:22:23.031630659Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:22:23.035199 containerd[1461]: time="2025-05-13T04:22:23.035150765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 04:22:23.036308 containerd[1461]: time="2025-05-13T04:22:23.036193690Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.50854515s" May 13 04:22:23.036308 containerd[1461]: 
time="2025-05-13T04:22:23.036225119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 13 04:22:23.058261 containerd[1461]: time="2025-05-13T04:22:23.057908533Z" level=info msg="CreateContainer within sandbox \"95c62c043ec3d0bbf3b0db5e6c45ca176a30ee1c0f701dd2275410f6c668c40f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 13 04:22:23.083419 containerd[1461]: time="2025-05-13T04:22:23.083313899Z" level=info msg="CreateContainer within sandbox \"95c62c043ec3d0bbf3b0db5e6c45ca176a30ee1c0f701dd2275410f6c668c40f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"28faeee5f1a8eee6d916c1c373ee68869e6c99ec7d01e89e96cbfa4d95c04a65\"" May 13 04:22:23.085063 containerd[1461]: time="2025-05-13T04:22:23.083836213Z" level=info msg="StartContainer for \"28faeee5f1a8eee6d916c1c373ee68869e6c99ec7d01e89e96cbfa4d95c04a65\"" May 13 04:22:23.116513 systemd[1]: run-containerd-runc-k8s.io-28faeee5f1a8eee6d916c1c373ee68869e6c99ec7d01e89e96cbfa4d95c04a65-runc.9Ins6D.mount: Deactivated successfully. May 13 04:22:23.123284 systemd[1]: Started cri-containerd-28faeee5f1a8eee6d916c1c373ee68869e6c99ec7d01e89e96cbfa4d95c04a65.scope - libcontainer container 28faeee5f1a8eee6d916c1c373ee68869e6c99ec7d01e89e96cbfa4d95c04a65. May 13 04:22:23.163302 containerd[1461]: time="2025-05-13T04:22:23.163245367Z" level=info msg="StartContainer for \"28faeee5f1a8eee6d916c1c373ee68869e6c99ec7d01e89e96cbfa4d95c04a65\" returns successfully" May 13 04:22:23.800279 kubelet[2587]: I0513 04:22:23.800203 2587 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 13 04:22:23.800279 kubelet[2587]: I0513 04:22:23.800272 2587 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 13 04:22:24.129353 kubelet[2587]: I0513 04:22:24.129157 2587 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zcwgb" podStartSLOduration=33.275341359 podStartE2EDuration="48.129086542s" podCreationTimestamp="2025-05-13 04:21:36 +0000 UTC" firstStartedPulling="2025-05-13 04:22:08.183777336 +0000 UTC m=+42.647952716" lastFinishedPulling="2025-05-13 04:22:23.037522509 +0000 UTC m=+57.501697899" observedRunningTime="2025-05-13 04:22:24.126323738 +0000 UTC m=+58.590499178" watchObservedRunningTime="2025-05-13 04:22:24.129086542 +0000 UTC m=+58.593261973" May 13 04:22:26.136903 containerd[1461]: time="2025-05-13T04:22:26.136366077Z" level=info msg="StopPodSandbox for \"c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225\"" May 13 04:22:26.245701 containerd[1461]: 2025-05-13 04:22:26.210 [WARNING][4990] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--4s2rd-eth0", GenerateName:"calico-apiserver-66cb4b4698-", Namespace:"calico-apiserver", SelfLink:"", UID:"0559de8b-b04b-4f89-a57d-6943cdbc6076", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 21, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66cb4b4698", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3bdfb8ea63.novalocal", ContainerID:"c93b272025b6135d93c91746ef889d41d9e09bc08a5897ca483cee1ca00ea26e", Pod:"calico-apiserver-66cb4b4698-4s2rd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicf26282e020", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:22:26.245701 containerd[1461]: 2025-05-13 04:22:26.210 [INFO][4990] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" May 13 04:22:26.245701 containerd[1461]: 2025-05-13 04:22:26.210 [INFO][4990] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" iface="eth0" netns="" May 13 04:22:26.245701 containerd[1461]: 2025-05-13 04:22:26.210 [INFO][4990] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" May 13 04:22:26.245701 containerd[1461]: 2025-05-13 04:22:26.210 [INFO][4990] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" May 13 04:22:26.245701 containerd[1461]: 2025-05-13 04:22:26.233 [INFO][4998] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" HandleID="k8s-pod-network.c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--4s2rd-eth0" May 13 04:22:26.245701 containerd[1461]: 2025-05-13 04:22:26.233 [INFO][4998] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:22:26.245701 containerd[1461]: 2025-05-13 04:22:26.233 [INFO][4998] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:22:26.245701 containerd[1461]: 2025-05-13 04:22:26.241 [WARNING][4998] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" HandleID="k8s-pod-network.c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--4s2rd-eth0" May 13 04:22:26.245701 containerd[1461]: 2025-05-13 04:22:26.241 [INFO][4998] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" HandleID="k8s-pod-network.c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--4s2rd-eth0" May 13 04:22:26.245701 containerd[1461]: 2025-05-13 04:22:26.243 [INFO][4998] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:22:26.245701 containerd[1461]: 2025-05-13 04:22:26.244 [INFO][4990] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" May 13 04:22:26.245701 containerd[1461]: time="2025-05-13T04:22:26.245552189Z" level=info msg="TearDown network for sandbox \"c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225\" successfully" May 13 04:22:26.245701 containerd[1461]: time="2025-05-13T04:22:26.245598515Z" level=info msg="StopPodSandbox for \"c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225\" returns successfully" May 13 04:22:26.246860 containerd[1461]: time="2025-05-13T04:22:26.246481424Z" level=info msg="RemovePodSandbox for \"c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225\"" May 13 04:22:26.246860 containerd[1461]: time="2025-05-13T04:22:26.246507792Z" level=info msg="Forcibly stopping sandbox \"c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225\"" May 13 04:22:26.322690 containerd[1461]: 2025-05-13 04:22:26.287 [WARNING][5016] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--4s2rd-eth0", GenerateName:"calico-apiserver-66cb4b4698-", Namespace:"calico-apiserver", SelfLink:"", UID:"0559de8b-b04b-4f89-a57d-6943cdbc6076", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 21, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66cb4b4698", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3bdfb8ea63.novalocal", ContainerID:"c93b272025b6135d93c91746ef889d41d9e09bc08a5897ca483cee1ca00ea26e", Pod:"calico-apiserver-66cb4b4698-4s2rd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicf26282e020", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:22:26.322690 containerd[1461]: 2025-05-13 04:22:26.288 [INFO][5016] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" May 13 04:22:26.322690 containerd[1461]: 2025-05-13 04:22:26.288 [INFO][5016] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" iface="eth0" netns="" May 13 04:22:26.322690 containerd[1461]: 2025-05-13 04:22:26.288 [INFO][5016] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" May 13 04:22:26.322690 containerd[1461]: 2025-05-13 04:22:26.288 [INFO][5016] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" May 13 04:22:26.322690 containerd[1461]: 2025-05-13 04:22:26.309 [INFO][5023] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" HandleID="k8s-pod-network.c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--4s2rd-eth0" May 13 04:22:26.322690 containerd[1461]: 2025-05-13 04:22:26.310 [INFO][5023] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:22:26.322690 containerd[1461]: 2025-05-13 04:22:26.310 [INFO][5023] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:22:26.322690 containerd[1461]: 2025-05-13 04:22:26.318 [WARNING][5023] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" HandleID="k8s-pod-network.c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--4s2rd-eth0" May 13 04:22:26.322690 containerd[1461]: 2025-05-13 04:22:26.318 [INFO][5023] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" HandleID="k8s-pod-network.c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--4s2rd-eth0" May 13 04:22:26.322690 containerd[1461]: 2025-05-13 04:22:26.320 [INFO][5023] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:22:26.322690 containerd[1461]: 2025-05-13 04:22:26.321 [INFO][5016] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225" May 13 04:22:26.323198 containerd[1461]: time="2025-05-13T04:22:26.322725572Z" level=info msg="TearDown network for sandbox \"c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225\" successfully" May 13 04:22:26.327806 containerd[1461]: time="2025-05-13T04:22:26.327771713Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 04:22:26.327999 containerd[1461]: time="2025-05-13T04:22:26.327830243Z" level=info msg="RemovePodSandbox \"c901889e060bf93b24bece6c142bfc55da829ffa50b071933657b28dfc369225\" returns successfully" May 13 04:22:26.328848 containerd[1461]: time="2025-05-13T04:22:26.328539647Z" level=info msg="StopPodSandbox for \"e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371\"" May 13 04:22:26.402692 containerd[1461]: 2025-05-13 04:22:26.367 [WARNING][5042] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--69vql-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5ec8c59e-3a9a-4a96-81ba-639baf60f4aa", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 21, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3bdfb8ea63.novalocal", ContainerID:"6779e1150d58487985f0aa6af9b536078213a8ddafc7ede113502dacf6d30dc6", Pod:"coredns-6f6b679f8f-69vql", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb119b8c4ea", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:22:26.402692 containerd[1461]: 2025-05-13 04:22:26.367 [INFO][5042] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" May 13 04:22:26.402692 containerd[1461]: 2025-05-13 04:22:26.367 [INFO][5042] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" iface="eth0" netns="" May 13 04:22:26.402692 containerd[1461]: 2025-05-13 04:22:26.367 [INFO][5042] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" May 13 04:22:26.402692 containerd[1461]: 2025-05-13 04:22:26.367 [INFO][5042] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" May 13 04:22:26.402692 containerd[1461]: 2025-05-13 04:22:26.389 [INFO][5050] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" HandleID="k8s-pod-network.e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--69vql-eth0" May 13 04:22:26.402692 containerd[1461]: 2025-05-13 04:22:26.389 [INFO][5050] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:22:26.402692 containerd[1461]: 2025-05-13 04:22:26.389 [INFO][5050] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 04:22:26.402692 containerd[1461]: 2025-05-13 04:22:26.398 [WARNING][5050] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" HandleID="k8s-pod-network.e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--69vql-eth0" May 13 04:22:26.402692 containerd[1461]: 2025-05-13 04:22:26.398 [INFO][5050] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" HandleID="k8s-pod-network.e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--69vql-eth0" May 13 04:22:26.402692 containerd[1461]: 2025-05-13 04:22:26.400 [INFO][5050] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:22:26.402692 containerd[1461]: 2025-05-13 04:22:26.401 [INFO][5042] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" May 13 04:22:26.402692 containerd[1461]: time="2025-05-13T04:22:26.402637007Z" level=info msg="TearDown network for sandbox \"e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371\" successfully" May 13 04:22:26.402692 containerd[1461]: time="2025-05-13T04:22:26.402664979Z" level=info msg="StopPodSandbox for \"e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371\" returns successfully" May 13 04:22:26.403927 containerd[1461]: time="2025-05-13T04:22:26.403152620Z" level=info msg="RemovePodSandbox for \"e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371\"" May 13 04:22:26.403927 containerd[1461]: time="2025-05-13T04:22:26.403175723Z" level=info msg="Forcibly stopping sandbox \"e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371\"" May 13 04:22:26.480986 containerd[1461]: 2025-05-13 04:22:26.445 [WARNING][5069] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--69vql-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5ec8c59e-3a9a-4a96-81ba-639baf60f4aa", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 21, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3bdfb8ea63.novalocal", ContainerID:"6779e1150d58487985f0aa6af9b536078213a8ddafc7ede113502dacf6d30dc6", Pod:"coredns-6f6b679f8f-69vql", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb119b8c4ea", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:22:26.480986 containerd[1461]: 2025-05-13 04:22:26.445 [INFO][5069] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" May 13 04:22:26.480986 containerd[1461]: 2025-05-13 04:22:26.445 [INFO][5069] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" iface="eth0" netns="" May 13 04:22:26.480986 containerd[1461]: 2025-05-13 04:22:26.445 [INFO][5069] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" May 13 04:22:26.480986 containerd[1461]: 2025-05-13 04:22:26.445 [INFO][5069] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" May 13 04:22:26.480986 containerd[1461]: 2025-05-13 04:22:26.469 [INFO][5076] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" HandleID="k8s-pod-network.e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--69vql-eth0" May 13 04:22:26.480986 containerd[1461]: 2025-05-13 04:22:26.469 [INFO][5076] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:22:26.480986 containerd[1461]: 2025-05-13 04:22:26.469 [INFO][5076] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 04:22:26.480986 containerd[1461]: 2025-05-13 04:22:26.477 [WARNING][5076] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" HandleID="k8s-pod-network.e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--69vql-eth0" May 13 04:22:26.480986 containerd[1461]: 2025-05-13 04:22:26.477 [INFO][5076] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" HandleID="k8s-pod-network.e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--69vql-eth0" May 13 04:22:26.480986 containerd[1461]: 2025-05-13 04:22:26.478 [INFO][5076] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:22:26.480986 containerd[1461]: 2025-05-13 04:22:26.479 [INFO][5069] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371" May 13 04:22:26.482022 containerd[1461]: time="2025-05-13T04:22:26.481109256Z" level=info msg="TearDown network for sandbox \"e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371\" successfully" May 13 04:22:26.486358 containerd[1461]: time="2025-05-13T04:22:26.486204219Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 04:22:26.486358 containerd[1461]: time="2025-05-13T04:22:26.486273157Z" level=info msg="RemovePodSandbox \"e66d27eb119cb122e34685a5b6c37fa387ce6d84f9b75ad404299770bb6a6371\" returns successfully" May 13 04:22:26.487155 containerd[1461]: time="2025-05-13T04:22:26.486892554Z" level=info msg="StopPodSandbox for \"cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c\"" May 13 04:22:26.569108 containerd[1461]: 2025-05-13 04:22:26.526 [WARNING][5094] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-csi--node--driver--zcwgb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6b475fce-0503-4d3c-9f11-a776dc4b6dcc", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 21, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3bdfb8ea63.novalocal", ContainerID:"95c62c043ec3d0bbf3b0db5e6c45ca176a30ee1c0f701dd2275410f6c668c40f", Pod:"csi-node-driver-zcwgb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.11.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic16d654aae2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:22:26.569108 containerd[1461]: 2025-05-13 04:22:26.527 [INFO][5094] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" May 13 04:22:26.569108 containerd[1461]: 2025-05-13 04:22:26.527 [INFO][5094] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" iface="eth0" netns="" May 13 04:22:26.569108 containerd[1461]: 2025-05-13 04:22:26.527 [INFO][5094] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" May 13 04:22:26.569108 containerd[1461]: 2025-05-13 04:22:26.527 [INFO][5094] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" May 13 04:22:26.569108 containerd[1461]: 2025-05-13 04:22:26.553 [INFO][5101] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" HandleID="k8s-pod-network.cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-csi--node--driver--zcwgb-eth0" May 13 04:22:26.569108 containerd[1461]: 2025-05-13 04:22:26.553 [INFO][5101] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:22:26.569108 containerd[1461]: 2025-05-13 04:22:26.553 [INFO][5101] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:22:26.569108 containerd[1461]: 2025-05-13 04:22:26.563 [WARNING][5101] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" HandleID="k8s-pod-network.cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-csi--node--driver--zcwgb-eth0" May 13 04:22:26.569108 containerd[1461]: 2025-05-13 04:22:26.563 [INFO][5101] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" HandleID="k8s-pod-network.cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-csi--node--driver--zcwgb-eth0" May 13 04:22:26.569108 containerd[1461]: 2025-05-13 04:22:26.566 [INFO][5101] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:22:26.569108 containerd[1461]: 2025-05-13 04:22:26.567 [INFO][5094] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" May 13 04:22:26.570389 containerd[1461]: time="2025-05-13T04:22:26.569614245Z" level=info msg="TearDown network for sandbox \"cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c\" successfully" May 13 04:22:26.570389 containerd[1461]: time="2025-05-13T04:22:26.569667064Z" level=info msg="StopPodSandbox for \"cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c\" returns successfully" May 13 04:22:26.570389 containerd[1461]: time="2025-05-13T04:22:26.570107356Z" level=info msg="RemovePodSandbox for \"cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c\"" May 13 04:22:26.570389 containerd[1461]: time="2025-05-13T04:22:26.570132463Z" level=info msg="Forcibly stopping sandbox \"cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c\"" May 13 04:22:26.642375 containerd[1461]: 2025-05-13 04:22:26.607 [WARNING][5119] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-csi--node--driver--zcwgb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6b475fce-0503-4d3c-9f11-a776dc4b6dcc", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 21, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3bdfb8ea63.novalocal", ContainerID:"95c62c043ec3d0bbf3b0db5e6c45ca176a30ee1c0f701dd2275410f6c668c40f", Pod:"csi-node-driver-zcwgb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.11.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic16d654aae2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:22:26.642375 containerd[1461]: 2025-05-13 04:22:26.607 [INFO][5119] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" May 13 04:22:26.642375 containerd[1461]: 2025-05-13 04:22:26.607 [INFO][5119] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" iface="eth0" netns="" May 13 04:22:26.642375 containerd[1461]: 2025-05-13 04:22:26.607 [INFO][5119] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" May 13 04:22:26.642375 containerd[1461]: 2025-05-13 04:22:26.607 [INFO][5119] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" May 13 04:22:26.642375 containerd[1461]: 2025-05-13 04:22:26.631 [INFO][5127] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" HandleID="k8s-pod-network.cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-csi--node--driver--zcwgb-eth0" May 13 04:22:26.642375 containerd[1461]: 2025-05-13 04:22:26.631 [INFO][5127] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:22:26.642375 containerd[1461]: 2025-05-13 04:22:26.631 [INFO][5127] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:22:26.642375 containerd[1461]: 2025-05-13 04:22:26.638 [WARNING][5127] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" HandleID="k8s-pod-network.cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-csi--node--driver--zcwgb-eth0" May 13 04:22:26.642375 containerd[1461]: 2025-05-13 04:22:26.638 [INFO][5127] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" HandleID="k8s-pod-network.cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-csi--node--driver--zcwgb-eth0" May 13 04:22:26.642375 containerd[1461]: 2025-05-13 04:22:26.640 [INFO][5127] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:22:26.642375 containerd[1461]: 2025-05-13 04:22:26.641 [INFO][5119] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c" May 13 04:22:26.643303 containerd[1461]: time="2025-05-13T04:22:26.642867540Z" level=info msg="TearDown network for sandbox \"cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c\" successfully" May 13 04:22:26.647628 containerd[1461]: time="2025-05-13T04:22:26.647474892Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 04:22:26.647628 containerd[1461]: time="2025-05-13T04:22:26.647540114Z" level=info msg="RemovePodSandbox \"cf37a94f75d7584c7a49a87064007bfc2520e611ea0c2d246e84116fc88bb02c\" returns successfully" May 13 04:22:26.648097 containerd[1461]: time="2025-05-13T04:22:26.648066938Z" level=info msg="StopPodSandbox for \"da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30\"" May 13 04:22:26.741353 containerd[1461]: 2025-05-13 04:22:26.695 [WARNING][5145] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--8cvfj-eth0", GenerateName:"calico-apiserver-66cb4b4698-", Namespace:"calico-apiserver", SelfLink:"", UID:"ca42f9a3-6ff2-4160-851b-5223b0b75593", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 21, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66cb4b4698", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3bdfb8ea63.novalocal", ContainerID:"047298df11e705c125a16d66891d8af5d99bcf6579c7a82e25ba28f36f765e62", Pod:"calico-apiserver-66cb4b4698-8cvfj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali051dd3f6b74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:22:26.741353 containerd[1461]: 2025-05-13 04:22:26.695 [INFO][5145] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" May 13 04:22:26.741353 containerd[1461]: 2025-05-13 04:22:26.695 [INFO][5145] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" iface="eth0" netns="" May 13 04:22:26.741353 containerd[1461]: 2025-05-13 04:22:26.695 [INFO][5145] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" May 13 04:22:26.741353 containerd[1461]: 2025-05-13 04:22:26.696 [INFO][5145] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" May 13 04:22:26.741353 containerd[1461]: 2025-05-13 04:22:26.724 [INFO][5152] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" HandleID="k8s-pod-network.da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--8cvfj-eth0" May 13 04:22:26.741353 containerd[1461]: 2025-05-13 04:22:26.724 [INFO][5152] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:22:26.741353 containerd[1461]: 2025-05-13 04:22:26.724 [INFO][5152] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:22:26.741353 containerd[1461]: 2025-05-13 04:22:26.735 [WARNING][5152] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" HandleID="k8s-pod-network.da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--8cvfj-eth0" May 13 04:22:26.741353 containerd[1461]: 2025-05-13 04:22:26.735 [INFO][5152] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" HandleID="k8s-pod-network.da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--8cvfj-eth0" May 13 04:22:26.741353 containerd[1461]: 2025-05-13 04:22:26.736 [INFO][5152] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:22:26.741353 containerd[1461]: 2025-05-13 04:22:26.737 [INFO][5145] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" May 13 04:22:26.741353 containerd[1461]: time="2025-05-13T04:22:26.739691923Z" level=info msg="TearDown network for sandbox \"da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30\" successfully" May 13 04:22:26.741353 containerd[1461]: time="2025-05-13T04:22:26.739715726Z" level=info msg="StopPodSandbox for \"da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30\" returns successfully" May 13 04:22:26.743839 containerd[1461]: time="2025-05-13T04:22:26.743089857Z" level=info msg="RemovePodSandbox for \"da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30\"" May 13 04:22:26.743839 containerd[1461]: time="2025-05-13T04:22:26.743124481Z" level=info msg="Forcibly stopping sandbox \"da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30\"" May 13 04:22:26.840953 containerd[1461]: 2025-05-13 04:22:26.795 [WARNING][5170] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--8cvfj-eth0", GenerateName:"calico-apiserver-66cb4b4698-", Namespace:"calico-apiserver", SelfLink:"", UID:"ca42f9a3-6ff2-4160-851b-5223b0b75593", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 21, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66cb4b4698", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3bdfb8ea63.novalocal", ContainerID:"047298df11e705c125a16d66891d8af5d99bcf6579c7a82e25ba28f36f765e62", Pod:"calico-apiserver-66cb4b4698-8cvfj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali051dd3f6b74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:22:26.840953 containerd[1461]: 2025-05-13 04:22:26.797 [INFO][5170] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" May 13 04:22:26.840953 containerd[1461]: 2025-05-13 04:22:26.797 [INFO][5170] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" iface="eth0" netns="" May 13 04:22:26.840953 containerd[1461]: 2025-05-13 04:22:26.797 [INFO][5170] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" May 13 04:22:26.840953 containerd[1461]: 2025-05-13 04:22:26.797 [INFO][5170] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" May 13 04:22:26.840953 containerd[1461]: 2025-05-13 04:22:26.828 [INFO][5177] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" HandleID="k8s-pod-network.da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--8cvfj-eth0" May 13 04:22:26.840953 containerd[1461]: 2025-05-13 04:22:26.828 [INFO][5177] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:22:26.840953 containerd[1461]: 2025-05-13 04:22:26.828 [INFO][5177] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:22:26.840953 containerd[1461]: 2025-05-13 04:22:26.836 [WARNING][5177] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" HandleID="k8s-pod-network.da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--8cvfj-eth0" May 13 04:22:26.840953 containerd[1461]: 2025-05-13 04:22:26.836 [INFO][5177] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" HandleID="k8s-pod-network.da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--apiserver--66cb4b4698--8cvfj-eth0" May 13 04:22:26.840953 containerd[1461]: 2025-05-13 04:22:26.838 [INFO][5177] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:22:26.840953 containerd[1461]: 2025-05-13 04:22:26.839 [INFO][5170] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30" May 13 04:22:26.841942 containerd[1461]: time="2025-05-13T04:22:26.841144405Z" level=info msg="TearDown network for sandbox \"da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30\" successfully" May 13 04:22:26.846727 containerd[1461]: time="2025-05-13T04:22:26.846697432Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 04:22:26.846852 containerd[1461]: time="2025-05-13T04:22:26.846835460Z" level=info msg="RemovePodSandbox \"da4fb6eee347e9f420632f8ced79d1540b259a1e385aa29f589ed9eb547e7b30\" returns successfully" May 13 04:22:26.847428 containerd[1461]: time="2025-05-13T04:22:26.847410694Z" level=info msg="StopPodSandbox for \"169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d\"" May 13 04:22:26.948302 containerd[1461]: 2025-05-13 04:22:26.892 [WARNING][5195] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--kube--controllers--69fbd7ff65--7dt7j-eth0", GenerateName:"calico-kube-controllers-69fbd7ff65-", Namespace:"calico-system", SelfLink:"", UID:"119e7f55-f555-420b-8c77-74d321630fd9", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 21, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69fbd7ff65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3bdfb8ea63.novalocal", ContainerID:"8e2f6f9019cb19e056e64de5f7fec56b97af72560fbf9bc22ef2fb9114a49245", Pod:"calico-kube-controllers-69fbd7ff65-7dt7j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.11.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3c6fa05ff04", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:22:26.948302 containerd[1461]: 2025-05-13 04:22:26.892 [INFO][5195] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" May 13 04:22:26.948302 containerd[1461]: 2025-05-13 04:22:26.892 [INFO][5195] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" iface="eth0" netns="" May 13 04:22:26.948302 containerd[1461]: 2025-05-13 04:22:26.892 [INFO][5195] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" May 13 04:22:26.948302 containerd[1461]: 2025-05-13 04:22:26.892 [INFO][5195] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" May 13 04:22:26.948302 containerd[1461]: 2025-05-13 04:22:26.933 [INFO][5203] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" HandleID="k8s-pod-network.169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--kube--controllers--69fbd7ff65--7dt7j-eth0" May 13 04:22:26.948302 containerd[1461]: 2025-05-13 04:22:26.933 [INFO][5203] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:22:26.948302 containerd[1461]: 2025-05-13 04:22:26.933 [INFO][5203] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:22:26.948302 containerd[1461]: 2025-05-13 04:22:26.941 [WARNING][5203] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" HandleID="k8s-pod-network.169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--kube--controllers--69fbd7ff65--7dt7j-eth0" May 13 04:22:26.948302 containerd[1461]: 2025-05-13 04:22:26.941 [INFO][5203] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" HandleID="k8s-pod-network.169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--kube--controllers--69fbd7ff65--7dt7j-eth0" May 13 04:22:26.948302 containerd[1461]: 2025-05-13 04:22:26.943 [INFO][5203] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:22:26.948302 containerd[1461]: 2025-05-13 04:22:26.947 [INFO][5195] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" May 13 04:22:26.950018 containerd[1461]: time="2025-05-13T04:22:26.949857183Z" level=info msg="TearDown network for sandbox \"169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d\" successfully" May 13 04:22:26.950018 containerd[1461]: time="2025-05-13T04:22:26.949885726Z" level=info msg="StopPodSandbox for \"169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d\" returns successfully" May 13 04:22:26.950705 containerd[1461]: time="2025-05-13T04:22:26.950383044Z" level=info msg="RemovePodSandbox for \"169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d\"" May 13 04:22:26.950705 containerd[1461]: time="2025-05-13T04:22:26.950440041Z" level=info msg="Forcibly stopping sandbox \"169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d\"" May 13 04:22:27.050895 containerd[1461]: 2025-05-13 04:22:27.000 [WARNING][5221] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--kube--controllers--69fbd7ff65--7dt7j-eth0", GenerateName:"calico-kube-controllers-69fbd7ff65-", Namespace:"calico-system", SelfLink:"", UID:"119e7f55-f555-420b-8c77-74d321630fd9", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 21, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69fbd7ff65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3bdfb8ea63.novalocal", ContainerID:"8e2f6f9019cb19e056e64de5f7fec56b97af72560fbf9bc22ef2fb9114a49245", Pod:"calico-kube-controllers-69fbd7ff65-7dt7j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.11.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3c6fa05ff04", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:22:27.050895 containerd[1461]: 2025-05-13 04:22:27.001 [INFO][5221] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" May 13 04:22:27.050895 containerd[1461]: 2025-05-13 04:22:27.001 [INFO][5221] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" iface="eth0" netns="" May 13 04:22:27.050895 containerd[1461]: 2025-05-13 04:22:27.001 [INFO][5221] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" May 13 04:22:27.050895 containerd[1461]: 2025-05-13 04:22:27.001 [INFO][5221] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" May 13 04:22:27.050895 containerd[1461]: 2025-05-13 04:22:27.033 [INFO][5228] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" HandleID="k8s-pod-network.169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--kube--controllers--69fbd7ff65--7dt7j-eth0" May 13 04:22:27.050895 containerd[1461]: 2025-05-13 04:22:27.033 [INFO][5228] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:22:27.050895 containerd[1461]: 2025-05-13 04:22:27.033 [INFO][5228] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 04:22:27.050895 containerd[1461]: 2025-05-13 04:22:27.044 [WARNING][5228] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" HandleID="k8s-pod-network.169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--kube--controllers--69fbd7ff65--7dt7j-eth0" May 13 04:22:27.050895 containerd[1461]: 2025-05-13 04:22:27.044 [INFO][5228] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" HandleID="k8s-pod-network.169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-calico--kube--controllers--69fbd7ff65--7dt7j-eth0" May 13 04:22:27.050895 containerd[1461]: 2025-05-13 04:22:27.047 [INFO][5228] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:22:27.050895 containerd[1461]: 2025-05-13 04:22:27.048 [INFO][5221] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d" May 13 04:22:27.050895 containerd[1461]: time="2025-05-13T04:22:27.050173060Z" level=info msg="TearDown network for sandbox \"169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d\" successfully" May 13 04:22:27.057338 containerd[1461]: time="2025-05-13T04:22:27.057149498Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 04:22:27.057338 containerd[1461]: time="2025-05-13T04:22:27.057212026Z" level=info msg="RemovePodSandbox \"169ef3c8fcd61ed03c1c5202002e3fe97f3279bdba9b92c759d204cd2eb37d6d\" returns successfully" May 13 04:22:27.058338 containerd[1461]: time="2025-05-13T04:22:27.058191735Z" level=info msg="StopPodSandbox for \"4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784\"" May 13 04:22:27.187209 containerd[1461]: 2025-05-13 04:22:27.124 [WARNING][5247] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--wtbsq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"180cf766-638d-4296-97c0-3a6bafc8c21a", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 21, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3bdfb8ea63.novalocal", ContainerID:"e647a9c93fbc35344fdb62ad9c387bff39470f2e7c59d43cca96d10b55272d1a", Pod:"coredns-6f6b679f8f-wtbsq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid97eb2b02ec", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:22:27.187209 containerd[1461]: 2025-05-13 04:22:27.124 [INFO][5247] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" May 13 04:22:27.187209 containerd[1461]: 2025-05-13 04:22:27.124 [INFO][5247] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" iface="eth0" netns="" May 13 04:22:27.187209 containerd[1461]: 2025-05-13 04:22:27.124 [INFO][5247] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" May 13 04:22:27.187209 containerd[1461]: 2025-05-13 04:22:27.124 [INFO][5247] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" May 13 04:22:27.187209 containerd[1461]: 2025-05-13 04:22:27.166 [INFO][5254] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" HandleID="k8s-pod-network.4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--wtbsq-eth0" May 13 04:22:27.187209 containerd[1461]: 2025-05-13 04:22:27.166 [INFO][5254] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:22:27.187209 containerd[1461]: 2025-05-13 04:22:27.166 [INFO][5254] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 04:22:27.187209 containerd[1461]: 2025-05-13 04:22:27.178 [WARNING][5254] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" HandleID="k8s-pod-network.4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--wtbsq-eth0" May 13 04:22:27.187209 containerd[1461]: 2025-05-13 04:22:27.178 [INFO][5254] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" HandleID="k8s-pod-network.4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--wtbsq-eth0" May 13 04:22:27.187209 containerd[1461]: 2025-05-13 04:22:27.183 [INFO][5254] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:22:27.187209 containerd[1461]: 2025-05-13 04:22:27.185 [INFO][5247] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" May 13 04:22:27.188126 containerd[1461]: time="2025-05-13T04:22:27.188056072Z" level=info msg="TearDown network for sandbox \"4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784\" successfully" May 13 04:22:27.188126 containerd[1461]: time="2025-05-13T04:22:27.188095405Z" level=info msg="StopPodSandbox for \"4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784\" returns successfully" May 13 04:22:27.190638 containerd[1461]: time="2025-05-13T04:22:27.190150764Z" level=info msg="RemovePodSandbox for \"4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784\"" May 13 04:22:27.190638 containerd[1461]: time="2025-05-13T04:22:27.190196429Z" level=info msg="Forcibly stopping sandbox \"4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784\"" May 13 04:22:27.300695 containerd[1461]: 2025-05-13 04:22:27.252 [WARNING][5272] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--wtbsq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"180cf766-638d-4296-97c0-3a6bafc8c21a", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 4, 21, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-3bdfb8ea63.novalocal", ContainerID:"e647a9c93fbc35344fdb62ad9c387bff39470f2e7c59d43cca96d10b55272d1a", Pod:"coredns-6f6b679f8f-wtbsq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid97eb2b02ec", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 04:22:27.300695 containerd[1461]: 2025-05-13 04:22:27.252 [INFO][5272] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" May 13 04:22:27.300695 containerd[1461]: 2025-05-13 04:22:27.252 [INFO][5272] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" iface="eth0" netns="" May 13 04:22:27.300695 containerd[1461]: 2025-05-13 04:22:27.252 [INFO][5272] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" May 13 04:22:27.300695 containerd[1461]: 2025-05-13 04:22:27.253 [INFO][5272] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" May 13 04:22:27.300695 containerd[1461]: 2025-05-13 04:22:27.282 [INFO][5280] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" HandleID="k8s-pod-network.4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--wtbsq-eth0" May 13 04:22:27.300695 containerd[1461]: 2025-05-13 04:22:27.282 [INFO][5280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 04:22:27.300695 containerd[1461]: 2025-05-13 04:22:27.283 [INFO][5280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 04:22:27.300695 containerd[1461]: 2025-05-13 04:22:27.295 [WARNING][5280] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" HandleID="k8s-pod-network.4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--wtbsq-eth0" May 13 04:22:27.300695 containerd[1461]: 2025-05-13 04:22:27.295 [INFO][5280] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" HandleID="k8s-pod-network.4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" Workload="ci--4081--3--3--n--3bdfb8ea63.novalocal-k8s-coredns--6f6b679f8f--wtbsq-eth0" May 13 04:22:27.300695 containerd[1461]: 2025-05-13 04:22:27.297 [INFO][5280] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 04:22:27.300695 containerd[1461]: 2025-05-13 04:22:27.299 [INFO][5272] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784" May 13 04:22:27.301926 containerd[1461]: time="2025-05-13T04:22:27.301824208Z" level=info msg="TearDown network for sandbox \"4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784\" successfully" May 13 04:22:27.310396 containerd[1461]: time="2025-05-13T04:22:27.310281874Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 13 04:22:27.310597 containerd[1461]: time="2025-05-13T04:22:27.310409522Z" level=info msg="RemovePodSandbox \"4c44926ac35ab59576f4366068039d4f03a98fbd309b731b184d53fb4ff84784\" returns successfully" May 13 04:22:29.514443 update_engine[1442]: I20250513 04:22:29.514223 1442 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 13 04:22:29.517047 update_engine[1442]: I20250513 04:22:29.514310 1442 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 13 04:22:29.517047 update_engine[1442]: I20250513 04:22:29.516551 1442 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 13 04:22:29.519079 update_engine[1442]: I20250513 04:22:29.519037 1442 omaha_request_params.cc:62] Current group set to lts May 13 04:22:29.522200 update_engine[1442]: I20250513 04:22:29.521986 1442 update_attempter.cc:499] Already updated boot flags. Skipping. May 13 04:22:29.522200 update_engine[1442]: I20250513 04:22:29.522010 1442 update_attempter.cc:643] Scheduling an action processor start. 
May 13 04:22:29.522200 update_engine[1442]: I20250513 04:22:29.522029 1442 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 13 04:22:29.522200 update_engine[1442]: I20250513 04:22:29.522073 1442 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
May 13 04:22:29.522200 update_engine[1442]: I20250513 04:22:29.522176 1442 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 13 04:22:29.522200 update_engine[1442]: I20250513 04:22:29.522187 1442 omaha_request_action.cc:272] Request:
May 13 04:22:29.522200 update_engine[1442]:
May 13 04:22:29.522200 update_engine[1442]:
May 13 04:22:29.522200 update_engine[1442]:
May 13 04:22:29.522200 update_engine[1442]:
May 13 04:22:29.522200 update_engine[1442]:
May 13 04:22:29.522200 update_engine[1442]:
May 13 04:22:29.522200 update_engine[1442]:
May 13 04:22:29.522200 update_engine[1442]: I20250513 04:22:29.522193 1442 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 13 04:22:29.524101 locksmithd[1473]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
May 13 04:22:29.526609 update_engine[1442]: I20250513 04:22:29.526561 1442 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 13 04:22:29.526887 update_engine[1442]: I20250513 04:22:29.526844 1442 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 13 04:22:29.539636 update_engine[1442]: E20250513 04:22:29.539548 1442 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 13 04:22:29.539877 update_engine[1442]: I20250513 04:22:29.539651 1442 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 13 04:22:39.518337 update_engine[1442]: I20250513 04:22:39.518208 1442 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 13 04:22:39.519042 update_engine[1442]: I20250513 04:22:39.518631 1442 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 13 04:22:39.519831 update_engine[1442]: I20250513 04:22:39.519769 1442 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 13 04:22:39.529910 update_engine[1442]: E20250513 04:22:39.529760 1442 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 13 04:22:39.529910 update_engine[1442]: I20250513 04:22:39.529875 1442 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
May 13 04:22:43.171730 kubelet[2587]: I0513 04:22:43.171226 2587 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 13 04:22:49.519056 update_engine[1442]: I20250513 04:22:49.518894 1442 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 13 04:22:49.520769 update_engine[1442]: I20250513 04:22:49.520138 1442 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 13 04:22:49.520769 update_engine[1442]: I20250513 04:22:49.520696 1442 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 13 04:22:49.530845 update_engine[1442]: E20250513 04:22:49.530679 1442 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 13 04:22:49.530845 update_engine[1442]: I20250513 04:22:49.530779 1442 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
May 13 04:22:59.521473 update_engine[1442]: I20250513 04:22:59.518805 1442 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 13 04:22:59.521473 update_engine[1442]: I20250513 04:22:59.519280 1442 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 13 04:22:59.521473 update_engine[1442]: I20250513 04:22:59.519661 1442 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 13 04:22:59.531000 update_engine[1442]: E20250513 04:22:59.530092 1442 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 13 04:22:59.531000 update_engine[1442]: I20250513 04:22:59.530198 1442 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 13 04:22:59.531000 update_engine[1442]: I20250513 04:22:59.530217 1442 omaha_request_action.cc:617] Omaha request response:
May 13 04:22:59.531000 update_engine[1442]: E20250513 04:22:59.530423 1442 omaha_request_action.cc:636] Omaha request network transfer failed.
May 13 04:22:59.531000 update_engine[1442]: I20250513 04:22:59.530467 1442 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
May 13 04:22:59.531000 update_engine[1442]: I20250513 04:22:59.530479 1442 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 13 04:22:59.531000 update_engine[1442]: I20250513 04:22:59.530490 1442 update_attempter.cc:306] Processing Done.
May 13 04:22:59.531000 update_engine[1442]: E20250513 04:22:59.530514 1442 update_attempter.cc:619] Update failed.
May 13 04:22:59.531000 update_engine[1442]: I20250513 04:22:59.530526 1442 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
May 13 04:22:59.531000 update_engine[1442]: I20250513 04:22:59.530537 1442 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
May 13 04:22:59.531000 update_engine[1442]: I20250513 04:22:59.530548 1442 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
May 13 04:22:59.531000 update_engine[1442]: I20250513 04:22:59.530695 1442 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 13 04:22:59.531000 update_engine[1442]: I20250513 04:22:59.530742 1442 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 13 04:22:59.531000 update_engine[1442]: I20250513 04:22:59.530756 1442 omaha_request_action.cc:272] Request:
May 13 04:22:59.531000 update_engine[1442]:
May 13 04:22:59.531000 update_engine[1442]:
May 13 04:22:59.531942 update_engine[1442]:
May 13 04:22:59.531942 update_engine[1442]:
May 13 04:22:59.531942 update_engine[1442]:
May 13 04:22:59.531942 update_engine[1442]: I20250513 04:22:59.530771 1442 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 13 04:22:59.534384 locksmithd[1473]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 13 04:22:59.534890 update_engine[1442]: I20250513 04:22:59.533178 1442 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 13 04:22:59.534890 update_engine[1442]: I20250513 04:22:59.533520 1442 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 13 04:22:59.544181 update_engine[1442]: E20250513 04:22:59.543538 1442 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 13 04:22:59.544181 update_engine[1442]: I20250513 04:22:59.543637 1442 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 13 04:22:59.544181 update_engine[1442]: I20250513 04:22:59.543651 1442 omaha_request_action.cc:617] Omaha request response:
May 13 04:22:59.544181 update_engine[1442]: I20250513 04:22:59.543663 1442 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 13 04:22:59.544181 update_engine[1442]: I20250513 04:22:59.543673 1442 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 13 04:22:59.544181 update_engine[1442]: I20250513 04:22:59.543681 1442 update_attempter.cc:306] Processing Done.
May 13 04:22:59.544181 update_engine[1442]: I20250513 04:22:59.543692 1442 update_attempter.cc:310] Error event sent.
May 13 04:22:59.544181 update_engine[1442]: I20250513 04:22:59.543709 1442 update_check_scheduler.cc:74] Next update check in 46m40s
May 13 04:22:59.545385 locksmithd[1473]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
May 13 04:23:53.241702 systemd[1]: run-containerd-runc-k8s.io-cf46c4791df89e8855357ebd5cab5e4ffa52c8efc6b1836c53588ef7c92a6cf2-runc.gY9NwD.mount: Deactivated successfully.
May 13 04:25:46.523668 systemd[1]: Started sshd@9-172.24.4.57:22-172.24.4.1:38664.service - OpenSSH per-connection server daemon (172.24.4.1:38664).
May 13 04:25:47.030826 systemd[1]: run-containerd-runc-k8s.io-b0b077c42b45775bc2d8e7597bce7da0f484a4fc9c3bfee0fc38455415399cc1-runc.uKW72m.mount: Deactivated successfully.
May 13 04:25:47.906710 sshd[5677]: Accepted publickey for core from 172.24.4.1 port 38664 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y
May 13 04:25:47.909595 sshd[5677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 04:25:47.926438 systemd-logind[1439]: New session 12 of user core.
May 13 04:25:47.933442 systemd[1]: Started session-12.scope - Session 12 of User core.
May 13 04:25:48.866178 sshd[5677]: pam_unix(sshd:session): session closed for user core
May 13 04:25:48.876464 systemd[1]: sshd@9-172.24.4.57:22-172.24.4.1:38664.service: Deactivated successfully.
May 13 04:25:48.883716 systemd[1]: session-12.scope: Deactivated successfully.
May 13 04:25:48.885917 systemd-logind[1439]: Session 12 logged out. Waiting for processes to exit.
May 13 04:25:48.892809 systemd-logind[1439]: Removed session 12.
May 13 04:25:53.893460 systemd[1]: Started sshd@10-172.24.4.57:22-172.24.4.1:44706.service - OpenSSH per-connection server daemon (172.24.4.1:44706).
May 13 04:25:55.123707 sshd[5730]: Accepted publickey for core from 172.24.4.1 port 44706 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y
May 13 04:25:55.133920 sshd[5730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 04:25:55.164232 systemd-logind[1439]: New session 13 of user core.
May 13 04:25:55.169821 systemd[1]: Started session-13.scope - Session 13 of User core.
May 13 04:25:55.984383 sshd[5730]: pam_unix(sshd:session): session closed for user core
May 13 04:25:55.994388 systemd[1]: sshd@10-172.24.4.57:22-172.24.4.1:44706.service: Deactivated successfully.
May 13 04:25:56.001255 systemd[1]: session-13.scope: Deactivated successfully.
May 13 04:25:56.004151 systemd-logind[1439]: Session 13 logged out. Waiting for processes to exit.
May 13 04:25:56.007305 systemd-logind[1439]: Removed session 13.
May 13 04:26:01.022734 systemd[1]: Started sshd@11-172.24.4.57:22-172.24.4.1:44722.service - OpenSSH per-connection server daemon (172.24.4.1:44722).
May 13 04:26:02.326245 sshd[5744]: Accepted publickey for core from 172.24.4.1 port 44722 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y
May 13 04:26:02.328459 sshd[5744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 04:26:02.346151 systemd-logind[1439]: New session 14 of user core.
May 13 04:26:02.358375 systemd[1]: Started session-14.scope - Session 14 of User core.
May 13 04:26:03.059494 sshd[5744]: pam_unix(sshd:session): session closed for user core
May 13 04:26:03.075748 systemd[1]: sshd@11-172.24.4.57:22-172.24.4.1:44722.service: Deactivated successfully.
May 13 04:26:03.084603 systemd[1]: session-14.scope: Deactivated successfully.
May 13 04:26:03.090459 systemd-logind[1439]: Session 14 logged out. Waiting for processes to exit.
May 13 04:26:03.106752 systemd[1]: Started sshd@12-172.24.4.57:22-172.24.4.1:44726.service - OpenSSH per-connection server daemon (172.24.4.1:44726).
May 13 04:26:03.115568 systemd-logind[1439]: Removed session 14.
May 13 04:26:04.184676 sshd[5759]: Accepted publickey for core from 172.24.4.1 port 44726 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y
May 13 04:26:04.187787 sshd[5759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 04:26:04.200824 systemd-logind[1439]: New session 15 of user core.
May 13 04:26:04.211335 systemd[1]: Started session-15.scope - Session 15 of User core.
May 13 04:26:05.010994 sshd[5759]: pam_unix(sshd:session): session closed for user core
May 13 04:26:05.026868 systemd[1]: sshd@12-172.24.4.57:22-172.24.4.1:44726.service: Deactivated successfully.
May 13 04:26:05.033273 systemd[1]: session-15.scope: Deactivated successfully.
May 13 04:26:05.040033 systemd-logind[1439]: Session 15 logged out. Waiting for processes to exit.
May 13 04:26:05.047449 systemd[1]: Started sshd@13-172.24.4.57:22-172.24.4.1:52990.service - OpenSSH per-connection server daemon (172.24.4.1:52990).
May 13 04:26:05.051161 systemd-logind[1439]: Removed session 15.
May 13 04:26:06.319583 sshd[5770]: Accepted publickey for core from 172.24.4.1 port 52990 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y
May 13 04:26:06.323435 sshd[5770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 04:26:06.333953 systemd-logind[1439]: New session 16 of user core.
May 13 04:26:06.343575 systemd[1]: Started session-16.scope - Session 16 of User core.
May 13 04:26:06.999897 sshd[5770]: pam_unix(sshd:session): session closed for user core
May 13 04:26:07.008133 systemd-logind[1439]: Session 16 logged out. Waiting for processes to exit.
May 13 04:26:07.008935 systemd[1]: sshd@13-172.24.4.57:22-172.24.4.1:52990.service: Deactivated successfully.
May 13 04:26:07.014556 systemd[1]: session-16.scope: Deactivated successfully.
May 13 04:26:07.017681 systemd-logind[1439]: Removed session 16.
May 13 04:26:12.026325 systemd[1]: Started sshd@14-172.24.4.57:22-172.24.4.1:52996.service - OpenSSH per-connection server daemon (172.24.4.1:52996).
May 13 04:26:13.305860 sshd[5806]: Accepted publickey for core from 172.24.4.1 port 52996 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y
May 13 04:26:13.311436 sshd[5806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 04:26:13.324108 systemd-logind[1439]: New session 17 of user core.
May 13 04:26:13.332313 systemd[1]: Started session-17.scope - Session 17 of User core.
May 13 04:26:14.221574 sshd[5806]: pam_unix(sshd:session): session closed for user core
May 13 04:26:14.235092 systemd[1]: sshd@14-172.24.4.57:22-172.24.4.1:52996.service: Deactivated successfully.
May 13 04:26:14.242853 systemd[1]: session-17.scope: Deactivated successfully.
May 13 04:26:14.246555 systemd-logind[1439]: Session 17 logged out. Waiting for processes to exit.
May 13 04:26:14.250295 systemd-logind[1439]: Removed session 17.
May 13 04:26:17.042389 systemd[1]: run-containerd-runc-k8s.io-b0b077c42b45775bc2d8e7597bce7da0f484a4fc9c3bfee0fc38455415399cc1-runc.DUQ0gR.mount: Deactivated successfully.
May 13 04:26:19.254663 systemd[1]: Started sshd@15-172.24.4.57:22-172.24.4.1:43042.service - OpenSSH per-connection server daemon (172.24.4.1:43042).
May 13 04:26:20.527078 sshd[5840]: Accepted publickey for core from 172.24.4.1 port 43042 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y
May 13 04:26:20.531585 sshd[5840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 04:26:20.539104 systemd-logind[1439]: New session 18 of user core.
May 13 04:26:20.547182 systemd[1]: Started session-18.scope - Session 18 of User core.
May 13 04:26:21.291582 sshd[5840]: pam_unix(sshd:session): session closed for user core
May 13 04:26:21.295738 systemd[1]: sshd@15-172.24.4.57:22-172.24.4.1:43042.service: Deactivated successfully.
May 13 04:26:21.298747 systemd[1]: session-18.scope: Deactivated successfully.
May 13 04:26:21.299939 systemd-logind[1439]: Session 18 logged out. Waiting for processes to exit.
May 13 04:26:21.301494 systemd-logind[1439]: Removed session 18.
May 13 04:26:26.315612 systemd[1]: Started sshd@16-172.24.4.57:22-172.24.4.1:34560.service - OpenSSH per-connection server daemon (172.24.4.1:34560).
May 13 04:26:27.646366 sshd[5877]: Accepted publickey for core from 172.24.4.1 port 34560 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y
May 13 04:26:27.649680 sshd[5877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 04:26:27.664556 systemd-logind[1439]: New session 19 of user core.
May 13 04:26:27.672340 systemd[1]: Started session-19.scope - Session 19 of User core.
May 13 04:26:28.564798 sshd[5877]: pam_unix(sshd:session): session closed for user core
May 13 04:26:28.584004 systemd[1]: sshd@16-172.24.4.57:22-172.24.4.1:34560.service: Deactivated successfully.
May 13 04:26:28.591805 systemd[1]: session-19.scope: Deactivated successfully.
May 13 04:26:28.594724 systemd-logind[1439]: Session 19 logged out. Waiting for processes to exit.
May 13 04:26:28.600162 systemd-logind[1439]: Removed session 19.
May 13 04:26:28.608095 systemd[1]: Started sshd@17-172.24.4.57:22-172.24.4.1:34566.service - OpenSSH per-connection server daemon (172.24.4.1:34566).
May 13 04:26:30.020940 sshd[5891]: Accepted publickey for core from 172.24.4.1 port 34566 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y
May 13 04:26:30.024431 sshd[5891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 04:26:30.038092 systemd-logind[1439]: New session 20 of user core.
May 13 04:26:30.043298 systemd[1]: Started session-20.scope - Session 20 of User core.
May 13 04:26:30.998338 sshd[5891]: pam_unix(sshd:session): session closed for user core
May 13 04:26:31.011391 systemd[1]: sshd@17-172.24.4.57:22-172.24.4.1:34566.service: Deactivated successfully.
May 13 04:26:31.017890 systemd[1]: session-20.scope: Deactivated successfully.
May 13 04:26:31.019997 systemd-logind[1439]: Session 20 logged out. Waiting for processes to exit.
May 13 04:26:31.030590 systemd[1]: Started sshd@18-172.24.4.57:22-172.24.4.1:34570.service - OpenSSH per-connection server daemon (172.24.4.1:34570).
May 13 04:26:31.034053 systemd-logind[1439]: Removed session 20.
May 13 04:26:32.164056 sshd[5903]: Accepted publickey for core from 172.24.4.1 port 34570 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y
May 13 04:26:32.167342 sshd[5903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 04:26:32.184426 systemd-logind[1439]: New session 21 of user core.
May 13 04:26:32.190767 systemd[1]: Started session-21.scope - Session 21 of User core.
May 13 04:26:35.677005 sshd[5903]: pam_unix(sshd:session): session closed for user core
May 13 04:26:35.694920 systemd[1]: sshd@18-172.24.4.57:22-172.24.4.1:34570.service: Deactivated successfully.
May 13 04:26:35.698223 systemd[1]: session-21.scope: Deactivated successfully.
May 13 04:26:35.700742 systemd-logind[1439]: Session 21 logged out. Waiting for processes to exit.
May 13 04:26:35.706297 systemd[1]: Started sshd@19-172.24.4.57:22-172.24.4.1:33064.service - OpenSSH per-connection server daemon (172.24.4.1:33064).
May 13 04:26:35.708333 systemd-logind[1439]: Removed session 21.
May 13 04:26:37.057657 sshd[5934]: Accepted publickey for core from 172.24.4.1 port 33064 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y
May 13 04:26:37.062491 sshd[5934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 04:26:37.081053 systemd-logind[1439]: New session 22 of user core.
May 13 04:26:37.089318 systemd[1]: Started session-22.scope - Session 22 of User core.
May 13 04:26:38.115946 sshd[5934]: pam_unix(sshd:session): session closed for user core
May 13 04:26:38.135949 systemd[1]: sshd@19-172.24.4.57:22-172.24.4.1:33064.service: Deactivated successfully.
May 13 04:26:38.141778 systemd[1]: session-22.scope: Deactivated successfully.
May 13 04:26:38.146778 systemd-logind[1439]: Session 22 logged out. Waiting for processes to exit.
May 13 04:26:38.155638 systemd[1]: Started sshd@20-172.24.4.57:22-172.24.4.1:33080.service - OpenSSH per-connection server daemon (172.24.4.1:33080).
May 13 04:26:38.165363 systemd-logind[1439]: Removed session 22.
May 13 04:26:39.225027 sshd[5944]: Accepted publickey for core from 172.24.4.1 port 33080 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y
May 13 04:26:39.227863 sshd[5944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 04:26:39.240792 systemd-logind[1439]: New session 23 of user core.
May 13 04:26:39.245356 systemd[1]: Started session-23.scope - Session 23 of User core.
May 13 04:26:40.055577 sshd[5944]: pam_unix(sshd:session): session closed for user core
May 13 04:26:40.064116 systemd[1]: sshd@20-172.24.4.57:22-172.24.4.1:33080.service: Deactivated successfully.
May 13 04:26:40.072848 systemd[1]: session-23.scope: Deactivated successfully.
May 13 04:26:40.076472 systemd-logind[1439]: Session 23 logged out. Waiting for processes to exit.
May 13 04:26:40.079096 systemd-logind[1439]: Removed session 23.
May 13 04:26:45.082681 systemd[1]: Started sshd@21-172.24.4.57:22-172.24.4.1:49592.service - OpenSSH per-connection server daemon (172.24.4.1:49592).
May 13 04:26:46.266548 sshd[5961]: Accepted publickey for core from 172.24.4.1 port 49592 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y
May 13 04:26:46.269916 sshd[5961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 04:26:46.310627 systemd-logind[1439]: New session 24 of user core.
May 13 04:26:46.321389 systemd[1]: Started session-24.scope - Session 24 of User core.
May 13 04:26:47.059246 sshd[5961]: pam_unix(sshd:session): session closed for user core
May 13 04:26:47.063309 systemd[1]: sshd@21-172.24.4.57:22-172.24.4.1:49592.service: Deactivated successfully.
May 13 04:26:47.065399 systemd[1]: session-24.scope: Deactivated successfully.
May 13 04:26:47.068535 systemd-logind[1439]: Session 24 logged out. Waiting for processes to exit.
May 13 04:26:47.073666 systemd-logind[1439]: Removed session 24.
May 13 04:26:52.080567 systemd[1]: Started sshd@22-172.24.4.57:22-172.24.4.1:49604.service - OpenSSH per-connection server daemon (172.24.4.1:49604).
May 13 04:26:53.045852 sshd[5999]: Accepted publickey for core from 172.24.4.1 port 49604 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y
May 13 04:26:53.050135 sshd[5999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 04:26:53.063309 systemd-logind[1439]: New session 25 of user core.
May 13 04:26:53.072610 systemd[1]: Started session-25.scope - Session 25 of User core.
May 13 04:26:53.780175 sshd[5999]: pam_unix(sshd:session): session closed for user core
May 13 04:26:53.789664 systemd[1]: sshd@22-172.24.4.57:22-172.24.4.1:49604.service: Deactivated successfully.
May 13 04:26:53.796443 systemd[1]: session-25.scope: Deactivated successfully.
May 13 04:26:53.799249 systemd-logind[1439]: Session 25 logged out. Waiting for processes to exit.
May 13 04:26:53.802515 systemd-logind[1439]: Removed session 25.
May 13 04:26:58.823427 systemd[1]: Started sshd@23-172.24.4.57:22-172.24.4.1:36674.service - OpenSSH per-connection server daemon (172.24.4.1:36674).
May 13 04:26:59.993671 sshd[6035]: Accepted publickey for core from 172.24.4.1 port 36674 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y
May 13 04:26:59.999102 sshd[6035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 04:27:00.019424 systemd-logind[1439]: New session 26 of user core.
May 13 04:27:00.029409 systemd[1]: Started session-26.scope - Session 26 of User core.
May 13 04:27:00.750498 sshd[6035]: pam_unix(sshd:session): session closed for user core
May 13 04:27:00.758554 systemd[1]: sshd@23-172.24.4.57:22-172.24.4.1:36674.service: Deactivated successfully.
May 13 04:27:00.766649 systemd[1]: session-26.scope: Deactivated successfully.
May 13 04:27:00.770872 systemd-logind[1439]: Session 26 logged out. Waiting for processes to exit.
May 13 04:27:00.774133 systemd-logind[1439]: Removed session 26.
May 13 04:27:05.778784 systemd[1]: Started sshd@24-172.24.4.57:22-172.24.4.1:39972.service - OpenSSH per-connection server daemon (172.24.4.1:39972).
May 13 04:27:06.958126 sshd[6072]: Accepted publickey for core from 172.24.4.1 port 39972 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y
May 13 04:27:06.962209 sshd[6072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 04:27:06.978087 systemd-logind[1439]: New session 27 of user core.
May 13 04:27:06.987225 systemd[1]: Started session-27.scope - Session 27 of User core.
May 13 04:27:07.733904 sshd[6072]: pam_unix(sshd:session): session closed for user core
May 13 04:27:07.741178 systemd[1]: sshd@24-172.24.4.57:22-172.24.4.1:39972.service: Deactivated successfully.
May 13 04:27:07.749899 systemd[1]: session-27.scope: Deactivated successfully.
May 13 04:27:07.760722 systemd-logind[1439]: Session 27 logged out. Waiting for processes to exit.
May 13 04:27:07.764441 systemd-logind[1439]: Removed session 27.
May 13 04:27:12.766941 systemd[1]: Started sshd@25-172.24.4.57:22-172.24.4.1:39980.service - OpenSSH per-connection server daemon (172.24.4.1:39980).
May 13 04:27:13.941509 sshd[6085]: Accepted publickey for core from 172.24.4.1 port 39980 ssh2: RSA SHA256:SaG5MESIv/g0oWPZSlhItfSVTW88TTmUIzdugBL9u+Y
May 13 04:27:13.945814 sshd[6085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 04:27:13.958878 systemd-logind[1439]: New session 28 of user core.
May 13 04:27:13.967345 systemd[1]: Started session-28.scope - Session 28 of User core.
May 13 04:27:14.715428 sshd[6085]: pam_unix(sshd:session): session closed for user core
May 13 04:27:14.723564 systemd[1]: sshd@25-172.24.4.57:22-172.24.4.1:39980.service: Deactivated successfully.
May 13 04:27:14.729904 systemd[1]: session-28.scope: Deactivated successfully.
May 13 04:27:14.736359 systemd-logind[1439]: Session 28 logged out. Waiting for processes to exit.
May 13 04:27:14.741276 systemd-logind[1439]: Removed session 28.